Jan 14 01:10:01.466238 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:26:24 -00 2026 Jan 14 01:10:01.466270 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:10:01.466284 kernel: BIOS-provided physical RAM map: Jan 14 01:10:01.466292 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 14 01:10:01.466299 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 14 01:10:01.466307 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jan 14 01:10:01.466317 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jan 14 01:10:01.466325 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jan 14 01:10:01.466332 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jan 14 01:10:01.466344 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 14 01:10:01.466352 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 14 01:10:01.466359 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 14 01:10:01.466367 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 14 01:10:01.466375 kernel: printk: legacy bootconsole [earlyser0] enabled Jan 14 01:10:01.466384 kernel: NX (Execute Disable) protection: active Jan 14 01:10:01.466397 kernel: APIC: Static calls initialized Jan 14 01:10:01.466405 kernel: efi: EFI v2.7 by Microsoft Jan 14 01:10:01.466413 kernel: efi: 
ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e99f698 RNG=0x3ffd2018 Jan 14 01:10:01.466422 kernel: random: crng init done Jan 14 01:10:01.466430 kernel: secureboot: Secure boot disabled Jan 14 01:10:01.466439 kernel: SMBIOS 3.1.0 present. Jan 14 01:10:01.466448 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Jan 14 01:10:01.466457 kernel: DMI: Memory slots populated: 2/2 Jan 14 01:10:01.466467 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 01:10:01.466475 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jan 14 01:10:01.466486 kernel: Hyper-V: Nested features: 0x3e0101 Jan 14 01:10:01.466494 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 01:10:01.466501 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 01:10:01.466509 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 01:10:01.466517 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 01:10:01.466524 kernel: tsc: Detected 2300.000 MHz processor Jan 14 01:10:01.466531 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 01:10:01.466540 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 01:10:01.466548 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jan 14 01:10:01.466559 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 01:10:01.466568 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 01:10:01.466577 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jan 14 01:10:01.466585 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jan 14 01:10:01.466594 kernel: Using GB pages for direct mapping Jan 14 01:10:01.466603 kernel: ACPI: Early table checksum verification 
disabled Jan 14 01:10:01.466617 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 01:10:01.466626 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:10:01.466636 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:10:01.466645 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 14 01:10:01.466654 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 01:10:01.466664 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:10:01.466675 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:10:01.466684 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:10:01.466693 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 14 01:10:01.466703 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 14 01:10:01.466712 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:10:01.466722 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 01:10:01.466733 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 14 01:10:01.466742 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 01:10:01.466751 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 01:10:01.466761 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 01:10:01.466770 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 01:10:01.466779 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 01:10:01.466788 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jan 14 01:10:01.466799 kernel: ACPI: Reserving BGRT table 
memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 01:10:01.466809 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 14 01:10:01.466818 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jan 14 01:10:01.466828 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jan 14 01:10:01.466837 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jan 14 01:10:01.466846 kernel: Zone ranges: Jan 14 01:10:01.466856 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 01:10:01.466867 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 01:10:01.466876 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 01:10:01.466886 kernel: Device empty Jan 14 01:10:01.466895 kernel: Movable zone start for each node Jan 14 01:10:01.466904 kernel: Early memory node ranges Jan 14 01:10:01.466913 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 01:10:01.466922 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jan 14 01:10:01.466933 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jan 14 01:10:01.466942 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 01:10:01.466951 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 01:10:01.466959 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 01:10:01.466968 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 01:10:01.466998 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 01:10:01.467008 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 14 01:10:01.467019 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 14 01:10:01.467028 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 01:10:01.467038 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 01:10:01.467047 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 14 01:10:01.467057 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 01:10:01.467066 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 01:10:01.467074 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 01:10:01.467084 kernel: TSC deadline timer available Jan 14 01:10:01.467095 kernel: CPU topo: Max. logical packages: 1 Jan 14 01:10:01.467104 kernel: CPU topo: Max. logical dies: 1 Jan 14 01:10:01.467113 kernel: CPU topo: Max. dies per package: 1 Jan 14 01:10:01.467122 kernel: CPU topo: Max. threads per core: 2 Jan 14 01:10:01.467132 kernel: CPU topo: Num. cores per package: 1 Jan 14 01:10:01.467141 kernel: CPU topo: Num. threads per package: 2 Jan 14 01:10:01.467150 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 14 01:10:01.467162 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 01:10:01.467172 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 01:10:01.467181 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 01:10:01.467191 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 01:10:01.467201 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 14 01:10:01.467210 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 14 01:10:01.467219 kernel: pcpu-alloc: [0] 0 1 Jan 14 01:10:01.467230 kernel: Hyper-V: PV spinlocks enabled Jan 14 01:10:01.467240 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 01:10:01.467251 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin 
verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:10:01.467261 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 01:10:01.467270 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 01:10:01.467280 kernel: Fallback order for Node 0: 0 Jan 14 01:10:01.467291 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jan 14 01:10:01.467300 kernel: Policy zone: Normal Jan 14 01:10:01.467309 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 01:10:01.467319 kernel: software IO TLB: area num 2. Jan 14 01:10:01.467328 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 01:10:01.467337 kernel: ftrace: allocating 40128 entries in 157 pages Jan 14 01:10:01.467347 kernel: ftrace: allocated 157 pages with 5 groups Jan 14 01:10:01.467356 kernel: Dynamic Preempt: voluntary Jan 14 01:10:01.467367 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 01:10:01.467378 kernel: rcu: RCU event tracing is enabled. Jan 14 01:10:01.467395 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 01:10:01.467407 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 01:10:01.467417 kernel: Rude variant of Tasks RCU enabled. Jan 14 01:10:01.467427 kernel: Tracing variant of Tasks RCU enabled. Jan 14 01:10:01.467438 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 01:10:01.467448 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 01:10:01.467458 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:10:01.467470 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:10:01.467480 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 14 01:10:01.467489 kernel: Using NULL legacy PIC Jan 14 01:10:01.467499 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 01:10:01.467510 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 01:10:01.467520 kernel: Console: colour dummy device 80x25 Jan 14 01:10:01.467530 kernel: printk: legacy console [tty1] enabled Jan 14 01:10:01.467540 kernel: printk: legacy console [ttyS0] enabled Jan 14 01:10:01.467550 kernel: printk: legacy bootconsole [earlyser0] disabled Jan 14 01:10:01.467560 kernel: ACPI: Core revision 20240827 Jan 14 01:10:01.467570 kernel: Failed to register legacy timer interrupt Jan 14 01:10:01.467582 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 01:10:01.467592 kernel: x2apic enabled Jan 14 01:10:01.467602 kernel: APIC: Switched APIC routing to: physical x2apic Jan 14 01:10:01.467612 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 14 01:10:01.467622 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 01:10:01.467632 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jan 14 01:10:01.467643 kernel: Hyper-V: Using IPI hypercalls Jan 14 01:10:01.467654 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 01:10:01.467664 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 01:10:01.467675 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 01:10:01.467684 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 01:10:01.467694 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 01:10:01.467703 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 01:10:01.467714 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jan 14 01:10:01.467726 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
4600.00 BogoMIPS (lpj=2300000) Jan 14 01:10:01.467736 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 14 01:10:01.467746 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 14 01:10:01.467757 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 14 01:10:01.467767 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 01:10:01.467777 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 01:10:01.467786 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 14 01:10:01.467796 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 14 01:10:01.467808 kernel: RETBleed: Vulnerable Jan 14 01:10:01.467817 kernel: Speculative Store Bypass: Vulnerable Jan 14 01:10:01.467827 kernel: active return thunk: its_return_thunk Jan 14 01:10:01.467837 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 14 01:10:01.467846 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 01:10:01.467856 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 01:10:01.467866 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 01:10:01.467876 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 01:10:01.467886 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 01:10:01.467895 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 01:10:01.467907 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jan 14 01:10:01.467917 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jan 14 01:10:01.467927 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jan 14 01:10:01.467936 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 01:10:01.467946 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 01:10:01.467956 
kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 01:10:01.467966 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 01:10:01.467995 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jan 14 01:10:01.468005 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jan 14 01:10:01.468015 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jan 14 01:10:01.468025 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jan 14 01:10:01.468037 kernel: Freeing SMP alternatives memory: 32K Jan 14 01:10:01.468047 kernel: pid_max: default: 32768 minimum: 301 Jan 14 01:10:01.468057 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 01:10:01.468066 kernel: landlock: Up and running. Jan 14 01:10:01.468076 kernel: SELinux: Initializing. Jan 14 01:10:01.468086 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:10:01.468096 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:10:01.468106 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jan 14 01:10:01.468115 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jan 14 01:10:01.468126 kernel: signal: max sigframe size: 11952 Jan 14 01:10:01.468138 kernel: rcu: Hierarchical SRCU implementation. Jan 14 01:10:01.468148 kernel: rcu: Max phase no-delay instances is 400. Jan 14 01:10:01.468159 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 01:10:01.468170 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 01:10:01.468180 kernel: smp: Bringing up secondary CPUs ... Jan 14 01:10:01.468190 kernel: smpboot: x86: Booting SMP configuration: Jan 14 01:10:01.468200 kernel: .... 
node #0, CPUs: #1 Jan 14 01:10:01.468213 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 01:10:01.468223 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jan 14 01:10:01.468234 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 283604K reserved, 0K cma-reserved) Jan 14 01:10:01.468245 kernel: devtmpfs: initialized Jan 14 01:10:01.468255 kernel: x86/mm: Memory block size: 128MB Jan 14 01:10:01.468265 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 01:10:01.468276 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 01:10:01.468288 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 01:10:01.468298 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 01:10:01.468309 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 01:10:01.468319 kernel: audit: initializing netlink subsys (disabled) Jan 14 01:10:01.468329 kernel: audit: type=2000 audit(1768352995.111:1): state=initialized audit_enabled=0 res=1 Jan 14 01:10:01.468339 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 01:10:01.468349 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 01:10:01.468362 kernel: cpuidle: using governor menu Jan 14 01:10:01.468372 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 01:10:01.468382 kernel: dca service started, version 1.12.1 Jan 14 01:10:01.468392 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jan 14 01:10:01.468403 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 14 01:10:01.468413 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 01:10:01.468423 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 01:10:01.468435 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 01:10:01.468445 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 01:10:01.468458 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 01:10:01.468469 kernel: ACPI: Added _OSI(Module Device) Jan 14 01:10:01.468479 kernel: ACPI: Added _OSI(Processor Device) Jan 14 01:10:01.468488 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 01:10:01.468499 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 01:10:01.468510 kernel: ACPI: Interpreter enabled Jan 14 01:10:01.468521 kernel: ACPI: PM: (supports S0 S5) Jan 14 01:10:01.468531 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 01:10:01.468541 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 01:10:01.468551 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 01:10:01.468561 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 01:10:01.468570 kernel: iommu: Default domain type: Translated Jan 14 01:10:01.468581 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 01:10:01.468592 kernel: efivars: Registered efivars operations Jan 14 01:10:01.468602 kernel: PCI: Using ACPI for IRQ routing Jan 14 01:10:01.468612 kernel: PCI: System does not support PCI Jan 14 01:10:01.468622 kernel: vgaarb: loaded Jan 14 01:10:01.468633 kernel: clocksource: Switched to clocksource tsc-early Jan 14 01:10:01.468643 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 01:10:01.468653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 01:10:01.468665 kernel: pnp: PnP ACPI init Jan 14 01:10:01.468674 kernel: pnp: PnP ACPI: found 3 devices Jan 14 01:10:01.468685 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 
01:10:01.468695 kernel: NET: Registered PF_INET protocol family Jan 14 01:10:01.468705 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 01:10:01.468716 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 01:10:01.468726 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 01:10:01.468738 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 01:10:01.468749 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 01:10:01.468759 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 01:10:01.468769 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 01:10:01.468779 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 01:10:01.468790 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 01:10:01.468800 kernel: NET: Registered PF_XDP protocol family Jan 14 01:10:01.468812 kernel: PCI: CLS 0 bytes, default 64 Jan 14 01:10:01.468822 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 01:10:01.468833 kernel: software IO TLB: mapped [mem 0x000000003a99f000-0x000000003e99f000] (64MB) Jan 14 01:10:01.468843 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jan 14 01:10:01.468854 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jan 14 01:10:01.468864 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jan 14 01:10:01.468874 kernel: clocksource: Switched to clocksource tsc Jan 14 01:10:01.468886 kernel: Initialise system trusted keyrings Jan 14 01:10:01.468896 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 01:10:01.468906 kernel: Key type asymmetric registered Jan 14 01:10:01.468916 kernel: Asymmetric key parser 'x509' registered Jan 14 01:10:01.468927 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 250) Jan 14 01:10:01.468936 kernel: io scheduler mq-deadline registered Jan 14 01:10:01.468946 kernel: io scheduler kyber registered Jan 14 01:10:01.468958 kernel: io scheduler bfq registered Jan 14 01:10:01.468969 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 01:10:01.468997 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 01:10:01.469006 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:10:01.469015 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 01:10:01.469024 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:10:01.469032 kernel: i8042: PNP: No PS/2 controller found. Jan 14 01:10:01.469204 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 01:10:01.469315 kernel: rtc_cmos 00:02: setting system clock to 2026-01-14T01:09:57 UTC (1768352997) Jan 14 01:10:01.469418 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 01:10:01.469429 kernel: intel_pstate: Intel P-state driver initializing Jan 14 01:10:01.469439 kernel: efifb: probing for efifb Jan 14 01:10:01.469449 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 01:10:01.469460 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 01:10:01.469470 kernel: efifb: scrolling: redraw Jan 14 01:10:01.469480 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 01:10:01.469490 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 01:10:01.469500 kernel: fb0: EFI VGA frame buffer device Jan 14 01:10:01.469510 kernel: pstore: Using crash dump compression: deflate Jan 14 01:10:01.469520 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 01:10:01.469531 kernel: NET: Registered PF_INET6 protocol family Jan 14 01:10:01.469544 kernel: Segment Routing with IPv6 Jan 14 01:10:01.469553 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 
01:10:01.469562 kernel: NET: Registered PF_PACKET protocol family Jan 14 01:10:01.469572 kernel: Key type dns_resolver registered Jan 14 01:10:01.469582 kernel: IPI shorthand broadcast: enabled Jan 14 01:10:01.469592 kernel: sched_clock: Marking stable (2365038563, 124338060)->(2889497528, -400120905) Jan 14 01:10:01.469602 kernel: registered taskstats version 1 Jan 14 01:10:01.469614 kernel: Loading compiled-in X.509 certificates Jan 14 01:10:01.469624 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e43fcdb17feb86efe6ca4b76910b93467fb95f4f' Jan 14 01:10:01.469633 kernel: Demotion targets for Node 0: null Jan 14 01:10:01.469643 kernel: Key type .fscrypt registered Jan 14 01:10:01.469652 kernel: Key type fscrypt-provisioning registered Jan 14 01:10:01.469661 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 01:10:01.469671 kernel: ima: Allocated hash algorithm: sha1 Jan 14 01:10:01.469683 kernel: ima: No architecture policies found Jan 14 01:10:01.469693 kernel: clk: Disabling unused clocks Jan 14 01:10:01.469703 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 01:10:01.469713 kernel: Write protecting the kernel read-only data: 47104k Jan 14 01:10:01.469723 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 01:10:01.469732 kernel: Run /init as init process Jan 14 01:10:01.469742 kernel: with arguments: Jan 14 01:10:01.469753 kernel: /init Jan 14 01:10:01.469762 kernel: with environment: Jan 14 01:10:01.469770 kernel: HOME=/ Jan 14 01:10:01.469780 kernel: TERM=linux Jan 14 01:10:01.469790 kernel: hv_vmbus: Vmbus version:5.3 Jan 14 01:10:01.469800 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 01:10:01.469811 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 01:10:01.469820 kernel: PTP clock support registered Jan 14 01:10:01.469831 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 01:10:01.469840 kernel: hv_vmbus: registering driver hv_utils Jan 14 01:10:01.469850 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 01:10:01.469859 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 01:10:01.469868 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 01:10:01.469878 kernel: SCSI subsystem initialized Jan 14 01:10:01.469888 kernel: hv_vmbus: registering driver hv_pci Jan 14 01:10:01.470072 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 14 01:10:01.470227 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 14 01:10:01.470358 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 14 01:10:01.470473 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 01:10:01.470619 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 14 01:10:01.470747 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 14 01:10:01.470864 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 01:10:01.471010 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 14 01:10:01.471022 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 01:10:01.471154 kernel: scsi host0: storvsc_host_t Jan 14 01:10:01.471292 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 14 01:10:01.471305 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 01:10:01.471315 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 01:10:01.471324 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 14 01:10:01.471442 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] 
on Jan 14 01:10:01.471456 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 01:10:01.471469 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 14 01:10:01.471582 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 14 01:10:01.471713 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 14 01:10:01.471792 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 14 01:10:01.471801 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 14 01:10:01.471890 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 01:10:01.471898 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 01:10:01.472014 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 01:10:01.472023 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 01:10:01.472029 kernel: device-mapper: uevent: version 1.0.3 Jan 14 01:10:01.472036 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 01:10:01.472042 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 01:10:01.472059 kernel: raid6: avx512x4 gen() 44503 MB/s Jan 14 01:10:01.472067 kernel: raid6: avx512x2 gen() 43456 MB/s Jan 14 01:10:01.472074 kernel: raid6: avx512x1 gen() 25261 MB/s Jan 14 01:10:01.472080 kernel: raid6: avx2x4 gen() 34404 MB/s Jan 14 01:10:01.472087 kernel: raid6: avx2x2 gen() 36319 MB/s Jan 14 01:10:01.472093 kernel: raid6: avx2x1 gen() 30020 MB/s Jan 14 01:10:01.472099 kernel: raid6: using algorithm avx512x4 gen() 44503 MB/s Jan 14 01:10:01.472107 kernel: raid6: .... 
xor() 7420 MB/s, rmw enabled Jan 14 01:10:01.472114 kernel: raid6: using avx512x2 recovery algorithm Jan 14 01:10:01.472121 kernel: xor: automatically using best checksumming function avx Jan 14 01:10:01.472127 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 01:10:01.472133 kernel: BTRFS: device fsid cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (895) Jan 14 01:10:01.472140 kernel: BTRFS info (device dm-0): first mount of filesystem cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 Jan 14 01:10:01.472147 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:10:01.472155 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 14 01:10:01.472161 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 01:10:01.472168 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 01:10:01.472174 kernel: loop: module loaded Jan 14 01:10:01.472181 kernel: loop0: detected capacity change from 0 to 100544 Jan 14 01:10:01.472187 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 01:10:01.472195 systemd[1]: Successfully made /usr/ read-only. Jan 14 01:10:01.472206 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:10:01.472213 systemd[1]: Detected virtualization microsoft. Jan 14 01:10:01.472221 systemd[1]: Detected architecture x86-64. Jan 14 01:10:01.472228 systemd[1]: Running in initrd. Jan 14 01:10:01.472235 systemd[1]: No hostname configured, using default hostname. Jan 14 01:10:01.472242 systemd[1]: Hostname set to . Jan 14 01:10:01.472250 systemd[1]: Initializing machine ID from random generator. 
Jan 14 01:10:01.472257 systemd[1]: Queued start job for default target initrd.target.
Jan 14 01:10:01.472264 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 01:10:01.472270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 01:10:01.472277 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 01:10:01.472285 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 01:10:01.472293 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 01:10:01.472301 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 01:10:01.472307 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 01:10:01.472317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 01:10:01.472324 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 01:10:01.472331 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 01:10:01.472337 systemd[1]: Reached target paths.target - Path Units.
Jan 14 01:10:01.472344 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 01:10:01.472351 systemd[1]: Reached target swap.target - Swaps.
Jan 14 01:10:01.472357 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 01:10:01.472365 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 01:10:01.472372 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 01:10:01.472379 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 01:10:01.472386 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 01:10:01.472393 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 01:10:01.472399 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 01:10:01.472406 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 01:10:01.472414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 01:10:01.472421 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 01:10:01.472428 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 01:10:01.472435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 01:10:01.472442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 01:10:01.472449 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 01:10:01.472456 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 01:10:01.472464 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 01:10:01.472471 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 01:10:01.472478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 01:10:01.472485 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:10:01.472493 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 01:10:01.472500 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 01:10:01.472507 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 01:10:01.472527 systemd-journald[1029]: Collecting audit messages is enabled.
Jan 14 01:10:01.472546 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 01:10:01.472554 systemd-journald[1029]: Journal started
Jan 14 01:10:01.472571 systemd-journald[1029]: Runtime Journal (/run/log/journal/ad25a7de541849619bd8df4102df39a0) is 8M, max 158.5M, 150.5M free.
Jan 14 01:10:01.481989 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 01:10:01.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.485082 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 01:10:01.488097 kernel: audit: type=1130 audit(1768353001.480:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.603053 systemd-tmpfiles[1044]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 01:10:01.605462 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 01:10:01.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.611985 kernel: audit: type=1130 audit(1768353001.604:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.616108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 01:10:01.627703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 01:10:01.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.635076 kernel: audit: type=1130 audit(1768353001.630:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.635169 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 01:10:01.648850 systemd-modules-load[1034]: Inserted module 'br_netfilter'
Jan 14 01:10:01.649356 kernel: Bridge firewalling registered
Jan 14 01:10:01.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.650379 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 01:10:01.654992 kernel: audit: type=1130 audit(1768353001.650:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.655487 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 01:10:01.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.659990 kernel: audit: type=1130 audit(1768353001.655:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.660876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 01:10:01.677198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:10:01.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.685243 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 01:10:01.686141 kernel: audit: type=1130 audit(1768353001.679:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.691990 kernel: audit: type=1130 audit(1768353001.688:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.693101 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 01:10:01.704000 audit: BPF prog-id=6 op=LOAD
Jan 14 01:10:01.705848 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 01:10:01.708110 kernel: audit: type=1334 audit(1768353001.704:9): prog-id=6 op=LOAD
Jan 14 01:10:01.721352 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 01:10:01.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.735006 kernel: audit: type=1130 audit(1768353001.728:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.737089 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 01:10:01.806038 systemd-resolved[1059]: Positive Trust Anchors:
Jan 14 01:10:01.806400 systemd-resolved[1059]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 01:10:01.806404 systemd-resolved[1059]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 01:10:01.811894 dracut-cmdline[1071]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260
Jan 14 01:10:01.806440 systemd-resolved[1059]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 01:10:01.874395 systemd-resolved[1059]: Defaulting to hostname 'linux'.
Jan 14 01:10:01.875477 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 01:10:01.887325 kernel: audit: type=1130 audit(1768353001.880:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:01.881131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:10:01.988992 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 01:10:02.087996 kernel: iscsi: registered transport (tcp)
Jan 14 01:10:02.141287 kernel: iscsi: registered transport (qla4xxx)
Jan 14 01:10:02.141356 kernel: QLogic iSCSI HBA Driver
Jan 14 01:10:02.192687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 01:10:02.216871 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 01:10:02.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.221422 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 01:10:02.254017 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 01:10:02.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.257318 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 01:10:02.263092 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 01:10:02.293513 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 01:10:02.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.297000 audit: BPF prog-id=7 op=LOAD
Jan 14 01:10:02.297000 audit: BPF prog-id=8 op=LOAD
Jan 14 01:10:02.298610 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 01:10:02.329576 systemd-udevd[1320]: Using default interface naming scheme 'v257'.
Jan 14 01:10:02.342445 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 01:10:02.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.350514 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 01:10:02.359208 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 01:10:02.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.363000 audit: BPF prog-id=9 op=LOAD
Jan 14 01:10:02.366008 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 01:10:02.375632 dracut-pre-trigger[1403]: rd.md=0: removing MD RAID activation
Jan 14 01:10:02.403860 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 01:10:02.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.411092 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 01:10:02.418291 systemd-networkd[1409]: lo: Link UP
Jan 14 01:10:02.418298 systemd-networkd[1409]: lo: Gained carrier
Jan 14 01:10:02.420013 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 01:10:02.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.423272 systemd[1]: Reached target network.target - Network.
Jan 14 01:10:02.456121 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 01:10:02.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.462857 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 01:10:02.549777 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 01:10:02.549898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:10:02.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.555645 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:10:02.563232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:10:02.576043 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 01:10:02.590071 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd1808a (unnamed net_device) (uninitialized): VF slot 1 added
Jan 14 01:10:02.590459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 01:10:02.592949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:10:02.601042 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 01:10:02.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.609899 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 14 01:10:02.609051 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:10:02.620501 systemd-networkd[1409]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:10:02.620509 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 01:10:02.621907 systemd-networkd[1409]: eth0: Link UP
Jan 14 01:10:02.622059 systemd-networkd[1409]: eth0: Gained carrier
Jan 14 01:10:02.622072 systemd-networkd[1409]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:10:02.640961 systemd-networkd[1409]: eth0: DHCPv4 address 10.200.4.37/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 14 01:10:02.643075 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:10:02.648404 kernel: AES CTR mode by8 optimization enabled
Jan 14 01:10:02.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:02.772004 kernel: nvme nvme0: using unchecked data buffer
Jan 14 01:10:02.869232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Jan 14 01:10:02.872784 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 01:10:02.974004 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jan 14 01:10:03.005904 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Jan 14 01:10:03.031726 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Jan 14 01:10:03.120185 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 01:10:03.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:03.121871 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:10:03.127046 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:10:03.128311 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 01:10:03.139421 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 01:10:03.178451 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 01:10:03.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:03.614325 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Jan 14 01:10:03.614592 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Jan 14 01:10:03.617278 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Jan 14 01:10:03.618833 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 01:10:03.624134 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Jan 14 01:10:03.628016 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Jan 14 01:10:03.633141 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Jan 14 01:10:03.635122 kernel: pci 7870:00:00.0: enabling Extended Tags
Jan 14 01:10:03.649919 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 01:10:03.650137 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Jan 14 01:10:03.654015 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Jan 14 01:10:03.673874 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Jan 14 01:10:03.684987 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Jan 14 01:10:03.686989 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd1808a eth0: VF registering: eth1
Jan 14 01:10:03.687161 kernel: mana 7870:00:00.0 eth1: joined to eth0
Jan 14 01:10:03.692446 systemd-networkd[1409]: eth1: Interface name change detected, renamed to enP30832s1.
Jan 14 01:10:03.696107 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Jan 14 01:10:03.792999 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jan 14 01:10:03.796662 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jan 14 01:10:03.796965 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd1808a eth0: Data path switched to VF: enP30832s1
Jan 14 01:10:03.797319 systemd-networkd[1409]: enP30832s1: Link UP
Jan 14 01:10:03.798282 systemd-networkd[1409]: enP30832s1: Gained carrier
Jan 14 01:10:04.134164 systemd-networkd[1409]: eth0: Gained IPv6LL
Jan 14 01:10:04.182520 disk-uuid[1590]: Warning: The kernel is still using the old partition table.
Jan 14 01:10:04.182520 disk-uuid[1590]: The new table will be used at the next reboot or after you
Jan 14 01:10:04.182520 disk-uuid[1590]: run partprobe(8) or kpartx(8)
Jan 14 01:10:04.182520 disk-uuid[1590]: The operation has completed successfully.
Jan 14 01:10:04.193829 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 01:10:04.193938 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 01:10:04.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:04.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:04.199935 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 01:10:04.262996 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1637)
Jan 14 01:10:04.266034 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:10:04.266066 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:10:04.295371 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 14 01:10:04.295466 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 14 01:10:04.296374 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 14 01:10:04.302777 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 01:10:04.303194 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:10:04.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:04.307264 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 01:10:05.363217 ignition[1656]: Ignition 2.24.0
Jan 14 01:10:05.363231 ignition[1656]: Stage: fetch-offline
Jan 14 01:10:05.367273 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:10:05.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:05.363467 ignition[1656]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:10:05.371231 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 01:10:05.363477 ignition[1656]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 01:10:05.364651 ignition[1656]: parsed url from cmdline: ""
Jan 14 01:10:05.364657 ignition[1656]: no config URL provided
Jan 14 01:10:05.364664 ignition[1656]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 01:10:05.364679 ignition[1656]: no config at "/usr/lib/ignition/user.ign"
Jan 14 01:10:05.364689 ignition[1656]: failed to fetch config: resource requires networking
Jan 14 01:10:05.365172 ignition[1656]: Ignition finished successfully
Jan 14 01:10:05.402657 ignition[1663]: Ignition 2.24.0
Jan 14 01:10:05.402668 ignition[1663]: Stage: fetch
Jan 14 01:10:05.402881 ignition[1663]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:10:05.402889 ignition[1663]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 01:10:05.402960 ignition[1663]: parsed url from cmdline: ""
Jan 14 01:10:05.402963 ignition[1663]: no config URL provided
Jan 14 01:10:05.402968 ignition[1663]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 01:10:05.403000 ignition[1663]: no config at "/usr/lib/ignition/user.ign"
Jan 14 01:10:05.403022 ignition[1663]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 01:10:05.497151 ignition[1663]: GET result: OK
Jan 14 01:10:05.497233 ignition[1663]: config has been read from IMDS userdata
Jan 14 01:10:05.497262 ignition[1663]: parsing config with SHA512: 0e1664b85318700fe577ea0ffd41a6b2bfada39637d244d06f70a23f5120f517d1a2f47637f14d0045d003c128f017515ca279e267e54f25b4b48099d4759176
Jan 14 01:10:05.503164 unknown[1663]: fetched base config from "system"
Jan 14 01:10:05.503174 unknown[1663]: fetched base config from "system"
Jan 14 01:10:05.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:05.503507 ignition[1663]: fetch: fetch complete
Jan 14 01:10:05.503179 unknown[1663]: fetched user config from "azure"
Jan 14 01:10:05.503512 ignition[1663]: fetch: fetch passed
Jan 14 01:10:05.505512 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 01:10:05.503553 ignition[1663]: Ignition finished successfully
Jan 14 01:10:05.510363 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 01:10:05.536449 ignition[1669]: Ignition 2.24.0
Jan 14 01:10:05.536461 ignition[1669]: Stage: kargs
Jan 14 01:10:05.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:05.538712 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 01:10:05.536688 ignition[1669]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:10:05.544100 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 01:10:05.536696 ignition[1669]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 01:10:05.537498 ignition[1669]: kargs: kargs passed
Jan 14 01:10:05.537530 ignition[1669]: Ignition finished successfully
Jan 14 01:10:05.564992 ignition[1675]: Ignition 2.24.0
Jan 14 01:10:05.564999 ignition[1675]: Stage: disks
Jan 14 01:10:05.565235 ignition[1675]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:10:05.567354 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 01:10:05.565243 ignition[1675]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 01:10:05.566186 ignition[1675]: disks: disks passed
Jan 14 01:10:05.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:05.566223 ignition[1675]: Ignition finished successfully
Jan 14 01:10:05.575447 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 01:10:05.578466 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 01:10:05.584421 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 01:10:05.587364 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 01:10:05.591558 systemd[1]: Reached target basic.target - Basic System.
Jan 14 01:10:05.597549 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 01:10:05.678261 systemd-fsck[1683]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks
Jan 14 01:10:05.684202 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 01:10:05.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:05.688499 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 01:10:06.056004 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9c98b0a3-27fc-41c4-a169-349b38bd9ceb r/w with ordered data mode. Quota mode: none.
Jan 14 01:10:06.056785 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 01:10:06.057832 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 01:10:06.093165 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:10:06.097524 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 01:10:06.105118 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 01:10:06.109264 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 01:10:06.109722 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:10:06.118528 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1692)
Jan 14 01:10:06.118555 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:10:06.121019 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:10:06.122318 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 01:10:06.125565 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 01:10:06.133491 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 14 01:10:06.133528 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 14 01:10:06.133541 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 14 01:10:06.135800 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:10:06.693216 coreos-metadata[1694]: Jan 14 01:10:06.692 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 01:10:06.701959 coreos-metadata[1694]: Jan 14 01:10:06.701 INFO Fetch successful
Jan 14 01:10:06.701959 coreos-metadata[1694]: Jan 14 01:10:06.701 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 01:10:06.714720 coreos-metadata[1694]: Jan 14 01:10:06.714 INFO Fetch successful
Jan 14 01:10:06.732279 coreos-metadata[1694]: Jan 14 01:10:06.732 INFO wrote hostname ci-4578.0.0-p-4dd79cf71d to /sysroot/etc/hostname
Jan 14 01:10:06.735427 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 01:10:06.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:06.740188 kernel: kauditd_printk_skb: 25 callbacks suppressed
Jan 14 01:10:06.740216 kernel: audit: type=1130 audit(1768353006.737:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:07.806249 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 01:10:07.815106 kernel: audit: type=1130 audit(1768353007.808:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:07.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:07.814444 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 01:10:07.833154 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 01:10:07.859930 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 01:10:07.862101 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:10:07.876079 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 01:10:07.886120 kernel: audit: type=1130 audit(1768353007.877:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:07.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:07.889787 ignition[1797]: INFO : Ignition 2.24.0
Jan 14 01:10:07.889787 ignition[1797]: INFO : Stage: mount
Jan 14 01:10:07.891982 ignition[1797]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:10:07.891982 ignition[1797]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 01:10:07.891982 ignition[1797]: INFO : mount: mount passed
Jan 14 01:10:07.891982 ignition[1797]: INFO : Ignition finished successfully
Jan 14 01:10:07.904869 kernel: audit: type=1130 audit(1768353007.894:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:07.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:07.893827 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 01:10:07.901140 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 01:10:07.918570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:10:07.942990 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1806)
Jan 14 01:10:07.945183 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7
Jan 14 01:10:07.945291 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:10:07.950441 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 14 01:10:07.950476 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 14 01:10:07.950489 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 14 01:10:07.952808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:10:07.983083 ignition[1823]: INFO : Ignition 2.24.0 Jan 14 01:10:07.983083 ignition[1823]: INFO : Stage: files Jan 14 01:10:07.986687 ignition[1823]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:10:07.986687 ignition[1823]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:10:07.986687 ignition[1823]: DEBUG : files: compiled without relabeling support, skipping Jan 14 01:10:07.999634 ignition[1823]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 01:10:07.999634 ignition[1823]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 01:10:08.062628 ignition[1823]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 01:10:08.066092 ignition[1823]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 01:10:08.066092 ignition[1823]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 01:10:08.062934 unknown[1823]: wrote ssh authorized keys file for user: core Jan 14 01:10:08.093403 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:10:08.095581 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 14 01:10:08.157609 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 01:10:08.216319 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:10:08.220059 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 01:10:08.220059 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 14 01:10:08.220059 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:10:08.220059 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:10:08.220059 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:10:08.220059 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:10:08.220059 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:10:08.220059 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:10:08.237920 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:10:08.237920 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:10:08.237920 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:10:08.237920 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:10:08.237920 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:10:08.237920 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 14 01:10:08.691528 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 01:10:08.905957 ignition[1823]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 01:10:08.905957 ignition[1823]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 01:10:08.949891 ignition[1823]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:10:08.956634 ignition[1823]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:10:08.956634 ignition[1823]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 01:10:08.956634 ignition[1823]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 01:10:08.964632 ignition[1823]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 01:10:08.964632 ignition[1823]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:10:08.964632 ignition[1823]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:10:08.964632 ignition[1823]: INFO : files: files passed Jan 14 01:10:08.964632 ignition[1823]: INFO : Ignition finished successfully Jan 14 01:10:08.985193 kernel: audit: type=1130 audit(1768353008.964:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:08.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:08.961534 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 01:10:08.966208 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 01:10:08.976628 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 01:10:08.992723 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 01:10:08.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.006122 initrd-setup-root-after-ignition[1854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:10:09.006122 initrd-setup-root-after-ignition[1854]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:10:09.014067 kernel: audit: type=1130 audit(1768353008.998:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.014092 kernel: audit: type=1131 audit(1768353008.998:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:08.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:08.992819 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 14 01:10:09.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.018916 initrd-setup-root-after-ignition[1858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:10:09.023069 kernel: audit: type=1130 audit(1768353009.014:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.013170 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:10:09.016292 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 01:10:09.022198 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 01:10:09.069346 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 01:10:09.069464 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 01:10:09.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.073397 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 01:10:09.083534 kernel: audit: type=1130 audit(1768353009.072:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.083560 kernel: audit: type=1131 audit(1768353009.072:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:09.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.081361 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 01:10:09.084693 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 01:10:09.085357 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 01:10:09.107282 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:10:09.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.110318 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 01:10:09.127916 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:10:09.128453 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:10:09.133348 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:10:09.135739 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 01:10:09.139112 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 01:10:09.139238 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:10:09.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.143298 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 01:10:09.147140 systemd[1]: Stopped target basic.target - Basic System. 
Jan 14 01:10:09.150266 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 01:10:09.154118 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:10:09.158113 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 01:10:09.162130 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:10:09.164261 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 01:10:09.168115 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:10:09.172135 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 01:10:09.176208 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 01:10:09.180102 systemd[1]: Stopped target swap.target - Swaps. Jan 14 01:10:09.183104 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 01:10:09.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.183245 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:10:09.190058 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:10:09.191307 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:10:09.194049 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 01:10:09.194411 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:10:09.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.196272 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 14 01:10:09.196385 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 01:10:09.205024 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 01:10:09.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.205247 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:10:09.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.208884 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 01:10:09.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.208999 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 01:10:09.213429 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 01:10:09.213540 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 01:10:09.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.218602 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 01:10:09.222994 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 01:10:09.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:09.223132 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:10:09.229174 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 01:10:09.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.229920 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 01:10:09.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.230179 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:10:09.233612 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 01:10:09.233725 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:10:09.240434 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 01:10:09.240552 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:10:09.260895 ignition[1878]: INFO : Ignition 2.24.0 Jan 14 01:10:09.260895 ignition[1878]: INFO : Stage: umount Jan 14 01:10:09.266361 ignition[1878]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:10:09.266361 ignition[1878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:10:09.266361 ignition[1878]: INFO : umount: umount passed Jan 14 01:10:09.266361 ignition[1878]: INFO : Ignition finished successfully Jan 14 01:10:09.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:09.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.265611 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 01:10:09.265699 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 01:10:09.275415 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 01:10:09.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.275868 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 01:10:09.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.281334 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 01:10:09.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.281382 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 01:10:09.287617 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 01:10:09.288299 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 01:10:09.291787 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 01:10:09.292208 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jan 14 01:10:09.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.295957 systemd[1]: Stopped target network.target - Network. Jan 14 01:10:09.299697 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 01:10:09.299784 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:10:09.307908 systemd[1]: Stopped target paths.target - Path Units. Jan 14 01:10:09.311476 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 01:10:09.311895 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:10:09.320468 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 01:10:09.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.323019 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 01:10:09.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.324835 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 01:10:09.324874 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:10:09.325054 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 01:10:09.325078 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:10:09.325297 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 01:10:09.325319 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. 
Jan 14 01:10:09.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.332051 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 01:10:09.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.362000 audit: BPF prog-id=9 op=UNLOAD Jan 14 01:10:09.362000 audit: BPF prog-id=6 op=UNLOAD Jan 14 01:10:09.332102 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 01:10:09.336057 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 01:10:09.336098 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 01:10:09.339170 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 01:10:09.342135 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 01:10:09.343379 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 01:10:09.351277 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 01:10:09.351379 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 01:10:09.358521 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 01:10:09.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.358604 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 14 01:10:09.363555 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 01:10:09.369129 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 01:10:09.369169 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:10:09.373818 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 01:10:09.381024 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 01:10:09.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.381088 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:10:09.385074 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 01:10:09.385127 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:10:09.385500 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 01:10:09.385534 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 01:10:09.385590 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:10:09.411595 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 01:10:09.416106 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:10:09.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.418831 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 14 01:10:09.424276 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd1808a eth0: Data path switched from VF: enP30832s1 Jan 14 01:10:09.424516 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 14 01:10:09.418896 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 01:10:09.429090 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 01:10:09.429128 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:10:09.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.432058 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 01:10:09.432109 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:10:09.437448 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 01:10:09.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.437503 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 14 01:10:09.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.438209 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 01:10:09.438251 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:10:09.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.446297 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 01:10:09.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.448054 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 01:10:09.448113 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:10:09.450124 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 01:10:09.450201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:10:09.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.451630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 14 01:10:09.451673 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:09.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:09.457071 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 01:10:09.457164 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 01:10:09.461528 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 01:10:09.461613 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 01:10:09.466338 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 01:10:09.466420 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:10:09.474286 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:10:09.477039 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 01:10:09.477097 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 01:10:09.482525 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:10:09.513822 systemd[1]: Switching root. Jan 14 01:10:09.687258 systemd-journald[1029]: Journal stopped Jan 14 01:10:14.112967 systemd-journald[1029]: Received SIGTERM from PID 1 (systemd). 
Jan 14 01:10:14.113054 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:10:14.113075 kernel: SELinux: policy capability open_perms=1 Jan 14 01:10:14.113089 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:10:14.113101 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:10:14.113113 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:10:14.113127 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:10:14.113137 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:10:14.113149 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:10:14.113160 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:10:14.113172 systemd[1]: Successfully loaded SELinux policy in 192.455ms. Jan 14 01:10:14.113186 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.438ms. Jan 14 01:10:14.113200 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:10:14.113217 systemd[1]: Detected virtualization microsoft. Jan 14 01:10:14.113230 systemd[1]: Detected architecture x86-64. Jan 14 01:10:14.113243 systemd[1]: Detected first boot. Jan 14 01:10:14.113256 systemd[1]: Hostname set to . Jan 14 01:10:14.113272 systemd[1]: Initializing machine ID from random generator. Jan 14 01:10:14.113284 zram_generator::config[1921]: No configuration found. 
Jan 14 01:10:14.113296 kernel: Guest personality initialized and is inactive Jan 14 01:10:14.113307 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 14 01:10:14.113318 kernel: Initialized host personality Jan 14 01:10:14.113330 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:10:14.113342 systemd[1]: Populated /etc with preset unit settings. Jan 14 01:10:14.113356 kernel: kauditd_printk_skb: 45 callbacks suppressed Jan 14 01:10:14.113369 kernel: audit: type=1334 audit(1768353013.675:92): prog-id=12 op=LOAD Jan 14 01:10:14.113380 kernel: audit: type=1334 audit(1768353013.675:93): prog-id=3 op=UNLOAD Jan 14 01:10:14.113393 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 01:10:14.113407 kernel: audit: type=1334 audit(1768353013.675:94): prog-id=13 op=LOAD Jan 14 01:10:14.113420 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 01:10:14.113433 kernel: audit: type=1334 audit(1768353013.675:95): prog-id=14 op=LOAD Jan 14 01:10:14.113447 kernel: audit: type=1334 audit(1768353013.675:96): prog-id=4 op=UNLOAD Jan 14 01:10:14.113458 kernel: audit: type=1334 audit(1768353013.675:97): prog-id=5 op=UNLOAD Jan 14 01:10:14.113470 kernel: audit: type=1131 audit(1768353013.676:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.113481 kernel: audit: type=1334 audit(1768353013.692:99): prog-id=12 op=UNLOAD Jan 14 01:10:14.113494 kernel: audit: type=1130 audit(1768353013.702:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.113509 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 14 01:10:14.113521 kernel: audit: type=1131 audit(1768353013.702:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.113535 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:10:14.113546 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 01:10:14.113560 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:10:14.113571 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:10:14.113584 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 01:10:14.113595 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:10:14.113606 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:10:14.113617 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:10:14.113628 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:10:14.113639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:10:14.113652 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 01:10:14.113663 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 01:10:14.113675 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:10:14.113687 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:10:14.113699 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 14 01:10:14.113711 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:10:14.113724 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:10:14.113735 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:10:14.113746 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:10:14.113756 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:10:14.113767 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 01:10:14.113777 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:10:14.113788 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:10:14.113801 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:10:14.113812 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:10:14.113824 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:10:14.113835 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:10:14.113847 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:10:14.113863 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 01:10:14.113876 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:10:14.113888 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 01:10:14.113900 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:10:14.117040 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:10:14.117078 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 01:10:14.117093 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 14 01:10:14.117105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:10:14.117118 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:10:14.117130 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:10:14.117142 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:10:14.117154 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:10:14.117168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:14.117180 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 01:10:14.117192 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:10:14.117203 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:10:14.117215 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 01:10:14.117227 systemd[1]: Reached target machines.target - Containers. Jan 14 01:10:14.117238 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 01:10:14.117252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:10:14.117264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:10:14.117275 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 01:10:14.117287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:10:14.117298 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:10:14.117309 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 14 01:10:14.117325 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:10:14.117337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:10:14.117349 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:10:14.117360 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:10:14.117371 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:10:14.117383 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:10:14.117394 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 01:10:14.117408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:10:14.117419 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:10:14.117431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:10:14.117443 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:10:14.117454 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:10:14.117466 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 01:10:14.117479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:10:14.117492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:14.117504 kernel: fuse: init (API version 7.41) Jan 14 01:10:14.117516 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 14 01:10:14.117527 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:10:14.117560 systemd-journald[2004]: Collecting audit messages is enabled. Jan 14 01:10:14.117932 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 01:10:14.117945 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:10:14.117959 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:10:14.117998 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 01:10:14.118015 systemd-journald[2004]: Journal started Jan 14 01:10:14.118040 systemd-journald[2004]: Runtime Journal (/run/log/journal/94eed1a565eb4e93b09edf7e7459a949) is 8M, max 158.5M, 150.5M free. Jan 14 01:10:13.820000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 01:10:14.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:14.011000 audit: BPF prog-id=14 op=UNLOAD Jan 14 01:10:14.011000 audit: BPF prog-id=13 op=UNLOAD Jan 14 01:10:14.012000 audit: BPF prog-id=15 op=LOAD Jan 14 01:10:14.012000 audit: BPF prog-id=16 op=LOAD Jan 14 01:10:14.012000 audit: BPF prog-id=17 op=LOAD Jan 14 01:10:14.100000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:10:14.100000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd25a9cc80 a2=4000 a3=0 items=0 ppid=1 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:14.100000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:10:13.666353 systemd[1]: Queued start job for default target multi-user.target. Jan 14 01:10:13.676682 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 14 01:10:13.677345 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 01:10:14.121534 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:10:14.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.123224 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:10:14.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.127466 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 14 01:10:14.127715 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:10:14.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.130507 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:10:14.130744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:10:14.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.134279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:10:14.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.134480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:10:14.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:14.136799 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:10:14.136964 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 01:10:14.140316 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:10:14.140476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:10:14.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.142608 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:10:14.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.145194 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:10:14.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:10:14.149034 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 01:10:14.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.158891 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:10:14.165653 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:10:14.171074 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 01:10:14.176107 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 01:10:14.181076 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:10:14.181111 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:10:14.186040 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:10:14.187931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:10:14.188045 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:10:14.193156 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 01:10:14.197148 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 01:10:14.200088 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:10:14.202252 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 14 01:10:14.205142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:10:14.211335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:10:14.214151 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:10:14.218029 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 01:10:14.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.223200 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 01:10:14.223541 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:10:14.234015 kernel: ACPI: bus type drm_connector registered Jan 14 01:10:14.235156 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:10:14.235497 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:10:14.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.239592 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:10:14.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:14.241877 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:10:14.247800 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 01:10:14.254163 systemd-journald[2004]: Time spent on flushing to /var/log/journal/94eed1a565eb4e93b09edf7e7459a949 is 33.865ms for 1123 entries. Jan 14 01:10:14.254163 systemd-journald[2004]: System Journal (/var/log/journal/94eed1a565eb4e93b09edf7e7459a949) is 8M, max 2.2G, 2.2G free. Jan 14 01:10:14.303526 systemd-journald[2004]: Received client request to flush runtime journal. Jan 14 01:10:14.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.277536 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 01:10:14.283951 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:10:14.289035 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:10:14.305230 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:10:14.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.312242 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 01:10:14.323401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 14 01:10:14.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.411511 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 01:10:14.460359 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:10:14.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.463000 audit: BPF prog-id=18 op=LOAD Jan 14 01:10:14.463000 audit: BPF prog-id=19 op=LOAD Jan 14 01:10:14.463000 audit: BPF prog-id=20 op=LOAD Jan 14 01:10:14.464847 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:10:14.467000 audit: BPF prog-id=21 op=LOAD Jan 14 01:10:14.471103 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:10:14.477118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:10:14.488000 audit: BPF prog-id=22 op=LOAD Jan 14 01:10:14.488000 audit: BPF prog-id=23 op=LOAD Jan 14 01:10:14.488000 audit: BPF prog-id=24 op=LOAD Jan 14 01:10:14.491175 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:10:14.492000 audit: BPF prog-id=25 op=LOAD Jan 14 01:10:14.492000 audit: BPF prog-id=26 op=LOAD Jan 14 01:10:14.492000 audit: BPF prog-id=27 op=LOAD Jan 14 01:10:14.493897 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 14 01:10:14.536560 systemd-tmpfiles[2082]: ACLs are not supported, ignoring. Jan 14 01:10:14.536576 systemd-tmpfiles[2082]: ACLs are not supported, ignoring. Jan 14 01:10:14.547089 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:10:14.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.574375 systemd-nsresourced[2083]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:10:14.575717 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:10:14.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.632584 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:10:14.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.682759 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 01:10:14.728936 systemd-oomd[2080]: No swap; memory pressure usage will be degraded Jan 14 01:10:14.729729 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 01:10:14.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.748354 systemd-resolved[2081]: Positive Trust Anchors: Jan 14 01:10:14.748594 systemd-resolved[2081]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:10:14.748601 systemd-resolved[2081]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:10:14.748641 systemd-resolved[2081]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:10:14.762993 kernel: loop2: detected capacity change from 0 to 229808 Jan 14 01:10:14.827994 kernel: loop3: detected capacity change from 0 to 48592 Jan 14 01:10:14.939049 systemd-resolved[2081]: Using system hostname 'ci-4578.0.0-p-4dd79cf71d'. Jan 14 01:10:14.940667 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:10:14.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:14.945158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:10:14.975299 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 01:10:14.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:14.978000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:10:14.978000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:10:14.978000 audit: BPF prog-id=28 op=LOAD Jan 14 01:10:14.978000 audit: BPF prog-id=29 op=LOAD Jan 14 01:10:14.980379 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:10:15.007497 systemd-udevd[2106]: Using default interface naming scheme 'v257'. Jan 14 01:10:15.228961 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:10:15.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:15.233000 audit: BPF prog-id=30 op=LOAD Jan 14 01:10:15.235772 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:10:15.254157 kernel: loop4: detected capacity change from 0 to 50784 Jan 14 01:10:15.301751 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 14 01:10:15.343035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#46 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 14 01:10:15.358009 kernel: hv_vmbus: registering driver hv_balloon Jan 14 01:10:15.363125 kernel: hv_vmbus: registering driver hyperv_fb Jan 14 01:10:15.365128 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 14 01:10:15.368063 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 14 01:10:15.370996 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 14 01:10:15.372023 kernel: Console: switching to colour dummy device 80x25 Jan 14 01:10:15.377501 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 01:10:15.390018 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:10:15.436381 systemd-networkd[2117]: lo: Link UP Jan 14 01:10:15.436632 systemd-networkd[2117]: lo: Gained carrier Jan 14 01:10:15.438535 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:10:15.439135 systemd-networkd[2117]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:10:15.439206 systemd-networkd[2117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:10:15.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:15.440647 systemd[1]: Reached target network.target - Network. Jan 14 01:10:15.445990 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 14 01:10:15.446117 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:10:15.451277 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 14 01:10:15.461009 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 14 01:10:15.464040 kernel: hv_netvsc f8615163-0000-1000-2000-6045bdd1808a eth0: Data path switched to VF: enP30832s1 Jan 14 01:10:15.466468 systemd-networkd[2117]: enP30832s1: Link UP Jan 14 01:10:15.466570 systemd-networkd[2117]: eth0: Link UP Jan 14 01:10:15.466574 systemd-networkd[2117]: eth0: Gained carrier Jan 14 01:10:15.466587 systemd-networkd[2117]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:10:15.471309 systemd-networkd[2117]: enP30832s1: Gained carrier Jan 14 01:10:15.479030 systemd-networkd[2117]: eth0: DHCPv4 address 10.200.4.37/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 01:10:15.515721 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:10:15.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:15.539252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:15.551728 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:10:15.551961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:15.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:15.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:15.556094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:15.584145 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:10:15.584429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:15.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:15.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:15.589190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:10:15.668013 kernel: loop5: detected capacity change from 0 to 111560 Jan 14 01:10:15.686720 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 14 01:10:15.697613 kernel: loop6: detected capacity change from 0 to 229808 Jan 14 01:10:15.692688 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:10:15.710993 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 14 01:10:15.718023 kernel: loop7: detected capacity change from 0 to 48592 Jan 14 01:10:15.732074 kernel: loop1: detected capacity change from 0 to 50784 Jan 14 01:10:15.746028 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 01:10:15.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:15.747938 (sd-merge)[2189]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 14 01:10:15.751585 (sd-merge)[2189]: Merged extensions into '/usr'. Jan 14 01:10:15.755042 systemd[1]: Reload requested from client PID 2060 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:10:15.755056 systemd[1]: Reloading... Jan 14 01:10:15.828026 zram_generator::config[2236]: No configuration found. Jan 14 01:10:16.027652 systemd[1]: Reloading finished in 272 ms. Jan 14 01:10:16.047145 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 01:10:16.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.049131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:10:16.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.067850 systemd[1]: Starting ensure-sysext.service... Jan 14 01:10:16.072098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 01:10:16.073000 audit: BPF prog-id=31 op=LOAD Jan 14 01:10:16.074000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:10:16.076000 audit: BPF prog-id=32 op=LOAD Jan 14 01:10:16.076000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:10:16.076000 audit: BPF prog-id=33 op=LOAD Jan 14 01:10:16.076000 audit: BPF prog-id=34 op=LOAD Jan 14 01:10:16.076000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:10:16.076000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:10:16.077000 audit: BPF prog-id=35 op=LOAD Jan 14 01:10:16.077000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:10:16.077000 audit: BPF prog-id=36 op=LOAD Jan 14 01:10:16.077000 audit: BPF prog-id=37 op=LOAD Jan 14 01:10:16.077000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:10:16.077000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:10:16.078000 audit: BPF prog-id=38 op=LOAD Jan 14 01:10:16.090000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:10:16.090000 audit: BPF prog-id=39 op=LOAD Jan 14 01:10:16.090000 audit: BPF prog-id=40 op=LOAD Jan 14 01:10:16.090000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:10:16.090000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:10:16.090000 audit: BPF prog-id=41 op=LOAD Jan 14 01:10:16.090000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:10:16.091000 audit: BPF prog-id=42 op=LOAD Jan 14 01:10:16.091000 audit: BPF prog-id=43 op=LOAD Jan 14 01:10:16.091000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:10:16.091000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:10:16.091000 audit: BPF prog-id=44 op=LOAD Jan 14 01:10:16.091000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:10:16.091000 audit: BPF prog-id=45 op=LOAD Jan 14 01:10:16.091000 audit: BPF prog-id=46 op=LOAD Jan 14 01:10:16.091000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:10:16.091000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:10:16.098845 systemd[1]: Reload requested from client PID 2286 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:10:16.098964 systemd[1]: Reloading... 
Jan 14 01:10:16.117084 systemd-tmpfiles[2287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 01:10:16.117113 systemd-tmpfiles[2287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 01:10:16.117371 systemd-tmpfiles[2287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 01:10:16.118147 systemd-tmpfiles[2287]: ACLs are not supported, ignoring. Jan 14 01:10:16.118198 systemd-tmpfiles[2287]: ACLs are not supported, ignoring. Jan 14 01:10:16.123871 systemd-tmpfiles[2287]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:10:16.123880 systemd-tmpfiles[2287]: Skipping /boot Jan 14 01:10:16.135822 systemd-tmpfiles[2287]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:10:16.137094 systemd-tmpfiles[2287]: Skipping /boot Jan 14 01:10:16.161033 zram_generator::config[2317]: No configuration found. Jan 14 01:10:16.355913 systemd[1]: Reloading finished in 256 ms. 
Jan 14 01:10:16.365000 audit: BPF prog-id=47 op=LOAD Jan 14 01:10:16.365000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:10:16.366000 audit: BPF prog-id=48 op=LOAD Jan 14 01:10:16.366000 audit: BPF prog-id=49 op=LOAD Jan 14 01:10:16.366000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:10:16.366000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:10:16.367000 audit: BPF prog-id=50 op=LOAD Jan 14 01:10:16.367000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:10:16.368000 audit: BPF prog-id=51 op=LOAD Jan 14 01:10:16.368000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:10:16.368000 audit: BPF prog-id=52 op=LOAD Jan 14 01:10:16.368000 audit: BPF prog-id=53 op=LOAD Jan 14 01:10:16.368000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:10:16.368000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:10:16.369000 audit: BPF prog-id=54 op=LOAD Jan 14 01:10:16.374000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:10:16.374000 audit: BPF prog-id=55 op=LOAD Jan 14 01:10:16.374000 audit: BPF prog-id=56 op=LOAD Jan 14 01:10:16.374000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:10:16.374000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:10:16.375000 audit: BPF prog-id=57 op=LOAD Jan 14 01:10:16.375000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:10:16.375000 audit: BPF prog-id=58 op=LOAD Jan 14 01:10:16.375000 audit: BPF prog-id=59 op=LOAD Jan 14 01:10:16.375000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:10:16.375000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:10:16.375000 audit: BPF prog-id=60 op=LOAD Jan 14 01:10:16.375000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:10:16.375000 audit: BPF prog-id=61 op=LOAD Jan 14 01:10:16.375000 audit: BPF prog-id=62 op=LOAD Jan 14 01:10:16.375000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:10:16.375000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:10:16.379196 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 14 01:10:16.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.389872 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:10:16.394259 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 01:10:16.399197 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 01:10:16.404211 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 01:10:16.411250 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:10:16.417626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:16.417799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:10:16.420218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:10:16.425192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:10:16.428000 audit[2386]: SYSTEM_BOOT pid=2386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.433677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:10:16.435179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:10:16.436174 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 14 01:10:16.436288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:10:16.436391 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:16.441771 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:10:16.441945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:10:16.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.445507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:10:16.445682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:10:16.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.450436 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:10:16.450597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 14 01:10:16.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.457173 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:16.457356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:10:16.458356 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:10:16.462422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:10:16.467647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:10:16.469281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:10:16.469481 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:10:16.469575 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:10:16.469664 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:16.474014 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 14 01:10:16.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.479867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:10:16.483856 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:10:16.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.492705 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:10:16.492863 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:10:16.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.496380 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:10:16.496540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:10:16.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:16.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.504083 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:16.504336 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:10:16.505282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:10:16.508190 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:10:16.511318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:10:16.516204 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:10:16.517879 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:10:16.518084 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:10:16.518194 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:10:16.518361 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 01:10:16.520269 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:10:16.525090 systemd[1]: Finished ensure-sysext.service. 
Jan 14 01:10:16.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.528385 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:10:16.530242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:10:16.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.536105 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:10:16.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.537903 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:10:16.538123 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:10:16.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.542251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 14 01:10:16.542414 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:10:16.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.545225 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:10:16.545378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:10:16.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:16.548909 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:10:16.548949 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 14 01:10:16.780000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:10:16.780000 audit[2428]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd9cbf8a0 a2=420 a3=0 items=0 ppid=2381 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:16.780000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:10:16.781637 augenrules[2428]: No rules Jan 14 01:10:16.782076 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:10:16.782339 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:10:16.934119 systemd-networkd[2117]: eth0: Gained IPv6LL Jan 14 01:10:16.936140 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:10:16.939320 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:10:17.366690 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 01:10:17.371272 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:10:21.619760 ldconfig[2383]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:10:21.633829 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:10:21.639212 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:10:21.661861 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:10:21.663533 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 14 01:10:21.666141 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:10:21.667810 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:10:21.671026 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:10:21.672868 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 01:10:21.674316 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:10:21.676163 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:10:21.677860 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:10:21.681028 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:10:21.684039 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:10:21.684073 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:10:21.685130 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:10:21.686962 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:10:21.689576 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:10:21.693651 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:10:21.695555 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 01:10:21.697375 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:10:21.709422 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:10:21.712278 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jan 14 01:10:21.715549 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:10:21.718730 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:10:21.720174 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:10:21.723060 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:10:21.723087 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:10:21.724671 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 01:10:21.728077 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:10:21.735516 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 01:10:21.740868 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:10:21.744145 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:10:21.751542 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:10:21.755230 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:10:21.758077 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:10:21.760206 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:10:21.762598 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 14 01:10:21.768154 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 14 01:10:21.770408 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Jan 14 01:10:21.775112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:10:21.780195 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:10:21.787338 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:10:21.790616 jq[2446]: false Jan 14 01:10:21.796073 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 01:10:21.796761 KVP[2452]: KVP starting; pid is:2452 Jan 14 01:10:21.800192 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 01:10:21.814957 kernel: hv_utils: KVP IC version 4.0 Jan 14 01:10:21.807512 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 01:10:21.807232 KVP[2452]: KVP LIC Version: 3.1 Jan 14 01:10:21.813794 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 01:10:21.818091 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 01:10:21.819069 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 01:10:21.822421 google_oslogin_nss_cache[2451]: oslogin_cache_refresh[2451]: Refreshing passwd entry cache Jan 14 01:10:21.824019 oslogin_cache_refresh[2451]: Refreshing passwd entry cache Jan 14 01:10:21.828862 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 01:10:21.829459 extend-filesystems[2450]: Found /dev/nvme0n1p6 Jan 14 01:10:21.836204 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 01:10:21.843071 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:10:21.846358 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 14 01:10:21.846591 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:10:21.855534 google_oslogin_nss_cache[2451]: oslogin_cache_refresh[2451]: Failure getting users, quitting Jan 14 01:10:21.855534 google_oslogin_nss_cache[2451]: oslogin_cache_refresh[2451]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:10:21.855534 google_oslogin_nss_cache[2451]: oslogin_cache_refresh[2451]: Refreshing group entry cache Jan 14 01:10:21.855634 extend-filesystems[2450]: Found /dev/nvme0n1p9 Jan 14 01:10:21.855139 oslogin_cache_refresh[2451]: Failure getting users, quitting Jan 14 01:10:21.855155 oslogin_cache_refresh[2451]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:10:21.855201 oslogin_cache_refresh[2451]: Refreshing group entry cache Jan 14 01:10:21.862774 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:10:21.867807 chronyd[2441]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 14 01:10:21.870412 google_oslogin_nss_cache[2451]: oslogin_cache_refresh[2451]: Failure getting groups, quitting Jan 14 01:10:21.870412 google_oslogin_nss_cache[2451]: oslogin_cache_refresh[2451]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:10:21.870465 extend-filesystems[2450]: Checking size of /dev/nvme0n1p9 Jan 14 01:10:21.869342 oslogin_cache_refresh[2451]: Failure getting groups, quitting Jan 14 01:10:21.872964 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 01:10:21.869351 oslogin_cache_refresh[2451]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:10:21.873889 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:10:21.874125 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jan 14 01:10:21.877514 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 01:10:21.882502 jq[2468]: true Jan 14 01:10:21.877812 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 01:10:21.884569 chronyd[2441]: Timezone right/UTC failed leap second check, ignoring Jan 14 01:10:21.884713 chronyd[2441]: Loaded seccomp filter (level 2) Jan 14 01:10:21.890297 systemd[1]: Started chronyd.service - NTP client/server. Jan 14 01:10:21.909026 extend-filesystems[2450]: Resized partition /dev/nvme0n1p9 Jan 14 01:10:21.911214 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:10:21.927214 jq[2496]: true Jan 14 01:10:21.932994 extend-filesystems[2507]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:10:21.949994 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 14 01:10:21.954995 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 14 01:10:21.993465 tar[2476]: linux-amd64/LICENSE Jan 14 01:10:21.993667 update_engine[2464]: I20260114 01:10:21.964198 2464 main.cc:92] Flatcar Update Engine starting Jan 14 01:10:21.996573 tar[2476]: linux-amd64/helm Jan 14 01:10:22.008767 extend-filesystems[2507]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 14 01:10:22.008767 extend-filesystems[2507]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 14 01:10:22.008767 extend-filesystems[2507]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 14 01:10:22.033076 extend-filesystems[2450]: Resized filesystem in /dev/nvme0n1p9 Jan 14 01:10:22.040415 update_engine[2464]: I20260114 01:10:22.032140 2464 update_check_scheduler.cc:74] Next update check in 9m54s Jan 14 01:10:22.009901 dbus-daemon[2444]: [system] SELinux support is enabled Jan 14 01:10:22.013151 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 01:10:22.024340 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 14 01:10:22.024570 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:10:22.036184 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:10:22.036223 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 01:10:22.041052 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 01:10:22.041071 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 01:10:22.051657 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:10:22.052867 systemd-logind[2463]: New seat seat0. Jan 14 01:10:22.053656 bash[2527]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:10:22.056322 systemd-logind[2463]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 14 01:10:22.068306 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 01:10:22.070873 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 01:10:22.075233 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:10:22.079424 coreos-metadata[2443]: Jan 14 01:10:22.079 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 01:10:22.081150 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 14 01:10:22.083245 coreos-metadata[2443]: Jan 14 01:10:22.083 INFO Fetch successful Jan 14 01:10:22.083464 coreos-metadata[2443]: Jan 14 01:10:22.083 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 14 01:10:22.089168 coreos-metadata[2443]: Jan 14 01:10:22.089 INFO Fetch successful Jan 14 01:10:22.089265 coreos-metadata[2443]: Jan 14 01:10:22.089 INFO Fetching http://168.63.129.16/machine/842952c1-c7d3-4396-8201-8d99b45bc4fe/567bbe88%2D0f34%2D4026%2Da632%2D24112655b9d7.%5Fci%2D4578.0.0%2Dp%2D4dd79cf71d?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 14 01:10:22.090656 coreos-metadata[2443]: Jan 14 01:10:22.090 INFO Fetch successful Jan 14 01:10:22.090743 coreos-metadata[2443]: Jan 14 01:10:22.090 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 14 01:10:22.098491 coreos-metadata[2443]: Jan 14 01:10:22.098 INFO Fetch successful Jan 14 01:10:22.175909 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 01:10:22.182155 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 01:10:22.374178 locksmithd[2545]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:10:22.615011 sshd_keygen[2501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 01:10:22.653233 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 01:10:22.659888 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 01:10:22.665492 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 14 01:10:22.698682 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 01:10:22.698931 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:10:22.705434 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 14 01:10:22.726458 tar[2476]: linux-amd64/README.md Jan 14 01:10:22.731915 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 14 01:10:22.742077 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:10:22.744622 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 01:10:22.749811 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:10:22.757758 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:10:22.760511 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 01:10:22.997601 containerd[2498]: time="2026-01-14T01:10:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:10:22.998082 containerd[2498]: time="2026-01-14T01:10:22.998038477Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.009411556Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.933µs" Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.009447660Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.009491498Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.009504116Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.009659008Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:10:23.010764 containerd[2498]: 
time="2026-01-14T01:10:23.009682280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.009746750Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.009767815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.009995044Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.010008284Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.010018335Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:10:23.010764 containerd[2498]: time="2026-01-14T01:10:23.010026739Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:10:23.011034 containerd[2498]: time="2026-01-14T01:10:23.010149363Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:10:23.011034 containerd[2498]: time="2026-01-14T01:10:23.010163734Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:10:23.011034 containerd[2498]: 
time="2026-01-14T01:10:23.010229611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 01:10:23.011034 containerd[2498]: time="2026-01-14T01:10:23.010375876Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:10:23.011034 containerd[2498]: time="2026-01-14T01:10:23.010397768Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:10:23.011034 containerd[2498]: time="2026-01-14T01:10:23.010408154Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:10:23.011034 containerd[2498]: time="2026-01-14T01:10:23.010433981Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:10:23.011034 containerd[2498]: time="2026-01-14T01:10:23.010627959Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:10:23.011034 containerd[2498]: time="2026-01-14T01:10:23.010670104Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:10:23.049063 containerd[2498]: time="2026-01-14T01:10:23.049027749Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:10:23.049217 containerd[2498]: time="2026-01-14T01:10:23.049202015Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:10:23.199034 containerd[2498]: time="2026-01-14T01:10:23.198967937Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:10:23.199232 containerd[2498]: 
time="2026-01-14T01:10:23.199217632Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:10:23.199309 containerd[2498]: time="2026-01-14T01:10:23.199297580Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:10:23.199353 containerd[2498]: time="2026-01-14T01:10:23.199344926Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:10:23.199393 containerd[2498]: time="2026-01-14T01:10:23.199386205Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:10:23.199432 containerd[2498]: time="2026-01-14T01:10:23.199424916Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:10:23.199471 containerd[2498]: time="2026-01-14T01:10:23.199464749Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 01:10:23.199507 containerd[2498]: time="2026-01-14T01:10:23.199501447Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:10:23.199914 containerd[2498]: time="2026-01-14T01:10:23.199895689Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:10:23.200001 containerd[2498]: time="2026-01-14T01:10:23.199990500Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:10:23.200047 containerd[2498]: time="2026-01-14T01:10:23.200039072Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:10:23.200090 containerd[2498]: time="2026-01-14T01:10:23.200082040Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:10:23.201069 containerd[2498]: 
time="2026-01-14T01:10:23.200782440Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200823014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200840222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200851969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200863123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200875329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200889620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200902127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200913994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200930778Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200941375Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.200982117Z" level=info msg="loading plugin" 
id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.201033232Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:10:23.201069 containerd[2498]: time="2026-01-14T01:10:23.201047314Z" level=info msg="Start snapshots syncer" Jan 14 01:10:23.201398 containerd[2498]: time="2026-01-14T01:10:23.201094537Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:10:23.201608 containerd[2498]: time="2026-01-14T01:10:23.201563621Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnp
rivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:10:23.201744 containerd[2498]: time="2026-01-14T01:10:23.201628191Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:10:23.201744 containerd[2498]: time="2026-01-14T01:10:23.201680241Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:10:23.201797 containerd[2498]: time="2026-01-14T01:10:23.201783199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:10:23.201824 containerd[2498]: time="2026-01-14T01:10:23.201804060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:10:23.201824 containerd[2498]: time="2026-01-14T01:10:23.201816474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:10:23.201864 containerd[2498]: time="2026-01-14T01:10:23.201827566Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:10:23.201864 containerd[2498]: time="2026-01-14T01:10:23.201840728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:10:23.201864 containerd[2498]: time="2026-01-14T01:10:23.201852906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:10:23.201921 containerd[2498]: time="2026-01-14T01:10:23.201863727Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:10:23.201921 containerd[2498]: time="2026-01-14T01:10:23.201874472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 01:10:23.201921 containerd[2498]: time="2026-01-14T01:10:23.201885804Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:10:23.201997 containerd[2498]: time="2026-01-14T01:10:23.201922963Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:10:23.201997 containerd[2498]: time="2026-01-14T01:10:23.201938329Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:10:23.201997 containerd[2498]: time="2026-01-14T01:10:23.201948554Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:10:23.201997 containerd[2498]: time="2026-01-14T01:10:23.201958334Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:10:23.202088 containerd[2498]: time="2026-01-14T01:10:23.201967163Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:10:23.202088 containerd[2498]: time="2026-01-14T01:10:23.202024282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:10:23.202088 containerd[2498]: time="2026-01-14T01:10:23.202034634Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:10:23.202088 containerd[2498]: time="2026-01-14T01:10:23.202049366Z" level=info msg="runtime interface created" Jan 14 01:10:23.202088 containerd[2498]: 
time="2026-01-14T01:10:23.202055093Z" level=info msg="created NRI interface" Jan 14 01:10:23.202184 containerd[2498]: time="2026-01-14T01:10:23.202112748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:10:23.202184 containerd[2498]: time="2026-01-14T01:10:23.202126268Z" level=info msg="Connect containerd service" Jan 14 01:10:23.202184 containerd[2498]: time="2026-01-14T01:10:23.202154666Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:10:23.203156 containerd[2498]: time="2026-01-14T01:10:23.202986683Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:10:23.359861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:10:23.370328 (kubelet)[2611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:10:23.636421 containerd[2498]: time="2026-01-14T01:10:23.636296013Z" level=info msg="Start subscribing containerd event" Jan 14 01:10:23.636778 containerd[2498]: time="2026-01-14T01:10:23.636572060Z" level=info msg="Start recovering state" Jan 14 01:10:23.636941 containerd[2498]: time="2026-01-14T01:10:23.636929420Z" level=info msg="Start event monitor" Jan 14 01:10:23.637103 containerd[2498]: time="2026-01-14T01:10:23.637030440Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:10:23.637103 containerd[2498]: time="2026-01-14T01:10:23.637040430Z" level=info msg="Start streaming server" Jan 14 01:10:23.637103 containerd[2498]: time="2026-01-14T01:10:23.637049155Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:10:23.637103 containerd[2498]: time="2026-01-14T01:10:23.637058773Z" level=info msg="runtime interface 
starting up..." Jan 14 01:10:23.637383 containerd[2498]: time="2026-01-14T01:10:23.637307874Z" level=info msg="starting plugins..." Jan 14 01:10:23.637383 containerd[2498]: time="2026-01-14T01:10:23.637327901Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:10:23.637534 containerd[2498]: time="2026-01-14T01:10:23.637522590Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:10:23.637758 containerd[2498]: time="2026-01-14T01:10:23.637715384Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 01:10:23.637883 containerd[2498]: time="2026-01-14T01:10:23.637827328Z" level=info msg="containerd successfully booted in 0.641252s" Jan 14 01:10:23.638159 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 01:10:23.652823 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 01:10:23.655202 systemd[1]: Startup finished in 4.932s (kernel) + 10.406s (initrd) + 12.787s (userspace) = 28.126s. Jan 14 01:10:23.958263 kubelet[2611]: E0114 01:10:23.958158 2611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:10:23.961070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:10:23.961232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:10:23.962456 systemd[1]: kubelet.service: Consumed 994ms CPU time, 268.3M memory peak. 
Jan 14 01:10:24.281164 login[2595]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:24.281945 login[2596]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.283679Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.284292Z INFO Daemon Daemon OS: flatcar 4578.0.0 Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.284602Z INFO Daemon Daemon Python: 3.12.11 Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.285370Z INFO Daemon Daemon Run daemon Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.285680Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4578.0.0' Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.285926Z INFO Daemon Daemon Using waagent for provisioning Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.286124Z INFO Daemon Daemon Activate resource disk Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.286355Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.288290Z INFO Daemon Daemon Found device: None Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.288569Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.288870Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.289775Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 01:10:24.297672 waagent[2592]: 2026-01-14T01:10:24.289907Z INFO Daemon Daemon Running default provisioning handler Jan 14 01:10:24.308332 waagent[2592]: 
2026-01-14T01:10:24.307400Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 14 01:10:24.313159 waagent[2592]: 2026-01-14T01:10:24.313112Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 01:10:24.315669 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:10:24.317454 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:10:24.319124 waagent[2592]: 2026-01-14T01:10:24.319072Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 01:10:24.320559 systemd-logind[2463]: New session 2 of user core. Jan 14 01:10:24.321576 waagent[2592]: 2026-01-14T01:10:24.321357Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 01:10:24.325841 systemd-logind[2463]: New session 1 of user core. Jan 14 01:10:24.353156 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:10:24.355390 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 01:10:24.368697 (systemd)[2638]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:24.370839 systemd-logind[2463]: New session 3 of user core. Jan 14 01:10:24.408998 waagent[2592]: 2026-01-14T01:10:24.407088Z INFO Daemon Daemon Successfully mounted dvd Jan 14 01:10:24.441464 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 01:10:24.445294 waagent[2592]: 2026-01-14T01:10:24.445248Z INFO Daemon Daemon Detect protocol endpoint Jan 14 01:10:24.446612 waagent[2592]: 2026-01-14T01:10:24.445990Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 01:10:24.446612 waagent[2592]: 2026-01-14T01:10:24.446347Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 14 01:10:24.446734 waagent[2592]: 2026-01-14T01:10:24.446713Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 01:10:24.447129 waagent[2592]: 2026-01-14T01:10:24.447105Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 01:10:24.447579 waagent[2592]: 2026-01-14T01:10:24.447558Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 01:10:24.489010 waagent[2592]: 2026-01-14T01:10:24.488718Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 01:10:24.491577 waagent[2592]: 2026-01-14T01:10:24.491553Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 01:10:24.493655 waagent[2592]: 2026-01-14T01:10:24.492712Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 01:10:24.525239 systemd[2638]: Queued start job for default target default.target. Jan 14 01:10:24.533764 systemd[2638]: Created slice app.slice - User Application Slice. Jan 14 01:10:24.533797 systemd[2638]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:10:24.533812 systemd[2638]: Reached target paths.target - Paths. Jan 14 01:10:24.533937 systemd[2638]: Reached target timers.target - Timers. Jan 14 01:10:24.534806 systemd[2638]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 01:10:24.535493 systemd[2638]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:10:24.556911 systemd[2638]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 01:10:24.559524 systemd[2638]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 01:10:24.560127 systemd[2638]: Reached target sockets.target - Sockets. Jan 14 01:10:24.560329 systemd[2638]: Reached target basic.target - Basic System. Jan 14 01:10:24.560496 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:10:24.560835 systemd[2638]: Reached target default.target - Main User Target. 
Jan 14 01:10:24.562316 systemd[2638]: Startup finished in 187ms. Jan 14 01:10:24.565155 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:10:24.565833 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 01:10:24.653007 waagent[2592]: 2026-01-14T01:10:24.652906Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 01:10:24.654907 waagent[2592]: 2026-01-14T01:10:24.654818Z INFO Daemon Daemon Forcing an update of the goal state. Jan 14 01:10:24.662882 waagent[2592]: 2026-01-14T01:10:24.662840Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 01:10:24.686714 waagent[2592]: 2026-01-14T01:10:24.686671Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Jan 14 01:10:24.688424 waagent[2592]: 2026-01-14T01:10:24.688381Z INFO Daemon Jan 14 01:10:24.689210 waagent[2592]: 2026-01-14T01:10:24.689130Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f01ba0a9-88a8-4e77-8995-172dd79f9202 eTag: 5311908443125423258 source: Fabric] Jan 14 01:10:24.692173 waagent[2592]: 2026-01-14T01:10:24.692134Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 14 01:10:24.693821 waagent[2592]: 2026-01-14T01:10:24.693786Z INFO Daemon Jan 14 01:10:24.694706 waagent[2592]: 2026-01-14T01:10:24.694637Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 01:10:24.699900 waagent[2592]: 2026-01-14T01:10:24.699867Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 01:10:24.765141 waagent[2592]: 2026-01-14T01:10:24.765095Z INFO Daemon Downloaded certificate {'thumbprint': 'F11CF3AFFF91E2BDF8574E92BC87D34D69537E41', 'hasPrivateKey': True} Jan 14 01:10:24.767589 waagent[2592]: 2026-01-14T01:10:24.767549Z INFO Daemon Fetch goal state completed Jan 14 01:10:24.777396 waagent[2592]: 2026-01-14T01:10:24.777363Z INFO Daemon Daemon Starting provisioning Jan 14 01:10:24.778049 waagent[2592]: 2026-01-14T01:10:24.777858Z INFO Daemon Daemon Handle ovf-env.xml. Jan 14 01:10:24.778159 waagent[2592]: 2026-01-14T01:10:24.778134Z INFO Daemon Daemon Set hostname [ci-4578.0.0-p-4dd79cf71d] Jan 14 01:10:24.825441 waagent[2592]: 2026-01-14T01:10:24.825396Z INFO Daemon Daemon Publish hostname [ci-4578.0.0-p-4dd79cf71d] Jan 14 01:10:24.831294 waagent[2592]: 2026-01-14T01:10:24.826134Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 01:10:24.831294 waagent[2592]: 2026-01-14T01:10:24.826469Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 01:10:24.833864 systemd-networkd[2117]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:10:24.833872 systemd-networkd[2117]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 01:10:24.833940 systemd-networkd[2117]: eth0: DHCP lease lost Jan 14 01:10:24.851283 waagent[2592]: 2026-01-14T01:10:24.851239Z INFO Daemon Daemon Create user account if not exists Jan 14 01:10:24.854811 waagent[2592]: 2026-01-14T01:10:24.851818Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 01:10:24.854811 waagent[2592]: 2026-01-14T01:10:24.852062Z INFO Daemon Daemon Configure sudoer Jan 14 01:10:24.856205 waagent[2592]: 2026-01-14T01:10:24.856164Z INFO Daemon Daemon Configure sshd Jan 14 01:10:24.862038 waagent[2592]: 2026-01-14T01:10:24.861990Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 01:10:24.867634 waagent[2592]: 2026-01-14T01:10:24.862558Z INFO Daemon Daemon Deploy ssh public key. Jan 14 01:10:24.872046 systemd-networkd[2117]: eth0: DHCPv4 address 10.200.4.37/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 01:10:25.935522 waagent[2592]: 2026-01-14T01:10:25.935468Z INFO Daemon Daemon Provisioning complete Jan 14 01:10:25.945408 waagent[2592]: 2026-01-14T01:10:25.945370Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 01:10:25.947050 waagent[2592]: 2026-01-14T01:10:25.947013Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 14 01:10:25.949317 waagent[2592]: 2026-01-14T01:10:25.949283Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 14 01:10:26.066232 waagent[2684]: 2026-01-14T01:10:26.066154Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 14 01:10:26.066552 waagent[2684]: 2026-01-14T01:10:26.066281Z INFO ExtHandler ExtHandler OS: flatcar 4578.0.0 Jan 14 01:10:26.066552 waagent[2684]: 2026-01-14T01:10:26.066335Z INFO ExtHandler ExtHandler Python: 3.12.11 Jan 14 01:10:26.066552 waagent[2684]: 2026-01-14T01:10:26.066377Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 14 01:10:26.106467 waagent[2684]: 2026-01-14T01:10:26.106407Z INFO ExtHandler ExtHandler Distro: flatcar-4578.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.12.11; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 14 01:10:26.106622 waagent[2684]: 2026-01-14T01:10:26.106591Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 01:10:26.106692 waagent[2684]: 2026-01-14T01:10:26.106657Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 01:10:26.117393 waagent[2684]: 2026-01-14T01:10:26.117333Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 01:10:26.122295 waagent[2684]: 2026-01-14T01:10:26.122263Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Jan 14 01:10:26.122667 waagent[2684]: 2026-01-14T01:10:26.122629Z INFO ExtHandler Jan 14 01:10:26.122722 waagent[2684]: 2026-01-14T01:10:26.122700Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d7ed4c42-4933-472f-bfa6-341132a09f21 eTag: 5311908443125423258 source: Fabric] Jan 14 01:10:26.122940 waagent[2684]: 2026-01-14T01:10:26.122914Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 14 01:10:26.123342 waagent[2684]: 2026-01-14T01:10:26.123312Z INFO ExtHandler Jan 14 01:10:26.123380 waagent[2684]: 2026-01-14T01:10:26.123366Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 01:10:26.133818 waagent[2684]: 2026-01-14T01:10:26.133791Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 01:10:26.192466 waagent[2684]: 2026-01-14T01:10:26.192376Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F11CF3AFFF91E2BDF8574E92BC87D34D69537E41', 'hasPrivateKey': True} Jan 14 01:10:26.192790 waagent[2684]: 2026-01-14T01:10:26.192757Z INFO ExtHandler Fetch goal state completed Jan 14 01:10:26.207906 waagent[2684]: 2026-01-14T01:10:26.207858Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 14 01:10:26.212223 waagent[2684]: 2026-01-14T01:10:26.212180Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2684 Jan 14 01:10:26.212345 waagent[2684]: 2026-01-14T01:10:26.212319Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 01:10:26.212610 waagent[2684]: 2026-01-14T01:10:26.212585Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 14 01:10:26.213753 waagent[2684]: 2026-01-14T01:10:26.213715Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4578.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 01:10:26.214128 waagent[2684]: 2026-01-14T01:10:26.214093Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4578.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 14 01:10:26.214251 waagent[2684]: 2026-01-14T01:10:26.214222Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 14 01:10:26.214699 waagent[2684]: 2026-01-14T01:10:26.214664Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Jan 14 01:10:26.263233 waagent[2684]: 2026-01-14T01:10:26.263197Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 01:10:26.263393 waagent[2684]: 2026-01-14T01:10:26.263369Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 01:10:26.268993 waagent[2684]: 2026-01-14T01:10:26.268908Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 01:10:26.274287 systemd[1]: Reload requested from client PID 2699 ('systemctl') (unit waagent.service)... Jan 14 01:10:26.274300 systemd[1]: Reloading... Jan 14 01:10:26.351054 zram_generator::config[2740]: No configuration found. Jan 14 01:10:26.472999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#41 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jan 14 01:10:26.544023 systemd[1]: Reloading finished in 269 ms. Jan 14 01:10:26.556795 waagent[2684]: 2026-01-14T01:10:26.555957Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 01:10:26.556795 waagent[2684]: 2026-01-14T01:10:26.556147Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 01:10:27.091097 waagent[2684]: 2026-01-14T01:10:27.091021Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 01:10:27.091415 waagent[2684]: 2026-01-14T01:10:27.091383Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 14 01:10:27.092216 waagent[2684]: 2026-01-14T01:10:27.092106Z INFO ExtHandler ExtHandler Starting env monitor service. 
Jan 14 01:10:27.092216 waagent[2684]: 2026-01-14T01:10:27.092173Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 01:10:27.092496 waagent[2684]: 2026-01-14T01:10:27.092466Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 01:10:27.092618 waagent[2684]: 2026-01-14T01:10:27.092575Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 01:10:27.092841 waagent[2684]: 2026-01-14T01:10:27.092816Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 01:10:27.093107 waagent[2684]: 2026-01-14T01:10:27.093083Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 01:10:27.093179 waagent[2684]: 2026-01-14T01:10:27.093153Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 01:10:27.093179 waagent[2684]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 01:10:27.093179 waagent[2684]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 01:10:27.093179 waagent[2684]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 01:10:27.093179 waagent[2684]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 01:10:27.093179 waagent[2684]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 01:10:27.093179 waagent[2684]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 01:10:27.093639 waagent[2684]: 2026-01-14T01:10:27.093475Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 01:10:27.093639 waagent[2684]: 2026-01-14T01:10:27.093607Z INFO EnvHandler ExtHandler Configure routes Jan 14 01:10:27.093856 waagent[2684]: 2026-01-14T01:10:27.093829Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 01:10:27.093900 waagent[2684]: 2026-01-14T01:10:27.093874Z INFO EnvHandler ExtHandler Gateway:None Jan 14 01:10:27.093959 waagent[2684]: 2026-01-14T01:10:27.093939Z INFO EnvHandler ExtHandler Routes:None
Jan 14 01:10:27.094243 waagent[2684]: 2026-01-14T01:10:27.094219Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 01:10:27.094710 waagent[2684]: 2026-01-14T01:10:27.094682Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 01:10:27.094799 waagent[2684]: 2026-01-14T01:10:27.094765Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 01:10:27.094920 waagent[2684]: 2026-01-14T01:10:27.094897Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 01:10:27.103505 waagent[2684]: 2026-01-14T01:10:27.103472Z INFO ExtHandler ExtHandler Jan 14 01:10:27.103637 waagent[2684]: 2026-01-14T01:10:27.103621Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: de48903e-08d7-4db4-87df-ecc5cc8bce23 correlation b90b04bb-c588-4148-908f-be507dd42f44 created: 2026-01-14T01:09:33.449371Z] Jan 14 01:10:27.103898 waagent[2684]: 2026-01-14T01:10:27.103883Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 14 01:10:27.104375 waagent[2684]: 2026-01-14T01:10:27.104354Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 14 01:10:27.140143 waagent[2684]: 2026-01-14T01:10:27.140096Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 14 01:10:27.140143 waagent[2684]: Try `iptables -h' or 'iptables --help' for more information.)
Jan 14 01:10:27.140458 waagent[2684]: 2026-01-14T01:10:27.140426Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2306F243-B9CF-4E2B-B3A2-99A447F045C4;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 14 01:10:27.154784 waagent[2684]: 2026-01-14T01:10:27.154737Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 01:10:27.154784 waagent[2684]: Executing ['ip', '-a', '-o', 'link']: Jan 14 01:10:27.154784 waagent[2684]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 01:10:27.154784 waagent[2684]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d1:80:8a brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx6045bdd1808a Jan 14 01:10:27.154784 waagent[2684]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:d1:80:8a brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 14 01:10:27.154784 waagent[2684]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 01:10:27.154784 waagent[2684]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 01:10:27.154784 waagent[2684]: 2: eth0 inet 10.200.4.37/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 01:10:27.154784 waagent[2684]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 01:10:27.154784 waagent[2684]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 01:10:27.154784 waagent[2684]: 2: eth0 inet6 fe80::6245:bdff:fed1:808a/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 01:10:27.196506 waagent[2684]: 2026-01-14T01:10:27.196461Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jan 14 01:10:27.196506 waagent[2684]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:10:27.196506 waagent[2684]: pkts bytes target prot opt in out source destination Jan 14 01:10:27.196506 waagent[2684]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:10:27.196506 waagent[2684]: pkts bytes target prot opt in out source destination Jan 14 01:10:27.196506 waagent[2684]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jan 14 01:10:27.196506 waagent[2684]: pkts bytes target prot opt in out source destination Jan 14 01:10:27.196506 waagent[2684]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 01:10:27.196506 waagent[2684]: 3 535 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 01:10:27.196506 waagent[2684]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 01:10:27.199260 waagent[2684]: 2026-01-14T01:10:27.199212Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 01:10:27.199260 waagent[2684]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:10:27.199260 waagent[2684]: pkts bytes target prot opt in out source destination Jan 14 01:10:27.199260 waagent[2684]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:10:27.199260 waagent[2684]: pkts bytes target prot opt in out source destination Jan 14 01:10:27.199260 waagent[2684]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jan 14 01:10:27.199260 waagent[2684]: pkts bytes target prot opt in out source destination Jan 14 01:10:27.199260 waagent[2684]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 01:10:27.199260 waagent[2684]: 4 587 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 01:10:27.199260 waagent[2684]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 01:10:34.212259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:10:34.213826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:10:34.705213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:10:34.709027 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:10:34.744107 kubelet[2839]: E0114 01:10:34.744072 2839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:10:34.747244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:10:34.747347 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:10:34.747669 systemd[1]: kubelet.service: Consumed 133ms CPU time, 108.2M memory peak. Jan 14 01:10:44.998057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 01:10:44.999613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:10:45.544134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:10:45.554272 (kubelet)[2855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:10:45.589562 kubelet[2855]: E0114 01:10:45.589530 2855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:10:45.591342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:10:45.591479 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 01:10:45.591798 systemd[1]: kubelet.service: Consumed 130ms CPU time, 108.6M memory peak. Jan 14 01:10:45.668917 chronyd[2441]: Selected source PHC0 Jan 14 01:10:49.719399 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 01:10:49.720749 systemd[1]: Started sshd@0-10.200.4.37:22-10.200.16.10:50164.service - OpenSSH per-connection server daemon (10.200.16.10:50164). Jan 14 01:10:50.511625 sshd[2863]: Accepted publickey for core from 10.200.16.10 port 50164 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:10:50.512831 sshd-session[2863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:50.516640 systemd-logind[2463]: New session 4 of user core. Jan 14 01:10:50.524130 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:10:50.923693 systemd[1]: Started sshd@1-10.200.4.37:22-10.200.16.10:50178.service - OpenSSH per-connection server daemon (10.200.16.10:50178). Jan 14 01:10:51.464988 sshd[2870]: Accepted publickey for core from 10.200.16.10 port 50178 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:10:51.466226 sshd-session[2870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:51.470691 systemd-logind[2463]: New session 5 of user core. Jan 14 01:10:51.479152 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 01:10:51.768278 sshd[2874]: Connection closed by 10.200.16.10 port 50178 Jan 14 01:10:51.768794 sshd-session[2870]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:51.772695 systemd-logind[2463]: Session 5 logged out. Waiting for processes to exit. Jan 14 01:10:51.773291 systemd[1]: sshd@1-10.200.4.37:22-10.200.16.10:50178.service: Deactivated successfully. Jan 14 01:10:51.775047 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 01:10:51.776504 systemd-logind[2463]: Removed session 5. 
Jan 14 01:10:51.890925 systemd[1]: Started sshd@2-10.200.4.37:22-10.200.16.10:50182.service - OpenSSH per-connection server daemon (10.200.16.10:50182). Jan 14 01:10:52.433344 sshd[2880]: Accepted publickey for core from 10.200.16.10 port 50182 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:10:52.434481 sshd-session[2880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:52.439153 systemd-logind[2463]: New session 6 of user core. Jan 14 01:10:52.445137 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 01:10:52.733641 sshd[2884]: Connection closed by 10.200.16.10 port 50182 Jan 14 01:10:52.734410 sshd-session[2880]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:52.737955 systemd[1]: sshd@2-10.200.4.37:22-10.200.16.10:50182.service: Deactivated successfully. Jan 14 01:10:52.739700 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 01:10:52.740545 systemd-logind[2463]: Session 6 logged out. Waiting for processes to exit. Jan 14 01:10:52.741803 systemd-logind[2463]: Removed session 6. Jan 14 01:10:52.846834 systemd[1]: Started sshd@3-10.200.4.37:22-10.200.16.10:50196.service - OpenSSH per-connection server daemon (10.200.16.10:50196). Jan 14 01:10:53.387017 sshd[2890]: Accepted publickey for core from 10.200.16.10 port 50196 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:10:53.387605 sshd-session[2890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:53.392140 systemd-logind[2463]: New session 7 of user core. Jan 14 01:10:53.398150 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 01:10:53.692440 sshd[2894]: Connection closed by 10.200.16.10 port 50196 Jan 14 01:10:53.692944 sshd-session[2890]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:53.696555 systemd[1]: sshd@3-10.200.4.37:22-10.200.16.10:50196.service: Deactivated successfully. 
Jan 14 01:10:53.698234 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 01:10:53.698939 systemd-logind[2463]: Session 7 logged out. Waiting for processes to exit. Jan 14 01:10:53.700393 systemd-logind[2463]: Removed session 7. Jan 14 01:10:53.803650 systemd[1]: Started sshd@4-10.200.4.37:22-10.200.16.10:50200.service - OpenSSH per-connection server daemon (10.200.16.10:50200). Jan 14 01:10:54.352413 sshd[2900]: Accepted publickey for core from 10.200.16.10 port 50200 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:10:54.353600 sshd-session[2900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:54.358049 systemd-logind[2463]: New session 8 of user core. Jan 14 01:10:54.368166 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 01:10:54.693215 sudo[2905]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:10:54.693460 sudo[2905]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:10:54.716770 sudo[2905]: pam_unix(sudo:session): session closed for user root Jan 14 01:10:54.817240 sshd[2904]: Connection closed by 10.200.16.10 port 50200 Jan 14 01:10:54.818200 sshd-session[2900]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:54.822063 systemd-logind[2463]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:10:54.822412 systemd[1]: sshd@4-10.200.4.37:22-10.200.16.10:50200.service: Deactivated successfully. Jan 14 01:10:54.824356 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:10:54.825699 systemd-logind[2463]: Removed session 8. Jan 14 01:10:54.940042 systemd[1]: Started sshd@5-10.200.4.37:22-10.200.16.10:50206.service - OpenSSH per-connection server daemon (10.200.16.10:50206). 
Jan 14 01:10:55.482014 sshd[2912]: Accepted publickey for core from 10.200.16.10 port 50206 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:10:55.483054 sshd-session[2912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:55.487578 systemd-logind[2463]: New session 9 of user core. Jan 14 01:10:55.494134 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 01:10:55.613185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 01:10:55.614521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:10:55.688336 sudo[2921]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:10:55.688589 sudo[2921]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:10:55.772150 sudo[2921]: pam_unix(sudo:session): session closed for user root Jan 14 01:10:55.779547 sudo[2920]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:10:55.779842 sudo[2920]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:10:55.790302 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:10:55.920477 augenrules[2945]: No rules Jan 14 01:10:55.919000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:10:55.922171 kernel: kauditd_printk_skb: 159 callbacks suppressed Jan 14 01:10:55.922237 kernel: audit: type=1305 audit(1768353055.919:257): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:10:55.922458 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:10:55.922917 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 01:10:55.924109 sudo[2920]: pam_unix(sudo:session): session closed for user root Jan 14 01:10:55.919000 audit[2945]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe75fc05b0 a2=420 a3=0 items=0 ppid=2926 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:55.932382 kernel: audit: type=1300 audit(1768353055.919:257): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe75fc05b0 a2=420 a3=0 items=0 ppid=2926 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:10:55.932432 kernel: audit: type=1327 audit(1768353055.919:257): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:10:55.919000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:10:55.935688 kernel: audit: type=1106 audit(1768353055.921:258): pid=2920 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:10:55.921000 audit[2920]: USER_END pid=2920 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:10:55.938365 kernel: audit: type=1104 audit(1768353055.921:259): pid=2920 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:55.921000 audit[2920]: CRED_DISP pid=2920 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:10:55.941037 kernel: audit: type=1130 audit(1768353055.921:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:55.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:55.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:55.943193 kernel: audit: type=1131 audit(1768353055.921:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:56.027013 sshd[2916]: Connection closed by 10.200.16.10 port 50206 Jan 14 01:10:56.028140 sshd-session[2912]: pam_unix(sshd:session): session closed for user core Jan 14 01:10:56.028000 audit[2912]: USER_END pid=2912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:10:56.028000 audit[2912]: CRED_DISP pid=2912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:10:56.036020 kernel: audit: type=1106 audit(1768353056.028:262): pid=2912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:10:56.036111 kernel: audit: type=1104 audit(1768353056.028:263): pid=2912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:10:56.036457 systemd[1]: sshd@5-10.200.4.37:22-10.200.16.10:50206.service: Deactivated successfully. Jan 14 01:10:56.041415 kernel: audit: type=1131 audit(1768353056.035:264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.37:22-10.200.16.10:50206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:10:56.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.37:22-10.200.16.10:50206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:56.040744 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:10:56.042552 systemd-logind[2463]: Session 9 logged out. Waiting for processes to exit. Jan 14 01:10:56.043573 systemd-logind[2463]: Removed session 9. Jan 14 01:10:56.101968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:10:56.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:56.115166 (kubelet)[2958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:10:56.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.37:22-10.200.16.10:50214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:10:56.141245 systemd[1]: Started sshd@6-10.200.4.37:22-10.200.16.10:50214.service - OpenSSH per-connection server daemon (10.200.16.10:50214).
Jan 14 01:10:56.149715 kubelet[2958]: E0114 01:10:56.149642 2958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:10:56.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:10:56.151927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:10:56.152085 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:10:56.152417 systemd[1]: kubelet.service: Consumed 142ms CPU time, 108.2M memory peak. Jan 14 01:10:56.678000 audit[2965]: USER_ACCT pid=2965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:10:56.679493 sshd[2965]: Accepted publickey for core from 10.200.16.10 port 50214 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:10:56.679000 audit[2965]: CRED_ACQ pid=2965 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:10:56.679000 audit[2965]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd98580ba0 a2=3 a3=0 items=0 ppid=1 pid=2965 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:10:56.679000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:10:56.680710 sshd-session[2965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:10:56.685384 systemd-logind[2463]: New session 10 of user core. Jan 14 01:10:56.691152 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 01:10:56.692000 audit[2965]: USER_START pid=2965 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:10:56.693000 audit[2970]: CRED_ACQ pid=2970 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:10:56.883000 audit[2971]: USER_ACCT pid=2971 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:10:56.885059 sudo[2971]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 01:10:56.884000 audit[2971]: CRED_REFR pid=2971 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:10:56.884000 audit[2971]: USER_START pid=2971 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:10:56.885318 sudo[2971]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:10:58.837289 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 01:10:58.848239 (dockerd)[2991]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:11:00.378237 dockerd[2991]: time="2026-01-14T01:11:00.378180452Z" level=info msg="Starting up" Jan 14 01:11:00.381445 dockerd[2991]: time="2026-01-14T01:11:00.381372619Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:11:00.391300 dockerd[2991]: time="2026-01-14T01:11:00.391259796Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:11:00.419479 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2536629997-merged.mount: Deactivated successfully. Jan 14 01:11:00.487575 dockerd[2991]: time="2026-01-14T01:11:00.487535913Z" level=info msg="Loading containers: start." 
Jan 14 01:11:00.518007 kernel: Initializing XFRM netlink socket Jan 14 01:11:00.541000 audit[3037]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.541000 audit[3037]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe30551c80 a2=0 a3=0 items=0 ppid=2991 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.541000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:11:00.543000 audit[3039]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.543000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffee6e048a0 a2=0 a3=0 items=0 ppid=2991 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.543000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:11:00.545000 audit[3041]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.545000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffc751f90 a2=0 a3=0 items=0 ppid=2991 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.545000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:11:00.546000 audit[3043]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.546000 audit[3043]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe11ce5520 a2=0 a3=0 items=0 ppid=2991 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.546000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:11:00.548000 audit[3045]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.548000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff70849290 a2=0 a3=0 items=0 ppid=2991 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.548000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:11:00.550000 audit[3047]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.550000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe88d5e330 a2=0 a3=0 items=0 ppid=2991 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.550000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:11:00.551000 audit[3049]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.551000 audit[3049]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc1a2b2fe0 a2=0 a3=0 items=0 ppid=2991 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.551000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:11:00.554000 audit[3051]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.554000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffec1fedb60 a2=0 a3=0 items=0 ppid=2991 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.554000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:11:00.596000 audit[3054]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.596000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffff3445430 a2=0 a3=0 items=0 ppid=2991 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.596000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:11:00.598000 audit[3056]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.598000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc85f08790 a2=0 a3=0 items=0 ppid=2991 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:11:00.600000 audit[3058]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.600000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe93b2ffd0 a2=0 a3=0 items=0 ppid=2991 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.600000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:11:00.602000 audit[3060]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.602000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffeb15ef0e0 a2=0 a3=0 items=0 ppid=2991 pid=3060 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.602000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:11:00.603000 audit[3062]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.603000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdd68c8340 a2=0 a3=0 items=0 ppid=2991 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.603000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:11:00.663000 audit[3092]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.663000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff56beab50 a2=0 a3=0 items=0 ppid=2991 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:11:00.665000 audit[3094]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.665000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd45038d60 a2=0 a3=0 items=0 
ppid=2991 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.665000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:11:00.666000 audit[3096]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.666000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5dc07130 a2=0 a3=0 items=0 ppid=2991 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.666000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:11:00.668000 audit[3098]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.668000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd678c7210 a2=0 a3=0 items=0 ppid=2991 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.668000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:11:00.670000 audit[3100]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.670000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb66e40f0 a2=0 a3=0 items=0 ppid=2991 
pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.670000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:11:00.672000 audit[3102]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.672000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd0c7d31f0 a2=0 a3=0 items=0 ppid=2991 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.672000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:11:00.673000 audit[3104]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.673000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe49446730 a2=0 a3=0 items=0 ppid=2991 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.673000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:11:00.675000 audit[3106]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.675000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 
a1=7ffe9918db20 a2=0 a3=0 items=0 ppid=2991 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.675000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:11:00.677000 audit[3108]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.677000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe7eddd540 a2=0 a3=0 items=0 ppid=2991 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.677000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:11:00.679000 audit[3110]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.679000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd2267a280 a2=0 a3=0 items=0 ppid=2991 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:11:00.681000 audit[3112]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule 
pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.681000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff5f8da100 a2=0 a3=0 items=0 ppid=2991 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:11:00.683000 audit[3114]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.683000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc080c1150 a2=0 a3=0 items=0 ppid=2991 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.683000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:11:00.685000 audit[3116]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.685000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff2e3ba280 a2=0 a3=0 items=0 ppid=2991 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:11:00.689000 audit[3121]: NETFILTER_CFG 
table=filter:31 family=2 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.689000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca72d84c0 a2=0 a3=0 items=0 ppid=2991 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:11:00.691000 audit[3123]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.691000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe99e017a0 a2=0 a3=0 items=0 ppid=2991 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:11:00.692000 audit[3125]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.692000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd7015af10 a2=0 a3=0 items=0 ppid=2991 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:11:00.694000 audit[3127]: NETFILTER_CFG table=filter:34 family=10 
entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.694000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0da2f960 a2=0 a3=0 items=0 ppid=2991 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:11:00.696000 audit[3129]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.696000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc22a2a1c0 a2=0 a3=0 items=0 ppid=2991 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:11:00.698000 audit[3131]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:00.698000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd4eaf9900 a2=0 a3=0 items=0 ppid=2991 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:11:00.759000 audit[3136]: NETFILTER_CFG table=nat:37 family=2 entries=2 
op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.759000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffeb7adbb80 a2=0 a3=0 items=0 ppid=2991 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.759000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:11:00.761000 audit[3138]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.761000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffce04457a0 a2=0 a3=0 items=0 ppid=2991 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.761000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:11:00.769000 audit[3146]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.769000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc0ad96f60 a2=0 a3=0 items=0 ppid=2991 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.769000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:11:00.773000 audit[3151]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.773000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffc8b7c930 a2=0 a3=0 items=0 ppid=2991 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.773000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:11:00.775000 audit[3153]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.775000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff5d2d4240 a2=0 a3=0 items=0 ppid=2991 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:11:00.777000 audit[3155]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.777000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffd348ff20 a2=0 a3=0 items=0 ppid=2991 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.777000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:11:00.779000 audit[3157]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.779000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdc9711370 a2=0 a3=0 items=0 ppid=2991 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:11:00.781000 audit[3159]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:00.781000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe8dd378b0 a2=0 a3=0 items=0 ppid=2991 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:00.781000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:11:00.782694 systemd-networkd[2117]: docker0: Link UP Jan 14 01:11:00.797888 dockerd[2991]: time="2026-01-14T01:11:00.797856365Z" 
level=info msg="Loading containers: done." Jan 14 01:11:00.868303 dockerd[2991]: time="2026-01-14T01:11:00.868260093Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:11:00.868450 dockerd[2991]: time="2026-01-14T01:11:00.868357604Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:11:00.868450 dockerd[2991]: time="2026-01-14T01:11:00.868438576Z" level=info msg="Initializing buildkit" Jan 14 01:11:00.915475 dockerd[2991]: time="2026-01-14T01:11:00.914509313Z" level=info msg="Completed buildkit initialization" Jan 14 01:11:00.921522 dockerd[2991]: time="2026-01-14T01:11:00.921487858Z" level=info msg="Daemon has completed initialization" Jan 14 01:11:00.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:00.922597 dockerd[2991]: time="2026-01-14T01:11:00.922141349Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:11:00.921774 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:11:00.923805 kernel: kauditd_printk_skb: 133 callbacks suppressed Jan 14 01:11:00.924364 kernel: audit: type=1130 audit(1768353060.920:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:01.982306 containerd[2498]: time="2026-01-14T01:11:01.982257141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 01:11:02.773865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611889191.mount: Deactivated successfully. 
Jan 14 01:11:03.506995 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 14 01:11:04.010535 containerd[2498]: time="2026-01-14T01:11:04.010486377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:04.016996 containerd[2498]: time="2026-01-14T01:11:04.016819966Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28990465" Jan 14 01:11:04.033082 containerd[2498]: time="2026-01-14T01:11:04.033038483Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:04.042758 containerd[2498]: time="2026-01-14T01:11:04.042714752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:04.043596 containerd[2498]: time="2026-01-14T01:11:04.043404781Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.06110818s" Jan 14 01:11:04.043596 containerd[2498]: time="2026-01-14T01:11:04.043441195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 01:11:04.044025 containerd[2498]: time="2026-01-14T01:11:04.044009337Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 01:11:05.527181 containerd[2498]: time="2026-01-14T01:11:05.527129714Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:05.529478 containerd[2498]: time="2026-01-14T01:11:05.529442347Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 01:11:05.532230 containerd[2498]: time="2026-01-14T01:11:05.532189910Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:05.535837 containerd[2498]: time="2026-01-14T01:11:05.535702965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:05.536839 containerd[2498]: time="2026-01-14T01:11:05.536376188Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.492293742s" Jan 14 01:11:05.536839 containerd[2498]: time="2026-01-14T01:11:05.536405650Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 01:11:05.536935 containerd[2498]: time="2026-01-14T01:11:05.536887908Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 01:11:06.363375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 01:11:06.366160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:11:07.035048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:07.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:07.041005 kernel: audit: type=1130 audit(1768353067.034:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:07.041053 (kubelet)[3269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:11:07.082573 kubelet[3269]: E0114 01:11:07.082506 3269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:11:07.084743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:11:07.084877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:11:07.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:11:07.085264 systemd[1]: kubelet.service: Consumed 127ms CPU time, 108.4M memory peak. Jan 14 01:11:07.090002 kernel: audit: type=1131 audit(1768353067.084:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:11:07.419843 containerd[2498]: time="2026-01-14T01:11:07.419797323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:07.423018 containerd[2498]: time="2026-01-14T01:11:07.422990319Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 14 01:11:07.426362 containerd[2498]: time="2026-01-14T01:11:07.426201062Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:07.433996 containerd[2498]: time="2026-01-14T01:11:07.433933369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:07.434843 containerd[2498]: time="2026-01-14T01:11:07.434815296Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.897905877s" Jan 14 01:11:07.434898 containerd[2498]: time="2026-01-14T01:11:07.434846005Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 01:11:07.435487 containerd[2498]: time="2026-01-14T01:11:07.435465943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 01:11:07.536077 update_engine[2464]: I20260114 01:11:07.536021 2464 update_attempter.cc:509] Updating boot flags... 
Jan 14 01:11:08.551213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492775125.mount: Deactivated successfully. Jan 14 01:11:08.953061 containerd[2498]: time="2026-01-14T01:11:08.953014696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:08.955813 containerd[2498]: time="2026-01-14T01:11:08.955716697Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 14 01:11:08.958790 containerd[2498]: time="2026-01-14T01:11:08.958761667Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:08.963428 containerd[2498]: time="2026-01-14T01:11:08.963383543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:08.963967 containerd[2498]: time="2026-01-14T01:11:08.963703249Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.52821023s" Jan 14 01:11:08.963967 containerd[2498]: time="2026-01-14T01:11:08.963732695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 14 01:11:08.964299 containerd[2498]: time="2026-01-14T01:11:08.964275248Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 01:11:09.531516 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount686038087.mount: Deactivated successfully. Jan 14 01:11:10.566493 containerd[2498]: time="2026-01-14T01:11:10.566442249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:10.569817 containerd[2498]: time="2026-01-14T01:11:10.569739305Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Jan 14 01:11:10.572758 containerd[2498]: time="2026-01-14T01:11:10.572718202Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:10.576998 containerd[2498]: time="2026-01-14T01:11:10.576871728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:10.577724 containerd[2498]: time="2026-01-14T01:11:10.577564428Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.613260601s" Jan 14 01:11:10.577724 containerd[2498]: time="2026-01-14T01:11:10.577597804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 14 01:11:10.578097 containerd[2498]: time="2026-01-14T01:11:10.578080475Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:11:11.049099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349520504.mount: 
Deactivated successfully. Jan 14 01:11:11.088283 containerd[2498]: time="2026-01-14T01:11:11.088226727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:11:11.090877 containerd[2498]: time="2026-01-14T01:11:11.090738749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:11:11.094690 containerd[2498]: time="2026-01-14T01:11:11.094662021Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:11:11.099698 containerd[2498]: time="2026-01-14T01:11:11.099181259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:11:11.099698 containerd[2498]: time="2026-01-14T01:11:11.099577177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.420957ms" Jan 14 01:11:11.099698 containerd[2498]: time="2026-01-14T01:11:11.099603791Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 01:11:11.100346 containerd[2498]: time="2026-01-14T01:11:11.100325377Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 01:11:11.635218 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2419429032.mount: Deactivated successfully. Jan 14 01:11:15.303694 waagent[2684]: 2026-01-14T01:11:15.303634Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 14 01:11:15.312505 waagent[2684]: 2026-01-14T01:11:15.312465Z INFO ExtHandler Jan 14 01:11:15.312600 waagent[2684]: 2026-01-14T01:11:15.312559Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1315e26d-cc48-4455-9a1f-b55e5d504a19 eTag: 13751889888822332037 source: Fabric] Jan 14 01:11:15.312849 waagent[2684]: 2026-01-14T01:11:15.312817Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 14 01:11:15.313478 waagent[2684]: 2026-01-14T01:11:15.313442Z INFO ExtHandler Jan 14 01:11:15.313538 waagent[2684]: 2026-01-14T01:11:15.313506Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 14 01:11:15.395069 waagent[2684]: 2026-01-14T01:11:15.395037Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 01:11:15.449514 waagent[2684]: 2026-01-14T01:11:15.449461Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F11CF3AFFF91E2BDF8574E92BC87D34D69537E41', 'hasPrivateKey': True} Jan 14 01:11:15.449870 waagent[2684]: 2026-01-14T01:11:15.449838Z INFO ExtHandler Fetch goal state completed Jan 14 01:11:15.450201 waagent[2684]: 2026-01-14T01:11:15.450170Z INFO ExtHandler ExtHandler Jan 14 01:11:15.450261 waagent[2684]: 2026-01-14T01:11:15.450236Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 3388cf20-db38-4dca-a81e-3305c15e4ec0 correlation b90b04bb-c588-4148-908f-be507dd42f44 created: 2026-01-14T01:11:07.838913Z] Jan 14 01:11:15.450484 waagent[2684]: 2026-01-14T01:11:15.450459Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 14 01:11:15.450908 waagent[2684]: 2026-01-14T01:11:15.450883Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 14 01:11:17.113123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 01:11:17.116218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:17.654089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:17.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:17.660039 kernel: audit: type=1130 audit(1768353077.653:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:17.669209 (kubelet)[3431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:11:17.704292 kubelet[3431]: E0114 01:11:17.704258 3431 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:11:17.705966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:11:17.706112 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:11:17.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:11:17.706476 systemd[1]: kubelet.service: Consumed 130ms CPU time, 110.5M memory peak. Jan 14 01:11:17.711172 kernel: audit: type=1131 audit(1768353077.705:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:11:27.863217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 14 01:11:27.864735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:28.364204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:28.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:28.370688 (kubelet)[3446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:11:28.370989 kernel: audit: type=1130 audit(1768353088.363:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:28.404846 kubelet[3446]: E0114 01:11:28.404803 3446 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:11:28.406530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:11:28.406679 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 01:11:28.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:11:28.407046 systemd[1]: kubelet.service: Consumed 131ms CPU time, 107.9M memory peak. Jan 14 01:11:28.411997 kernel: audit: type=1131 audit(1768353088.406:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:11:38.613206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 14 01:11:38.614735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:39.102271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:39.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:39.109041 kernel: audit: type=1130 audit(1768353099.101:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:39.109257 (kubelet)[3461]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:11:39.143517 kubelet[3461]: E0114 01:11:39.143480 3461 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:11:39.145189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:11:39.145324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:11:39.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:11:39.145685 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.5M memory peak. Jan 14 01:11:39.148998 kernel: audit: type=1131 audit(1768353099.144:324): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:11:41.013356 containerd[2498]: time="2026-01-14T01:11:41.013309321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:41.016181 containerd[2498]: time="2026-01-14T01:11:41.016149851Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Jan 14 01:11:41.020142 containerd[2498]: time="2026-01-14T01:11:41.020101009Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:41.024452 containerd[2498]: time="2026-01-14T01:11:41.024397738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:11:41.025388 containerd[2498]: time="2026-01-14T01:11:41.025083262Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 29.924732005s" Jan 14 01:11:41.025388 containerd[2498]: time="2026-01-14T01:11:41.025115105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 14 01:11:43.779943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:43.780498 systemd[1]: kubelet.service: Consumed 131ms CPU time, 108.5M memory peak. Jan 14 01:11:43.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 01:11:43.790500 kernel: audit: type=1130 audit(1768353103.779:325): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:43.790568 kernel: audit: type=1131 audit(1768353103.779:326): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:43.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:43.786232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:43.813128 systemd[1]: Reload requested from client PID 3503 ('systemctl') (unit session-10.scope)... Jan 14 01:11:43.813146 systemd[1]: Reloading... Jan 14 01:11:43.905997 zram_generator::config[3553]: No configuration found. Jan 14 01:11:44.105095 systemd[1]: Reloading finished in 291 ms. Jan 14 01:11:44.270405 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:11:44.270511 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:11:44.270858 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:44.270931 systemd[1]: kubelet.service: Consumed 77ms CPU time, 78.1M memory peak. Jan 14 01:11:44.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:11:44.278005 kernel: audit: type=1130 audit(1768353104.269:327): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:11:44.276260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:44.276000 audit: BPF prog-id=87 op=LOAD Jan 14 01:11:44.276000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:11:44.283207 kernel: audit: type=1334 audit(1768353104.276:328): prog-id=87 op=LOAD Jan 14 01:11:44.283258 kernel: audit: type=1334 audit(1768353104.276:329): prog-id=75 op=UNLOAD Jan 14 01:11:44.276000 audit: BPF prog-id=88 op=LOAD Jan 14 01:11:44.288997 kernel: audit: type=1334 audit(1768353104.276:330): prog-id=88 op=LOAD Jan 14 01:11:44.276000 audit: BPF prog-id=89 op=LOAD Jan 14 01:11:44.276000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:11:44.291804 kernel: audit: type=1334 audit(1768353104.276:331): prog-id=89 op=LOAD Jan 14 01:11:44.291846 kernel: audit: type=1334 audit(1768353104.276:332): prog-id=76 op=UNLOAD Jan 14 01:11:44.276000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:11:44.293449 kernel: audit: type=1334 audit(1768353104.276:333): prog-id=77 op=UNLOAD Jan 14 01:11:44.277000 audit: BPF prog-id=90 op=LOAD Jan 14 01:11:44.294844 kernel: audit: type=1334 audit(1768353104.277:334): prog-id=90 op=LOAD Jan 14 01:11:44.277000 audit: BPF prog-id=91 op=LOAD Jan 14 01:11:44.296319 kernel: audit: type=1334 audit(1768353104.277:335): prog-id=91 op=LOAD Jan 14 01:11:44.277000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:11:44.297752 kernel: audit: type=1334 audit(1768353104.277:336): prog-id=79 op=UNLOAD Jan 14 01:11:44.277000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:11:44.279000 audit: BPF prog-id=92 op=LOAD Jan 14 01:11:44.279000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:11:44.279000 audit: BPF prog-id=93 op=LOAD Jan 14 01:11:44.279000 audit: BPF prog-id=94 op=LOAD Jan 14 
01:11:44.279000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:11:44.279000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:11:44.280000 audit: BPF prog-id=95 op=LOAD Jan 14 01:11:44.280000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:11:44.280000 audit: BPF prog-id=96 op=LOAD Jan 14 01:11:44.280000 audit: BPF prog-id=97 op=LOAD Jan 14 01:11:44.280000 audit: BPF prog-id=85 op=UNLOAD Jan 14 01:11:44.280000 audit: BPF prog-id=86 op=UNLOAD Jan 14 01:11:44.281000 audit: BPF prog-id=98 op=LOAD Jan 14 01:11:44.281000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:11:44.281000 audit: BPF prog-id=99 op=LOAD Jan 14 01:11:44.281000 audit: BPF prog-id=100 op=LOAD Jan 14 01:11:44.281000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:11:44.281000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:11:44.281000 audit: BPF prog-id=101 op=LOAD Jan 14 01:11:44.281000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:11:44.282000 audit: BPF prog-id=102 op=LOAD Jan 14 01:11:44.282000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:11:44.283000 audit: BPF prog-id=103 op=LOAD Jan 14 01:11:44.283000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:11:44.285000 audit: BPF prog-id=104 op=LOAD Jan 14 01:11:44.285000 audit: BPF prog-id=81 op=UNLOAD Jan 14 01:11:44.285000 audit: BPF prog-id=105 op=LOAD Jan 14 01:11:44.285000 audit: BPF prog-id=106 op=LOAD Jan 14 01:11:44.285000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:11:44.285000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:11:44.824218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:44.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:11:44.834211 (kubelet)[3620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:11:44.870267 kubelet[3620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:11:44.870267 kubelet[3620]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:11:44.870267 kubelet[3620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:11:44.870541 kubelet[3620]: I0114 01:11:44.870332 3620 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:11:45.332007 kubelet[3620]: I0114 01:11:45.330285 3620 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:11:45.332007 kubelet[3620]: I0114 01:11:45.330314 3620 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:11:45.332007 kubelet[3620]: I0114 01:11:45.330728 3620 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:11:45.361096 kubelet[3620]: I0114 01:11:45.360917 3620 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:11:45.361589 kubelet[3620]: E0114 01:11:45.361567 3620 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
10.200.4.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:11:45.368738 kubelet[3620]: I0114 01:11:45.368572 3620 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:11:45.372629 kubelet[3620]: I0114 01:11:45.372607 3620 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 01:11:45.372855 kubelet[3620]: I0114 01:11:45.372817 3620 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:11:45.373007 kubelet[3620]: I0114 01:11:45.372852 3620 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-4dd79cf71d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerSc
ope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:11:45.373140 kubelet[3620]: I0114 01:11:45.373012 3620 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:11:45.373140 kubelet[3620]: I0114 01:11:45.373022 3620 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:11:45.373140 kubelet[3620]: I0114 01:11:45.373125 3620 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:11:45.376528 kubelet[3620]: I0114 01:11:45.376306 3620 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:11:45.376528 kubelet[3620]: I0114 01:11:45.376339 3620 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:11:45.376528 kubelet[3620]: I0114 01:11:45.376367 3620 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:11:45.376528 kubelet[3620]: I0114 01:11:45.376381 3620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:11:45.384251 kubelet[3620]: E0114 01:11:45.384225 3620 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-4dd79cf71d&limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:11:45.388180 kubelet[3620]: E0114 01:11:45.388131 3620 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 
01:11:45.388313 kubelet[3620]: I0114 01:11:45.388260 3620 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:11:45.389014 kubelet[3620]: I0114 01:11:45.388659 3620 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:11:45.389633 kubelet[3620]: W0114 01:11:45.389610 3620 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 01:11:45.391521 kubelet[3620]: I0114 01:11:45.391501 3620 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:11:45.391584 kubelet[3620]: I0114 01:11:45.391548 3620 server.go:1289] "Started kubelet" Jan 14 01:11:45.392074 kubelet[3620]: I0114 01:11:45.392032 3620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:11:45.392888 kubelet[3620]: I0114 01:11:45.392466 3620 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:11:45.392888 kubelet[3620]: I0114 01:11:45.392467 3620 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:11:45.394326 kubelet[3620]: I0114 01:11:45.394309 3620 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:11:45.397883 kubelet[3620]: I0114 01:11:45.397865 3620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:11:45.402896 kubelet[3620]: E0114 01:11:45.401143 3620 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4578.0.0-p-4dd79cf71d.188a73c7bc52031e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578.0.0-p-4dd79cf71d,UID:ci-4578.0.0-p-4dd79cf71d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4578.0.0-p-4dd79cf71d,},FirstTimestamp:2026-01-14 01:11:45.391518494 +0000 UTC m=+0.553845199,LastTimestamp:2026-01-14 01:11:45.391518494 +0000 UTC m=+0.553845199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578.0.0-p-4dd79cf71d,}" Jan 14 01:11:45.403626 kubelet[3620]: I0114 01:11:45.403605 3620 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:11:45.406238 kubelet[3620]: E0114 01:11:45.406220 3620 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" Jan 14 01:11:45.407274 kubelet[3620]: I0114 01:11:45.407186 3620 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:11:45.407480 kubelet[3620]: I0114 01:11:45.407469 3620 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:11:45.407586 kubelet[3620]: I0114 01:11:45.407579 3620 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:11:45.408000 audit[3635]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:45.408000 audit[3635]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe186aad80 a2=0 a3=0 items=0 ppid=3620 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.408000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:11:45.410079 kubelet[3620]: E0114 01:11:45.410061 3620 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:11:45.410240 kubelet[3620]: E0114 01:11:45.410213 3620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-4dd79cf71d?timeout=10s\": dial tcp 10.200.4.37:6443: connect: connection refused" interval="200ms" Jan 14 01:11:45.410000 audit[3636]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:45.410000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb9b7fdc0 a2=0 a3=0 items=0 ppid=3620 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:11:45.412579 kubelet[3620]: I0114 01:11:45.412180 3620 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:11:45.412579 kubelet[3620]: I0114 01:11:45.412248 3620 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:11:45.414846 kubelet[3620]: I0114 01:11:45.414832 3620 factory.go:223] Registration of the 
containerd container factory successfully Jan 14 01:11:45.414000 audit[3638]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:45.414000 audit[3638]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcd34311f0 a2=0 a3=0 items=0 ppid=3620 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.414000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:11:45.416000 audit[3640]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:45.416000 audit[3640]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffd3c82810 a2=0 a3=0 items=0 ppid=3620 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:11:45.434377 kubelet[3620]: E0114 01:11:45.434352 3620 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:11:45.439846 kubelet[3620]: I0114 01:11:45.439829 3620 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:11:45.439846 kubelet[3620]: I0114 01:11:45.439841 3620 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:11:45.439950 kubelet[3620]: I0114 01:11:45.439855 3620 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:11:45.507285 kubelet[3620]: E0114 01:11:45.507258 3620 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" Jan 14 01:11:45.533000 audit[3646]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:45.533000 audit[3646]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd4537b990 a2=0 a3=0 items=0 ppid=3620 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.533000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:11:45.534000 audit[3648]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3648 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:45.535000 audit[3647]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:45.535000 audit[3647]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff1aec350 a2=0 a3=0 items=0 ppid=3620 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.535000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:11:45.534000 audit[3648]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcfe79f470 a2=0 a3=0 items=0 ppid=3620 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.534000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:11:45.537000 audit[3651]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:45.537000 audit[3651]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef9e96750 a2=0 a3=0 items=0 ppid=3620 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:11:45.537000 audit[3652]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3652 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:45.537000 audit[3652]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffebc1be00 a2=0 a3=0 items=0 ppid=3620 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.537000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:11:45.538000 audit[3653]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3653 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:45.538000 audit[3653]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeafb32a10 a2=0 a3=0 items=0 ppid=3620 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.538000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:11:45.538000 audit[3654]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3654 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:45.538000 audit[3654]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc86e072f0 a2=0 a3=0 items=0 ppid=3620 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:45.538000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:11:45.539000 audit[3655]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3655 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:45.539000 audit[3655]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6e74c670 a2=0 a3=0 items=0 ppid=3620 pid=3655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:11:45.539000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:11:45.566449 kubelet[3620]: I0114 01:11:45.534521 3620 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:11:45.566449 kubelet[3620]: I0114 01:11:45.536650 3620 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 01:11:45.566449 kubelet[3620]: I0114 01:11:45.536669 3620 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:11:45.566449 kubelet[3620]: I0114 01:11:45.536690 3620 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:11:45.566449 kubelet[3620]: I0114 01:11:45.536697 3620 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:11:45.566449 kubelet[3620]: E0114 01:11:45.536746 3620 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:11:45.566449 kubelet[3620]: E0114 01:11:45.537497 3620 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:11:45.608321 kubelet[3620]: E0114 01:11:45.608229 3620 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" Jan 14 01:11:45.610737 kubelet[3620]: E0114 01:11:45.610711 3620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-4dd79cf71d?timeout=10s\": dial tcp 10.200.4.37:6443: connect: 
connection refused" interval="400ms" Jan 14 01:11:45.637774 kubelet[3620]: E0114 01:11:45.637748 3620 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 01:11:45.655303 kubelet[3620]: I0114 01:11:45.655279 3620 policy_none.go:49] "None policy: Start" Jan 14 01:11:45.655303 kubelet[3620]: I0114 01:11:45.655301 3620 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:11:45.655383 kubelet[3620]: I0114 01:11:45.655313 3620 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:11:45.666734 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 01:11:45.674084 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 01:11:45.676729 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 01:11:45.681543 kubelet[3620]: E0114 01:11:45.681525 3620 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:11:45.681702 kubelet[3620]: I0114 01:11:45.681692 3620 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:11:45.681737 kubelet[3620]: I0114 01:11:45.681706 3620 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:11:45.682482 kubelet[3620]: I0114 01:11:45.682223 3620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:11:45.684352 kubelet[3620]: E0114 01:11:45.684337 3620 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 01:11:45.684618 kubelet[3620]: E0114 01:11:45.684602 3620 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4578.0.0-p-4dd79cf71d\" not found" Jan 14 01:11:45.784145 kubelet[3620]: I0114 01:11:45.784110 3620 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:45.784470 kubelet[3620]: E0114 01:11:45.784450 3620 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.37:6443/api/v1/nodes\": dial tcp 10.200.4.37:6443: connect: connection refused" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:45.910231 kubelet[3620]: I0114 01:11:45.910177 3620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e522e5ad6dd5675079c5f93edc6caa2b-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-4dd79cf71d\" (UID: \"e522e5ad6dd5675079c5f93edc6caa2b\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:45.910231 kubelet[3620]: I0114 01:11:45.910234 3620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e522e5ad6dd5675079c5f93edc6caa2b-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-4dd79cf71d\" (UID: \"e522e5ad6dd5675079c5f93edc6caa2b\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:45.910606 kubelet[3620]: I0114 01:11:45.910255 3620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e522e5ad6dd5675079c5f93edc6caa2b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578.0.0-p-4dd79cf71d\" (UID: \"e522e5ad6dd5675079c5f93edc6caa2b\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:45.923062 systemd[1]: Created 
slice kubepods-burstable-pode522e5ad6dd5675079c5f93edc6caa2b.slice - libcontainer container kubepods-burstable-pode522e5ad6dd5675079c5f93edc6caa2b.slice. Jan 14 01:11:45.930486 kubelet[3620]: E0114 01:11:45.930460 3620 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:45.986915 kubelet[3620]: I0114 01:11:45.986888 3620 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:45.987247 kubelet[3620]: E0114 01:11:45.987228 3620 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.37:6443/api/v1/nodes\": dial tcp 10.200.4.37:6443: connect: connection refused" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.010464 kubelet[3620]: I0114 01:11:46.010363 3620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.010464 kubelet[3620]: I0114 01:11:46.010396 3620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.010464 kubelet[3620]: I0114 01:11:46.010416 3620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.010587 kubelet[3620]: I0114 01:11:46.010539 3620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.010587 kubelet[3620]: I0114 01:11:46.010560 3620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.011088 kubelet[3620]: E0114 01:11:46.011029 3620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-4dd79cf71d?timeout=10s\": dial tcp 10.200.4.37:6443: connect: connection refused" interval="800ms" Jan 14 01:11:46.016274 systemd[1]: Created slice kubepods-burstable-pod86b39a24d146fe18d81a149ddb6c341e.slice - libcontainer container kubepods-burstable-pod86b39a24d146fe18d81a149ddb6c341e.slice. 
Jan 14 01:11:46.017886 kubelet[3620]: E0114 01:11:46.017866 3620 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.111338 kubelet[3620]: I0114 01:11:46.111181 3620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6707ed61df7304bbe62fa06ed8dd20ae-kubeconfig\") pod \"kube-scheduler-ci-4578.0.0-p-4dd79cf71d\" (UID: \"6707ed61df7304bbe62fa06ed8dd20ae\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.183537 systemd[1]: Created slice kubepods-burstable-pod6707ed61df7304bbe62fa06ed8dd20ae.slice - libcontainer container kubepods-burstable-pod6707ed61df7304bbe62fa06ed8dd20ae.slice. Jan 14 01:11:46.185599 kubelet[3620]: E0114 01:11:46.185572 3620 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.231807 containerd[2498]: time="2026-01-14T01:11:46.231768904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-4dd79cf71d,Uid:e522e5ad6dd5675079c5f93edc6caa2b,Namespace:kube-system,Attempt:0,}" Jan 14 01:11:46.319663 containerd[2498]: time="2026-01-14T01:11:46.319620503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-4dd79cf71d,Uid:86b39a24d146fe18d81a149ddb6c341e,Namespace:kube-system,Attempt:0,}" Jan 14 01:11:46.380285 kubelet[3620]: E0114 01:11:46.380251 3620 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-4dd79cf71d&limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:11:46.388904 kubelet[3620]: I0114 01:11:46.388881 3620 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.389166 kubelet[3620]: E0114 01:11:46.389141 3620 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.37:6443/api/v1/nodes\": dial tcp 10.200.4.37:6443: connect: connection refused" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:46.469303 kubelet[3620]: E0114 01:11:46.469221 3620 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:11:46.486943 containerd[2498]: time="2026-01-14T01:11:46.486907409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-4dd79cf71d,Uid:6707ed61df7304bbe62fa06ed8dd20ae,Namespace:kube-system,Attempt:0,}" Jan 14 01:11:46.553916 containerd[2498]: time="2026-01-14T01:11:46.553849018Z" level=info msg="connecting to shim 8ec1c800d5115b64dbc6dd69501b8b0bfc986a2f40f20d4a5ae552df59f76708" address="unix:///run/containerd/s/45a6e9cf896f4e8260fe2bd57101dd2a3cdb7a46efcc7b17e89f2f770005ae99" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:46.554760 containerd[2498]: time="2026-01-14T01:11:46.554713676Z" level=info msg="connecting to shim 8408d3bee77408b6e2076685634564c84c3386e5d56b620af92b4cb8d522c1a0" address="unix:///run/containerd/s/6b11e38c010a3e9d2c23f703dfd081dc2d7e4e619a26e8d781b8f9c99d06c7cd" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:46.586163 systemd[1]: Started cri-containerd-8408d3bee77408b6e2076685634564c84c3386e5d56b620af92b4cb8d522c1a0.scope - libcontainer container 
8408d3bee77408b6e2076685634564c84c3386e5d56b620af92b4cb8d522c1a0. Jan 14 01:11:46.587864 systemd[1]: Started cri-containerd-8ec1c800d5115b64dbc6dd69501b8b0bfc986a2f40f20d4a5ae552df59f76708.scope - libcontainer container 8ec1c800d5115b64dbc6dd69501b8b0bfc986a2f40f20d4a5ae552df59f76708. Jan 14 01:11:46.601000 audit: BPF prog-id=107 op=LOAD Jan 14 01:11:46.601000 audit: BPF prog-id=108 op=LOAD Jan 14 01:11:46.601000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3680 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303864336265653737343038623665323037363638353633343536 Jan 14 01:11:46.602000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:11:46.602000 audit[3698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3680 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303864336265653737343038623665323037363638353633343536 Jan 14 01:11:46.602000 audit: BPF prog-id=109 op=LOAD Jan 14 01:11:46.602000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3680 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303864336265653737343038623665323037363638353633343536 Jan 14 01:11:46.602000 audit: BPF prog-id=110 op=LOAD Jan 14 01:11:46.602000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3680 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303864336265653737343038623665323037363638353633343536 Jan 14 01:11:46.603000 audit: BPF prog-id=110 op=UNLOAD Jan 14 01:11:46.603000 audit[3698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3680 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303864336265653737343038623665323037363638353633343536 Jan 14 01:11:46.603000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:11:46.603000 audit[3698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3680 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303864336265653737343038623665323037363638353633343536 Jan 14 01:11:46.603000 audit: BPF prog-id=111 op=LOAD Jan 14 01:11:46.603000 audit[3698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3680 pid=3698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834303864336265653737343038623665323037363638353633343536 Jan 14 01:11:46.604000 audit: BPF prog-id=112 op=LOAD Jan 14 01:11:46.605000 audit: BPF prog-id=113 op=LOAD Jan 14 01:11:46.605000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3668 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865633163383030643531313562363464626336646436393530316238 Jan 14 01:11:46.605000 audit: BPF prog-id=113 op=UNLOAD Jan 14 01:11:46.605000 audit[3694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 
a3=0 items=0 ppid=3668 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865633163383030643531313562363464626336646436393530316238 Jan 14 01:11:46.605000 audit: BPF prog-id=114 op=LOAD Jan 14 01:11:46.605000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3668 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865633163383030643531313562363464626336646436393530316238 Jan 14 01:11:46.605000 audit: BPF prog-id=115 op=LOAD Jan 14 01:11:46.605000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3668 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865633163383030643531313562363464626336646436393530316238 Jan 14 01:11:46.606000 audit: BPF prog-id=115 op=UNLOAD Jan 14 01:11:46.606000 audit[3694]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3668 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865633163383030643531313562363464626336646436393530316238 Jan 14 01:11:46.606000 audit: BPF prog-id=114 op=UNLOAD Jan 14 01:11:46.606000 audit[3694]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3668 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865633163383030643531313562363464626336646436393530316238 Jan 14 01:11:46.606000 audit: BPF prog-id=116 op=LOAD Jan 14 01:11:46.606000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3668 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865633163383030643531313562363464626336646436393530316238 Jan 14 01:11:46.613009 containerd[2498]: time="2026-01-14T01:11:46.612704937Z" 
level=info msg="connecting to shim 0c5af01a6e4e8bc75cdd484dcfbf0352163f3e224df5b51264265478ae93bf96" address="unix:///run/containerd/s/5191682bd24a2e68267025dd034d7b1ca84c17527e1a65acbaf906b9b0127857" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:46.645152 systemd[1]: Started cri-containerd-0c5af01a6e4e8bc75cdd484dcfbf0352163f3e224df5b51264265478ae93bf96.scope - libcontainer container 0c5af01a6e4e8bc75cdd484dcfbf0352163f3e224df5b51264265478ae93bf96. Jan 14 01:11:46.672359 containerd[2498]: time="2026-01-14T01:11:46.672335205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-4dd79cf71d,Uid:86b39a24d146fe18d81a149ddb6c341e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ec1c800d5115b64dbc6dd69501b8b0bfc986a2f40f20d4a5ae552df59f76708\"" Jan 14 01:11:46.676268 containerd[2498]: time="2026-01-14T01:11:46.676244495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-4dd79cf71d,Uid:e522e5ad6dd5675079c5f93edc6caa2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8408d3bee77408b6e2076685634564c84c3386e5d56b620af92b4cb8d522c1a0\"" Jan 14 01:11:46.676000 audit: BPF prog-id=117 op=LOAD Jan 14 01:11:46.676000 audit: BPF prog-id=118 op=LOAD Jan 14 01:11:46.676000 audit[3754]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3742 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356166303161366534653862633735636464343834646366626630 Jan 14 01:11:46.676000 audit: BPF prog-id=118 op=UNLOAD Jan 14 01:11:46.676000 audit[3754]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356166303161366534653862633735636464343834646366626630 Jan 14 01:11:46.676000 audit: BPF prog-id=119 op=LOAD Jan 14 01:11:46.676000 audit[3754]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3742 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356166303161366534653862633735636464343834646366626630 Jan 14 01:11:46.676000 audit: BPF prog-id=120 op=LOAD Jan 14 01:11:46.676000 audit[3754]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3742 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356166303161366534653862633735636464343834646366626630 Jan 14 01:11:46.676000 audit: BPF prog-id=120 op=UNLOAD Jan 14 
01:11:46.676000 audit[3754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356166303161366534653862633735636464343834646366626630 Jan 14 01:11:46.676000 audit: BPF prog-id=119 op=UNLOAD Jan 14 01:11:46.676000 audit[3754]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356166303161366534653862633735636464343834646366626630 Jan 14 01:11:46.677000 audit: BPF prog-id=121 op=LOAD Jan 14 01:11:46.677000 audit[3754]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3742 pid=3754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063356166303161366534653862633735636464343834646366626630 Jan 14 01:11:46.681491 
containerd[2498]: time="2026-01-14T01:11:46.680664170Z" level=info msg="CreateContainer within sandbox \"8ec1c800d5115b64dbc6dd69501b8b0bfc986a2f40f20d4a5ae552df59f76708\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:11:46.686563 containerd[2498]: time="2026-01-14T01:11:46.685523299Z" level=info msg="CreateContainer within sandbox \"8408d3bee77408b6e2076685634564c84c3386e5d56b620af92b4cb8d522c1a0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:11:46.715654 containerd[2498]: time="2026-01-14T01:11:46.715628560Z" level=info msg="Container 0b6e46d31b3e60f79ab83f3102f1d0106d5498faab3fe33a4c4639a41d2e2f51: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:46.718642 containerd[2498]: time="2026-01-14T01:11:46.718618477Z" level=info msg="Container 5aaf7ad6ba8a0586847a9de9e07b6c70ddac9d649243acadbfddf0ba7221dfb6: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:46.722504 containerd[2498]: time="2026-01-14T01:11:46.722412422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-4dd79cf71d,Uid:6707ed61df7304bbe62fa06ed8dd20ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c5af01a6e4e8bc75cdd484dcfbf0352163f3e224df5b51264265478ae93bf96\"" Jan 14 01:11:46.731323 containerd[2498]: time="2026-01-14T01:11:46.731295092Z" level=info msg="CreateContainer within sandbox \"0c5af01a6e4e8bc75cdd484dcfbf0352163f3e224df5b51264265478ae93bf96\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:11:46.735811 kubelet[3620]: E0114 01:11:46.735784 3620 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:11:46.739086 containerd[2498]: 
time="2026-01-14T01:11:46.739061540Z" level=info msg="CreateContainer within sandbox \"8408d3bee77408b6e2076685634564c84c3386e5d56b620af92b4cb8d522c1a0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0b6e46d31b3e60f79ab83f3102f1d0106d5498faab3fe33a4c4639a41d2e2f51\"" Jan 14 01:11:46.739507 containerd[2498]: time="2026-01-14T01:11:46.739486272Z" level=info msg="StartContainer for \"0b6e46d31b3e60f79ab83f3102f1d0106d5498faab3fe33a4c4639a41d2e2f51\"" Jan 14 01:11:46.740323 containerd[2498]: time="2026-01-14T01:11:46.740299660Z" level=info msg="connecting to shim 0b6e46d31b3e60f79ab83f3102f1d0106d5498faab3fe33a4c4639a41d2e2f51" address="unix:///run/containerd/s/6b11e38c010a3e9d2c23f703dfd081dc2d7e4e619a26e8d781b8f9c99d06c7cd" protocol=ttrpc version=3 Jan 14 01:11:46.753995 containerd[2498]: time="2026-01-14T01:11:46.753274173Z" level=info msg="CreateContainer within sandbox \"8ec1c800d5115b64dbc6dd69501b8b0bfc986a2f40f20d4a5ae552df59f76708\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5aaf7ad6ba8a0586847a9de9e07b6c70ddac9d649243acadbfddf0ba7221dfb6\"" Jan 14 01:11:46.753470 systemd[1]: Started cri-containerd-0b6e46d31b3e60f79ab83f3102f1d0106d5498faab3fe33a4c4639a41d2e2f51.scope - libcontainer container 0b6e46d31b3e60f79ab83f3102f1d0106d5498faab3fe33a4c4639a41d2e2f51. 
Jan 14 01:11:46.755883 containerd[2498]: time="2026-01-14T01:11:46.755853158Z" level=info msg="StartContainer for \"5aaf7ad6ba8a0586847a9de9e07b6c70ddac9d649243acadbfddf0ba7221dfb6\"" Jan 14 01:11:46.757779 containerd[2498]: time="2026-01-14T01:11:46.757748159Z" level=info msg="connecting to shim 5aaf7ad6ba8a0586847a9de9e07b6c70ddac9d649243acadbfddf0ba7221dfb6" address="unix:///run/containerd/s/45a6e9cf896f4e8260fe2bd57101dd2a3cdb7a46efcc7b17e89f2f770005ae99" protocol=ttrpc version=3 Jan 14 01:11:46.770000 audit: BPF prog-id=122 op=LOAD Jan 14 01:11:46.770000 audit: BPF prog-id=123 op=LOAD Jan 14 01:11:46.770000 audit[3792]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3680 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366534366433316233653630663739616238336633313032663164 Jan 14 01:11:46.770000 audit: BPF prog-id=123 op=UNLOAD Jan 14 01:11:46.770000 audit[3792]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3680 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366534366433316233653630663739616238336633313032663164 Jan 14 01:11:46.770000 audit: BPF prog-id=124 op=LOAD Jan 14 01:11:46.770000 audit[3792]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3680 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366534366433316233653630663739616238336633313032663164 Jan 14 01:11:46.770000 audit: BPF prog-id=125 op=LOAD Jan 14 01:11:46.770000 audit[3792]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3680 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366534366433316233653630663739616238336633313032663164 Jan 14 01:11:46.770000 audit: BPF prog-id=125 op=UNLOAD Jan 14 01:11:46.770000 audit[3792]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3680 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366534366433316233653630663739616238336633313032663164 Jan 14 01:11:46.770000 audit: BPF prog-id=124 op=UNLOAD 
Jan 14 01:11:46.770000 audit[3792]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3680 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366534366433316233653630663739616238336633313032663164 Jan 14 01:11:46.770000 audit: BPF prog-id=126 op=LOAD Jan 14 01:11:46.770000 audit[3792]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3680 pid=3792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062366534366433316233653630663739616238336633313032663164 Jan 14 01:11:46.779282 containerd[2498]: time="2026-01-14T01:11:46.779256093Z" level=info msg="Container e63922a4e7a1a30b172eff3125cb00a3d192272e84f340d5e74f8c8fbb9fd0bb: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:46.783265 systemd[1]: Started cri-containerd-5aaf7ad6ba8a0586847a9de9e07b6c70ddac9d649243acadbfddf0ba7221dfb6.scope - libcontainer container 5aaf7ad6ba8a0586847a9de9e07b6c70ddac9d649243acadbfddf0ba7221dfb6. 
Jan 14 01:11:46.796000 audit: BPF prog-id=127 op=LOAD Jan 14 01:11:46.799000 audit: BPF prog-id=128 op=LOAD Jan 14 01:11:46.799000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3668 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616637616436626138613035383638343761396465396530376236 Jan 14 01:11:46.799000 audit: BPF prog-id=128 op=UNLOAD Jan 14 01:11:46.799000 audit[3813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3668 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616637616436626138613035383638343761396465396530376236 Jan 14 01:11:46.799000 audit: BPF prog-id=129 op=LOAD Jan 14 01:11:46.799000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3668 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.799000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616637616436626138613035383638343761396465396530376236 Jan 14 01:11:46.799000 audit: BPF prog-id=130 op=LOAD Jan 14 01:11:46.799000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3668 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616637616436626138613035383638343761396465396530376236 Jan 14 01:11:46.799000 audit: BPF prog-id=130 op=UNLOAD Jan 14 01:11:46.799000 audit[3813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3668 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616637616436626138613035383638343761396465396530376236 Jan 14 01:11:46.799000 audit: BPF prog-id=129 op=UNLOAD Jan 14 01:11:46.799000 audit[3813]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3668 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:11:46.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616637616436626138613035383638343761396465396530376236 Jan 14 01:11:46.799000 audit: BPF prog-id=131 op=LOAD Jan 14 01:11:46.799000 audit[3813]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3668 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561616637616436626138613035383638343761396465396530376236 Jan 14 01:11:46.803163 containerd[2498]: time="2026-01-14T01:11:46.803097120Z" level=info msg="CreateContainer within sandbox \"0c5af01a6e4e8bc75cdd484dcfbf0352163f3e224df5b51264265478ae93bf96\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e63922a4e7a1a30b172eff3125cb00a3d192272e84f340d5e74f8c8fbb9fd0bb\"" Jan 14 01:11:46.803640 containerd[2498]: time="2026-01-14T01:11:46.803541354Z" level=info msg="StartContainer for \"e63922a4e7a1a30b172eff3125cb00a3d192272e84f340d5e74f8c8fbb9fd0bb\"" Jan 14 01:11:46.804697 containerd[2498]: time="2026-01-14T01:11:46.804666298Z" level=info msg="connecting to shim e63922a4e7a1a30b172eff3125cb00a3d192272e84f340d5e74f8c8fbb9fd0bb" address="unix:///run/containerd/s/5191682bd24a2e68267025dd034d7b1ca84c17527e1a65acbaf906b9b0127857" protocol=ttrpc version=3 Jan 14 01:11:46.813999 kubelet[3620]: E0114 01:11:46.811939 3620 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.4.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-4dd79cf71d?timeout=10s\": dial tcp 10.200.4.37:6443: connect: connection refused" interval="1.6s" Jan 14 01:11:46.827315 systemd[1]: Started cri-containerd-e63922a4e7a1a30b172eff3125cb00a3d192272e84f340d5e74f8c8fbb9fd0bb.scope - libcontainer container e63922a4e7a1a30b172eff3125cb00a3d192272e84f340d5e74f8c8fbb9fd0bb. Jan 14 01:11:46.833793 containerd[2498]: time="2026-01-14T01:11:46.833304494Z" level=info msg="StartContainer for \"0b6e46d31b3e60f79ab83f3102f1d0106d5498faab3fe33a4c4639a41d2e2f51\" returns successfully" Jan 14 01:11:46.847000 audit: BPF prog-id=132 op=LOAD Jan 14 01:11:46.847000 audit: BPF prog-id=133 op=LOAD Jan 14 01:11:46.847000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3742 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333932326134653761316133306231373265666633313235636230 Jan 14 01:11:46.849000 audit: BPF prog-id=133 op=UNLOAD Jan 14 01:11:46.849000 audit[3834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.849000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333932326134653761316133306231373265666633313235636230 Jan 14 01:11:46.849000 audit: BPF prog-id=134 op=LOAD Jan 14 01:11:46.849000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3742 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333932326134653761316133306231373265666633313235636230 Jan 14 01:11:46.849000 audit: BPF prog-id=135 op=LOAD Jan 14 01:11:46.849000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3742 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333932326134653761316133306231373265666633313235636230 Jan 14 01:11:46.849000 audit: BPF prog-id=135 op=UNLOAD Jan 14 01:11:46.849000 audit[3834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:11:46.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333932326134653761316133306231373265666633313235636230 Jan 14 01:11:46.849000 audit: BPF prog-id=134 op=UNLOAD Jan 14 01:11:46.849000 audit[3834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333932326134653761316133306231373265666633313235636230 Jan 14 01:11:46.850000 audit: BPF prog-id=136 op=LOAD Jan 14 01:11:46.850000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3742 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:46.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536333932326134653761316133306231373265666633313235636230 Jan 14 01:11:46.865650 containerd[2498]: time="2026-01-14T01:11:46.865619673Z" level=info msg="StartContainer for \"5aaf7ad6ba8a0586847a9de9e07b6c70ddac9d649243acadbfddf0ba7221dfb6\" returns successfully" Jan 14 01:11:46.909521 containerd[2498]: time="2026-01-14T01:11:46.909473239Z" level=info msg="StartContainer for 
\"e63922a4e7a1a30b172eff3125cb00a3d192272e84f340d5e74f8c8fbb9fd0bb\" returns successfully" Jan 14 01:11:47.192252 kubelet[3620]: I0114 01:11:47.192201 3620 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:47.559103 kubelet[3620]: E0114 01:11:47.558839 3620 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:47.565398 kubelet[3620]: E0114 01:11:47.565018 3620 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:47.566391 kubelet[3620]: E0114 01:11:47.566372 3620 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:48.570352 kubelet[3620]: E0114 01:11:48.569943 3620 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:48.570352 kubelet[3620]: E0114 01:11:48.570262 3620 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-4dd79cf71d\" not found" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:48.871696 kubelet[3620]: E0114 01:11:48.871652 3620 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4578.0.0-p-4dd79cf71d\" not found" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:48.966927 kubelet[3620]: E0114 01:11:48.966720 3620 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4578.0.0-p-4dd79cf71d.188a73c7bc52031e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578.0.0-p-4dd79cf71d,UID:ci-4578.0.0-p-4dd79cf71d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4578.0.0-p-4dd79cf71d,},FirstTimestamp:2026-01-14 01:11:45.391518494 +0000 UTC m=+0.553845199,LastTimestamp:2026-01-14 01:11:45.391518494 +0000 UTC m=+0.553845199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578.0.0-p-4dd79cf71d,}" Jan 14 01:11:49.014068 kubelet[3620]: I0114 01:11:49.014014 3620 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:49.107881 kubelet[3620]: I0114 01:11:49.107659 3620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:49.124361 kubelet[3620]: E0114 01:11:49.123601 3620 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-4dd79cf71d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:49.124361 kubelet[3620]: I0114 01:11:49.124009 3620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:49.126366 kubelet[3620]: E0114 01:11:49.126223 3620 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4578.0.0-p-4dd79cf71d.188a73c7bedf6784 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578.0.0-p-4dd79cf71d,UID:ci-4578.0.0-p-4dd79cf71d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:ci-4578.0.0-p-4dd79cf71d,},FirstTimestamp:2026-01-14 01:11:45.434339204 +0000 UTC m=+0.596665904,LastTimestamp:2026-01-14 01:11:45.434339204 +0000 UTC m=+0.596665904,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578.0.0-p-4dd79cf71d,}" Jan 14 01:11:49.129059 kubelet[3620]: E0114 01:11:49.129033 3620 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:49.129874 kubelet[3620]: I0114 01:11:49.129784 3620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:49.133133 kubelet[3620]: E0114 01:11:49.133107 3620 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578.0.0-p-4dd79cf71d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:49.387068 kubelet[3620]: I0114 01:11:49.386259 3620 apiserver.go:52] "Watching apiserver" Jan 14 01:11:49.408534 kubelet[3620]: I0114 01:11:49.408506 3620 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:11:50.603341 kubelet[3620]: I0114 01:11:50.603310 3620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:50.618487 kubelet[3620]: I0114 01:11:50.618243 3620 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:53.091066 kubelet[3620]: I0114 01:11:53.091028 3620 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:53.121002 kubelet[3620]: I0114 01:11:53.120740 3620 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:53.276370 systemd[1]: Reload requested from client PID 3897 ('systemctl') (unit session-10.scope)... Jan 14 01:11:53.276386 systemd[1]: Reloading... Jan 14 01:11:53.362035 zram_generator::config[3943]: No configuration found. Jan 14 01:11:53.582509 systemd[1]: Reloading finished in 305 ms. Jan 14 01:11:53.611107 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:11:53.627890 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:11:53.628173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:53.633324 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 14 01:11:53.633385 kernel: audit: type=1131 audit(1768353113.626:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:53.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:53.628234 systemd[1]: kubelet.service: Consumed 916ms CPU time, 131.1M memory peak. Jan 14 01:11:53.633198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:11:53.637070 kernel: audit: type=1334 audit(1768353113.632:430): prog-id=137 op=LOAD Jan 14 01:11:53.632000 audit: BPF prog-id=137 op=LOAD Jan 14 01:11:53.632000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:11:53.640363 kernel: audit: type=1334 audit(1768353113.632:431): prog-id=102 op=UNLOAD Jan 14 01:11:53.640439 kernel: audit: type=1334 audit(1768353113.635:432): prog-id=138 op=LOAD Jan 14 01:11:53.635000 audit: BPF prog-id=138 op=LOAD Jan 14 01:11:53.641984 kernel: audit: type=1334 audit(1768353113.635:433): prog-id=103 op=UNLOAD Jan 14 01:11:53.635000 audit: BPF prog-id=103 op=UNLOAD Jan 14 01:11:53.635000 audit: BPF prog-id=139 op=LOAD Jan 14 01:11:53.644943 kernel: audit: type=1334 audit(1768353113.635:434): prog-id=139 op=LOAD Jan 14 01:11:53.645007 kernel: audit: type=1334 audit(1768353113.635:435): prog-id=87 op=UNLOAD Jan 14 01:11:53.635000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:11:53.635000 audit: BPF prog-id=140 op=LOAD Jan 14 01:11:53.647758 kernel: audit: type=1334 audit(1768353113.635:436): prog-id=140 op=LOAD Jan 14 01:11:53.647813 kernel: audit: type=1334 audit(1768353113.635:437): prog-id=141 op=LOAD Jan 14 01:11:53.635000 audit: BPF prog-id=141 op=LOAD Jan 14 01:11:53.635000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:11:53.635000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:11:53.640000 audit: BPF prog-id=142 op=LOAD Jan 14 01:11:53.642000 audit: BPF prog-id=143 op=LOAD Jan 14 01:11:53.648003 kernel: audit: type=1334 audit(1768353113.635:438): prog-id=88 op=UNLOAD Jan 14 01:11:53.642000 audit: BPF prog-id=90 op=UNLOAD Jan 14 01:11:53.642000 audit: BPF prog-id=91 op=UNLOAD Jan 14 01:11:53.646000 audit: BPF prog-id=144 op=LOAD Jan 14 01:11:53.646000 audit: BPF prog-id=101 op=UNLOAD Jan 14 01:11:53.646000 audit: BPF prog-id=145 op=LOAD Jan 14 01:11:53.646000 audit: BPF prog-id=95 op=UNLOAD Jan 14 01:11:53.646000 audit: BPF prog-id=146 op=LOAD Jan 14 01:11:53.646000 audit: BPF prog-id=147 op=LOAD Jan 14 01:11:53.646000 audit: BPF prog-id=96 
op=UNLOAD Jan 14 01:11:53.646000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:11:53.648000 audit: BPF prog-id=148 op=LOAD Jan 14 01:11:53.655000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:11:53.655000 audit: BPF prog-id=149 op=LOAD Jan 14 01:11:53.655000 audit: BPF prog-id=150 op=LOAD Jan 14 01:11:53.655000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:11:53.655000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:11:53.655000 audit: BPF prog-id=151 op=LOAD Jan 14 01:11:53.655000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:11:53.655000 audit: BPF prog-id=152 op=LOAD Jan 14 01:11:53.655000 audit: BPF prog-id=153 op=LOAD Jan 14 01:11:53.655000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:11:53.655000 audit: BPF prog-id=100 op=UNLOAD Jan 14 01:11:53.656000 audit: BPF prog-id=154 op=LOAD Jan 14 01:11:53.656000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:11:53.656000 audit: BPF prog-id=155 op=LOAD Jan 14 01:11:53.656000 audit: BPF prog-id=156 op=LOAD Jan 14 01:11:53.656000 audit: BPF prog-id=105 op=UNLOAD Jan 14 01:11:53.656000 audit: BPF prog-id=106 op=UNLOAD Jan 14 01:11:54.199442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:11:54.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:11:54.208234 (kubelet)[4014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:11:54.248000 kubelet[4014]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:11:54.248217 kubelet[4014]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 14 01:11:54.248217 kubelet[4014]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:11:54.248217 kubelet[4014]: I0114 01:11:54.248085 4014 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:11:54.252514 kubelet[4014]: I0114 01:11:54.252486 4014 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:11:54.252514 kubelet[4014]: I0114 01:11:54.252506 4014 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:11:54.253002 kubelet[4014]: I0114 01:11:54.252889 4014 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:11:54.254876 kubelet[4014]: I0114 01:11:54.254846 4014 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 01:11:54.256856 kubelet[4014]: I0114 01:11:54.256837 4014 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:11:54.265147 kubelet[4014]: I0114 01:11:54.265128 4014 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:11:54.267693 kubelet[4014]: I0114 01:11:54.267636 4014 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:11:54.267807 kubelet[4014]: I0114 01:11:54.267787 4014 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:11:54.267930 kubelet[4014]: I0114 01:11:54.267806 4014 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-4dd79cf71d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:11:54.268042 kubelet[4014]: I0114 01:11:54.267937 4014 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 
01:11:54.268042 kubelet[4014]: I0114 01:11:54.267947 4014 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:11:54.268042 kubelet[4014]: I0114 01:11:54.268013 4014 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:11:54.268155 kubelet[4014]: I0114 01:11:54.268144 4014 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:11:54.268187 kubelet[4014]: I0114 01:11:54.268157 4014 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:11:54.268187 kubelet[4014]: I0114 01:11:54.268179 4014 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:11:54.268239 kubelet[4014]: I0114 01:11:54.268192 4014 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:11:54.271135 kubelet[4014]: I0114 01:11:54.270372 4014 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:11:54.271135 kubelet[4014]: I0114 01:11:54.270899 4014 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:11:54.276726 kubelet[4014]: I0114 01:11:54.276715 4014 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:11:54.276826 kubelet[4014]: I0114 01:11:54.276821 4014 server.go:1289] "Started kubelet" Jan 14 01:11:54.281069 kubelet[4014]: I0114 01:11:54.281057 4014 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:11:54.293180 kubelet[4014]: I0114 01:11:54.293166 4014 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:11:54.294878 kubelet[4014]: I0114 01:11:54.294849 4014 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:11:54.295374 kubelet[4014]: I0114 01:11:54.295361 4014 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:11:54.296402 kubelet[4014]: I0114 01:11:54.295607 4014 server.go:317] "Adding 
debug handlers to kubelet server" Jan 14 01:11:54.298363 kubelet[4014]: I0114 01:11:54.298341 4014 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:11:54.298860 kubelet[4014]: I0114 01:11:54.295636 4014 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:11:54.300269 kubelet[4014]: I0114 01:11:54.299359 4014 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:11:54.300269 kubelet[4014]: I0114 01:11:54.299464 4014 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:11:54.301538 kubelet[4014]: I0114 01:11:54.301519 4014 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:11:54.301622 kubelet[4014]: I0114 01:11:54.301606 4014 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:11:54.302069 kubelet[4014]: E0114 01:11:54.302051 4014 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:11:54.303387 kubelet[4014]: I0114 01:11:54.303361 4014 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:11:54.315947 kubelet[4014]: I0114 01:11:54.315503 4014 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:11:54.316589 kubelet[4014]: I0114 01:11:54.316567 4014 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:11:54.316589 kubelet[4014]: I0114 01:11:54.316588 4014 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:11:54.316684 kubelet[4014]: I0114 01:11:54.316605 4014 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:11:54.316684 kubelet[4014]: I0114 01:11:54.316611 4014 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:11:54.316740 kubelet[4014]: E0114 01:11:54.316644 4014 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:11:54.339924 kubelet[4014]: I0114 01:11:54.339906 4014 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:11:54.340024 kubelet[4014]: I0114 01:11:54.340017 4014 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:11:54.340061 kubelet[4014]: I0114 01:11:54.340058 4014 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:11:54.340172 kubelet[4014]: I0114 01:11:54.340167 4014 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:11:54.340216 kubelet[4014]: I0114 01:11:54.340202 4014 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:11:54.340238 kubelet[4014]: I0114 01:11:54.340235 4014 policy_none.go:49] "None policy: Start" Jan 14 01:11:54.340263 kubelet[4014]: I0114 01:11:54.340260 4014 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:11:54.340289 kubelet[4014]: I0114 01:11:54.340286 4014 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:11:54.340372 kubelet[4014]: I0114 01:11:54.340368 4014 state_mem.go:75] "Updated machine memory state" Jan 14 01:11:54.343351 kubelet[4014]: E0114 01:11:54.343335 4014 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:11:54.343478 kubelet[4014]: I0114 
01:11:54.343467 4014 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:11:54.343522 kubelet[4014]: I0114 01:11:54.343482 4014 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:11:54.343791 kubelet[4014]: I0114 01:11:54.343783 4014 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:11:54.346813 kubelet[4014]: E0114 01:11:54.346796 4014 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:11:54.417317 kubelet[4014]: I0114 01:11:54.417266 4014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.417582 kubelet[4014]: I0114 01:11:54.417265 4014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.417809 kubelet[4014]: I0114 01:11:54.417743 4014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.448620 kubelet[4014]: I0114 01:11:54.448601 4014 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.501194 kubelet[4014]: I0114 01:11:54.501060 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.501194 kubelet[4014]: I0114 01:11:54.501100 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e522e5ad6dd5675079c5f93edc6caa2b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578.0.0-p-4dd79cf71d\" (UID: \"e522e5ad6dd5675079c5f93edc6caa2b\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.501194 kubelet[4014]: I0114 01:11:54.501121 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.501194 kubelet[4014]: I0114 01:11:54.501141 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.501194 kubelet[4014]: I0114 01:11:54.501159 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6707ed61df7304bbe62fa06ed8dd20ae-kubeconfig\") pod \"kube-scheduler-ci-4578.0.0-p-4dd79cf71d\" (UID: \"6707ed61df7304bbe62fa06ed8dd20ae\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.501428 kubelet[4014]: I0114 01:11:54.501174 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e522e5ad6dd5675079c5f93edc6caa2b-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-4dd79cf71d\" (UID: \"e522e5ad6dd5675079c5f93edc6caa2b\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.501428 kubelet[4014]: I0114 01:11:54.501188 4014 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e522e5ad6dd5675079c5f93edc6caa2b-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-4dd79cf71d\" (UID: \"e522e5ad6dd5675079c5f93edc6caa2b\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.501428 kubelet[4014]: I0114 01:11:54.501205 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.501428 kubelet[4014]: I0114 01:11:54.501223 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86b39a24d146fe18d81a149ddb6c341e-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-4dd79cf71d\" (UID: \"86b39a24d146fe18d81a149ddb6c341e\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.521997 kubelet[4014]: I0114 01:11:54.521439 4014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:54.521997 kubelet[4014]: E0114 01:11:54.521493 4014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578.0.0-p-4dd79cf71d\" already exists" pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.521997 kubelet[4014]: I0114 01:11:54.521450 4014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:54.521997 kubelet[4014]: I0114 01:11:54.521616 4014 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:54.521997 kubelet[4014]: E0114 01:11:54.521645 4014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-4dd79cf71d\" already exists" pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.523779 kubelet[4014]: I0114 01:11:54.523762 4014 kubelet_node_status.go:124] "Node was previously registered" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:54.523927 kubelet[4014]: I0114 01:11:54.523919 4014 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:55.269357 kubelet[4014]: I0114 01:11:55.269326 4014 apiserver.go:52] "Watching apiserver" Jan 14 01:11:55.296930 kubelet[4014]: I0114 01:11:55.296769 4014 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:11:55.332002 kubelet[4014]: I0114 01:11:55.331787 4014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:55.332156 kubelet[4014]: I0114 01:11:55.332140 4014 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:55.409297 kubelet[4014]: I0114 01:11:55.409265 4014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:55.409407 kubelet[4014]: E0114 01:11:55.409394 4014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-4dd79cf71d\" already exists" pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:55.416353 kubelet[4014]: I0114 01:11:55.415894 4014 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:11:55.416353 kubelet[4014]: E0114 01:11:55.415933 4014 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578.0.0-p-4dd79cf71d\" already exists" pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" Jan 14 01:11:55.457746 kubelet[4014]: I0114 01:11:55.457575 4014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4578.0.0-p-4dd79cf71d" podStartSLOduration=2.457559556 podStartE2EDuration="2.457559556s" podCreationTimestamp="2026-01-14 01:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:11:55.414226105 +0000 UTC m=+1.202850772" watchObservedRunningTime="2026-01-14 01:11:55.457559556 +0000 UTC m=+1.246184219" Jan 14 01:11:55.473702 kubelet[4014]: I0114 01:11:55.473661 4014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4578.0.0-p-4dd79cf71d" podStartSLOduration=5.473646796 podStartE2EDuration="5.473646796s" podCreationTimestamp="2026-01-14 01:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:11:55.457820337 +0000 UTC m=+1.246445000" watchObservedRunningTime="2026-01-14 01:11:55.473646796 +0000 UTC m=+1.262271467" Jan 14 01:11:55.504642 kubelet[4014]: I0114 01:11:55.504602 4014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-4dd79cf71d" podStartSLOduration=1.504588344 podStartE2EDuration="1.504588344s" podCreationTimestamp="2026-01-14 01:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:11:55.475399156 +0000 UTC m=+1.264023822" watchObservedRunningTime="2026-01-14 
01:11:55.504588344 +0000 UTC m=+1.293213005" Jan 14 01:11:57.329181 kubelet[4014]: I0114 01:11:57.328958 4014 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:11:57.329549 containerd[2498]: time="2026-01-14T01:11:57.329378216Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:11:57.329744 kubelet[4014]: I0114 01:11:57.329586 4014 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:11:58.631121 systemd[1]: Created slice kubepods-besteffort-podc61bd41e_8503_41fd_b52d_93e0cc4ab2cf.slice - libcontainer container kubepods-besteffort-podc61bd41e_8503_41fd_b52d_93e0cc4ab2cf.slice. Jan 14 01:11:58.725079 kubelet[4014]: I0114 01:11:58.724949 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c61bd41e-8503-41fd-b52d-93e0cc4ab2cf-xtables-lock\") pod \"kube-proxy-5ksf4\" (UID: \"c61bd41e-8503-41fd-b52d-93e0cc4ab2cf\") " pod="kube-system/kube-proxy-5ksf4" Jan 14 01:11:58.725079 kubelet[4014]: I0114 01:11:58.725002 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c61bd41e-8503-41fd-b52d-93e0cc4ab2cf-lib-modules\") pod \"kube-proxy-5ksf4\" (UID: \"c61bd41e-8503-41fd-b52d-93e0cc4ab2cf\") " pod="kube-system/kube-proxy-5ksf4" Jan 14 01:11:58.725079 kubelet[4014]: I0114 01:11:58.725020 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c61bd41e-8503-41fd-b52d-93e0cc4ab2cf-kube-proxy\") pod \"kube-proxy-5ksf4\" (UID: \"c61bd41e-8503-41fd-b52d-93e0cc4ab2cf\") " pod="kube-system/kube-proxy-5ksf4" Jan 14 01:11:58.725079 kubelet[4014]: I0114 01:11:58.725036 4014 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srnff\" (UniqueName: \"kubernetes.io/projected/c61bd41e-8503-41fd-b52d-93e0cc4ab2cf-kube-api-access-srnff\") pod \"kube-proxy-5ksf4\" (UID: \"c61bd41e-8503-41fd-b52d-93e0cc4ab2cf\") " pod="kube-system/kube-proxy-5ksf4" Jan 14 01:11:58.831796 systemd[1]: Created slice kubepods-besteffort-pod9f16eedb_a0df_47ab_a688_ca2743c18638.slice - libcontainer container kubepods-besteffort-pod9f16eedb_a0df_47ab_a688_ca2743c18638.slice. Jan 14 01:11:58.925893 kubelet[4014]: I0114 01:11:58.925795 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f16eedb-a0df-47ab-a688-ca2743c18638-var-lib-calico\") pod \"tigera-operator-7dcd859c48-p82rs\" (UID: \"9f16eedb-a0df-47ab-a688-ca2743c18638\") " pod="tigera-operator/tigera-operator-7dcd859c48-p82rs" Jan 14 01:11:58.925893 kubelet[4014]: I0114 01:11:58.925828 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7zzb\" (UniqueName: \"kubernetes.io/projected/9f16eedb-a0df-47ab-a688-ca2743c18638-kube-api-access-p7zzb\") pod \"tigera-operator-7dcd859c48-p82rs\" (UID: \"9f16eedb-a0df-47ab-a688-ca2743c18638\") " pod="tigera-operator/tigera-operator-7dcd859c48-p82rs" Jan 14 01:11:58.938383 containerd[2498]: time="2026-01-14T01:11:58.938349599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ksf4,Uid:c61bd41e-8503-41fd-b52d-93e0cc4ab2cf,Namespace:kube-system,Attempt:0,}" Jan 14 01:11:58.990551 containerd[2498]: time="2026-01-14T01:11:58.990509938Z" level=info msg="connecting to shim 2fa527f004a6d7bda1213bbb6b0277f350df6253c7ef927e7cd933925784284e" address="unix:///run/containerd/s/a756519ae92c861f5eed1986ea1be0bfb59fbe715cda683019201e20328842fb" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:59.017135 systemd[1]: Started 
cri-containerd-2fa527f004a6d7bda1213bbb6b0277f350df6253c7ef927e7cd933925784284e.scope - libcontainer container 2fa527f004a6d7bda1213bbb6b0277f350df6253c7ef927e7cd933925784284e. Jan 14 01:11:59.028213 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:11:59.028291 kernel: audit: type=1334 audit(1768353119.024:471): prog-id=157 op=LOAD Jan 14 01:11:59.024000 audit: BPF prog-id=157 op=LOAD Jan 14 01:11:59.029642 kernel: audit: type=1334 audit(1768353119.027:472): prog-id=158 op=LOAD Jan 14 01:11:59.027000 audit: BPF prog-id=158 op=LOAD Jan 14 01:11:59.033264 kernel: audit: type=1300 audit(1768353119.027:472): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4068 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.027000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4068 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.037057 kernel: audit: type=1327 audit(1768353119.027:472): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.027000 audit: BPF prog-id=158 op=UNLOAD Jan 14 01:11:59.043892 kernel: audit: type=1334 
audit(1768353119.027:473): prog-id=158 op=UNLOAD Jan 14 01:11:59.043950 kernel: audit: type=1300 audit(1768353119.027:473): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.027000 audit[4080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.048997 kernel: audit: type=1327 audit(1768353119.027:473): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.027000 audit: BPF prog-id=159 op=LOAD Jan 14 01:11:59.057043 kernel: audit: type=1334 audit(1768353119.027:474): prog-id=159 op=LOAD Jan 14 01:11:59.057109 kernel: audit: type=1300 audit(1768353119.027:474): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4068 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.027000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4068 
pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.027000 audit: BPF prog-id=160 op=LOAD Jan 14 01:11:59.027000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4068 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.027000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:11:59.027000 audit[4080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.027000 audit: BPF prog-id=159 op=UNLOAD Jan 14 01:11:59.027000 audit[4080]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 
a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.027000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.028000 audit: BPF prog-id=161 op=LOAD Jan 14 01:11:59.028000 audit[4080]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4068 pid=4080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.068007 kernel: audit: type=1327 audit(1768353119.027:474): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266613532376630303461366437626461313231336262623662303237 Jan 14 01:11:59.073738 containerd[2498]: time="2026-01-14T01:11:59.073702243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5ksf4,Uid:c61bd41e-8503-41fd-b52d-93e0cc4ab2cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fa527f004a6d7bda1213bbb6b0277f350df6253c7ef927e7cd933925784284e\"" Jan 14 01:11:59.105744 containerd[2498]: time="2026-01-14T01:11:59.105711742Z" level=info msg="CreateContainer within sandbox 
\"2fa527f004a6d7bda1213bbb6b0277f350df6253c7ef927e7cd933925784284e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:11:59.125457 containerd[2498]: time="2026-01-14T01:11:59.124479414Z" level=info msg="Container c3118107eae57635f519bef45b5f357bca60bd15d221302f3851bb8f5222b855: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:11:59.134547 containerd[2498]: time="2026-01-14T01:11:59.134517635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p82rs,Uid:9f16eedb-a0df-47ab-a688-ca2743c18638,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:11:59.141426 containerd[2498]: time="2026-01-14T01:11:59.141337050Z" level=info msg="CreateContainer within sandbox \"2fa527f004a6d7bda1213bbb6b0277f350df6253c7ef927e7cd933925784284e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3118107eae57635f519bef45b5f357bca60bd15d221302f3851bb8f5222b855\"" Jan 14 01:11:59.142140 containerd[2498]: time="2026-01-14T01:11:59.142117581Z" level=info msg="StartContainer for \"c3118107eae57635f519bef45b5f357bca60bd15d221302f3851bb8f5222b855\"" Jan 14 01:11:59.144289 containerd[2498]: time="2026-01-14T01:11:59.144258727Z" level=info msg="connecting to shim c3118107eae57635f519bef45b5f357bca60bd15d221302f3851bb8f5222b855" address="unix:///run/containerd/s/a756519ae92c861f5eed1986ea1be0bfb59fbe715cda683019201e20328842fb" protocol=ttrpc version=3 Jan 14 01:11:59.166228 systemd[1]: Started cri-containerd-c3118107eae57635f519bef45b5f357bca60bd15d221302f3851bb8f5222b855.scope - libcontainer container c3118107eae57635f519bef45b5f357bca60bd15d221302f3851bb8f5222b855. 
Jan 14 01:11:59.179890 containerd[2498]: time="2026-01-14T01:11:59.179661494Z" level=info msg="connecting to shim 1ff310e7138800bee224bfee6e5ecb1e9552654667a3834212ae6089b6bfa0bc" address="unix:///run/containerd/s/5e69d39bddaaabbe97b74ffb28ad1f03596593b28a3f0a3e986ba975822371cf" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:11:59.200135 systemd[1]: Started cri-containerd-1ff310e7138800bee224bfee6e5ecb1e9552654667a3834212ae6089b6bfa0bc.scope - libcontainer container 1ff310e7138800bee224bfee6e5ecb1e9552654667a3834212ae6089b6bfa0bc. Jan 14 01:11:59.201000 audit: BPF prog-id=162 op=LOAD Jan 14 01:11:59.201000 audit[4107]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4068 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333313138313037656165353736333566353139626566343562356633 Jan 14 01:11:59.202000 audit: BPF prog-id=163 op=LOAD Jan 14 01:11:59.202000 audit[4107]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4068 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333313138313037656165353736333566353139626566343562356633 Jan 14 01:11:59.202000 audit: BPF prog-id=163 op=UNLOAD Jan 14 01:11:59.202000 
audit[4107]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333313138313037656165353736333566353139626566343562356633 Jan 14 01:11:59.202000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:11:59.202000 audit[4107]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4068 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333313138313037656165353736333566353139626566343562356633 Jan 14 01:11:59.202000 audit: BPF prog-id=164 op=LOAD Jan 14 01:11:59.202000 audit[4107]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4068 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333313138313037656165353736333566353139626566343562356633 Jan 14 01:11:59.211000 audit: BPF prog-id=165 
op=LOAD Jan 14 01:11:59.211000 audit: BPF prog-id=166 op=LOAD Jan 14 01:11:59.211000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166663331306537313338383030626565323234626665653665356563 Jan 14 01:11:59.212000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:11:59.212000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166663331306537313338383030626565323234626665653665356563 Jan 14 01:11:59.212000 audit: BPF prog-id=167 op=LOAD Jan 14 01:11:59.212000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.212000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166663331306537313338383030626565323234626665653665356563 Jan 14 01:11:59.212000 audit: BPF prog-id=168 op=LOAD Jan 14 01:11:59.212000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166663331306537313338383030626565323234626665653665356563 Jan 14 01:11:59.212000 audit: BPF prog-id=168 op=UNLOAD Jan 14 01:11:59.212000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166663331306537313338383030626565323234626665653665356563 Jan 14 01:11:59.212000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:11:59.212000 audit[4147]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:11:59.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166663331306537313338383030626565323234626665653665356563 Jan 14 01:11:59.212000 audit: BPF prog-id=169 op=LOAD Jan 14 01:11:59.212000 audit[4147]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166663331306537313338383030626565323234626665653665356563 Jan 14 01:11:59.235701 containerd[2498]: time="2026-01-14T01:11:59.235552760Z" level=info msg="StartContainer for \"c3118107eae57635f519bef45b5f357bca60bd15d221302f3851bb8f5222b855\" returns successfully" Jan 14 01:11:59.252803 containerd[2498]: time="2026-01-14T01:11:59.252772358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p82rs,Uid:9f16eedb-a0df-47ab-a688-ca2743c18638,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1ff310e7138800bee224bfee6e5ecb1e9552654667a3834212ae6089b6bfa0bc\"" Jan 14 01:11:59.254715 containerd[2498]: time="2026-01-14T01:11:59.254694151Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:11:59.331000 audit[4215]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=4215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.331000 audit[4215]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe45951ef0 a2=0 a3=7ffe45951edc items=0 ppid=4120 pid=4215 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.331000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:11:59.332000 audit[4216]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=4216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.332000 audit[4216]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7f2a7c60 a2=0 a3=7fff7f2a7c4c items=0 ppid=4120 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:11:59.333000 audit[4219]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_chain pid=4219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.333000 audit[4219]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe00d93b00 a2=0 a3=7ffe00d93aec items=0 ppid=4120 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:11:59.333000 audit[4218]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.333000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca663ee40 a2=0 
a3=7ffca663ee2c items=0 ppid=4120 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:11:59.335000 audit[4220]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=4220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.335000 audit[4220]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffedd388c0 a2=0 a3=7fffedd388ac items=0 ppid=4120 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:11:59.336000 audit[4221]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=4221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.336000 audit[4221]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcde3bef00 a2=0 a3=7ffcde3beeec items=0 ppid=4120 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:11:59.439000 audit[4224]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.439000 audit[4224]: SYSCALL arch=c000003e syscall=46 
success=yes exit=108 a0=3 a1=7ffec8330900 a2=0 a3=7ffec83308ec items=0 ppid=4120 pid=4224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.439000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:11:59.442000 audit[4226]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.442000 audit[4226]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe9f737b20 a2=0 a3=7ffe9f737b0c items=0 ppid=4120 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:11:59.445000 audit[4229]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.445000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd14676fb0 a2=0 a3=7ffd14676f9c items=0 ppid=4120 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.445000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:11:59.446000 audit[4230]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.446000 audit[4230]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcee512470 a2=0 a3=7ffcee51245c items=0 ppid=4120 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.446000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:11:59.448000 audit[4232]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.448000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd79c10a00 a2=0 a3=7ffd79c109ec items=0 ppid=4120 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.448000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:11:59.449000 audit[4233]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.449000 audit[4233]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=100 a0=3 a1=7ffe067453c0 a2=0 a3=7ffe067453ac items=0 ppid=4120 pid=4233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.449000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:11:59.452000 audit[4235]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.452000 audit[4235]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe93ebcf90 a2=0 a3=7ffe93ebcf7c items=0 ppid=4120 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:11:59.455000 audit[4238]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.455000 audit[4238]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdf6b807c0 a2=0 a3=7ffdf6b807ac items=0 ppid=4120 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.455000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:11:59.456000 audit[4239]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.456000 audit[4239]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9b4f4050 a2=0 a3=7ffe9b4f403c items=0 ppid=4120 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:11:59.458000 audit[4241]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.458000 audit[4241]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc0b0565c0 a2=0 a3=7ffc0b0565ac items=0 ppid=4120 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:11:59.460000 audit[4242]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.460000 audit[4242]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7fff8752b000 a2=0 a3=7fff8752afec items=0 ppid=4120 pid=4242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:11:59.462000 audit[4244]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.462000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc43788970 a2=0 a3=7ffc4378895c items=0 ppid=4120 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:11:59.466000 audit[4247]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.466000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4c766e20 a2=0 a3=7ffc4c766e0c items=0 ppid=4120 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.466000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:11:59.470000 audit[4250]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.470000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd7a016c0 a2=0 a3=7ffdd7a016ac items=0 ppid=4120 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:11:59.471000 audit[4251]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.471000 audit[4251]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcea66eb10 a2=0 a3=7ffcea66eafc items=0 ppid=4120 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:11:59.474000 audit[4253]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.474000 audit[4253]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffe7fa4d6b0 a2=0 a3=7ffe7fa4d69c items=0 ppid=4120 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.474000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:59.477000 audit[4256]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.477000 audit[4256]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfb420880 a2=0 a3=7ffdfb42086c items=0 ppid=4120 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.477000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:59.478000 audit[4257]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.478000 audit[4257]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5d075100 a2=0 a3=7ffd5d0750ec items=0 ppid=4120 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.478000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 
01:11:59.481000 audit[4259]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:11:59.481000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcdb00ef30 a2=0 a3=7ffcdb00ef1c items=0 ppid=4120 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:11:59.585000 audit[4265]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:59.585000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb1891770 a2=0 a3=7ffeb189175c items=0 ppid=4120 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:59.591000 audit[4265]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:11:59.591000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffeb1891770 a2=0 a3=7ffeb189175c items=0 ppid=4120 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:59.593000 audit[4270]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.593000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff918c3560 a2=0 a3=7fff918c354c items=0 ppid=4120 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:11:59.595000 audit[4272]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.595000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff001578a0 a2=0 a3=7fff0015788c items=0 ppid=4120 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.595000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:11:59.599000 audit[4275]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.599000 audit[4275]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7fff96ad3de0 a2=0 a3=7fff96ad3dcc items=0 ppid=4120 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.599000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:11:59.600000 audit[4276]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.600000 audit[4276]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8c59cb70 a2=0 a3=7ffd8c59cb5c items=0 ppid=4120 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:11:59.602000 audit[4278]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.602000 audit[4278]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff72027eb0 a2=0 a3=7fff72027e9c items=0 ppid=4120 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.602000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:11:59.603000 audit[4279]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.603000 audit[4279]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbb7cd2f0 a2=0 a3=7ffcbb7cd2dc items=0 ppid=4120 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:11:59.606000 audit[4281]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.606000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeb4031de0 a2=0 a3=7ffeb4031dcc items=0 ppid=4120 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:11:59.609000 audit[4284]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.609000 audit[4284]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffc023019a0 a2=0 a3=7ffc0230198c items=0 ppid=4120 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:11:59.610000 audit[4285]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.610000 audit[4285]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0d062d10 a2=0 a3=7fff0d062cfc items=0 ppid=4120 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:11:59.612000 audit[4287]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.612000 audit[4287]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea5624130 a2=0 a3=7ffea562411c items=0 ppid=4120 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.612000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:11:59.613000 audit[4288]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.613000 audit[4288]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1b1c8c60 a2=0 a3=7ffd1b1c8c4c items=0 ppid=4120 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:11:59.616000 audit[4290]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.616000 audit[4290]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdda2d8d40 a2=0 a3=7ffdda2d8d2c items=0 ppid=4120 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:11:59.619000 audit[4293]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.619000 audit[4293]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffebc09d0b0 a2=0 a3=7ffebc09d09c items=0 ppid=4120 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.619000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:11:59.622000 audit[4296]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.622000 audit[4296]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeef6a5450 a2=0 a3=7ffeef6a543c items=0 ppid=4120 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.622000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:11:59.623000 audit[4297]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.623000 audit[4297]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc1394800 a2=0 a3=7fffc13947ec items=0 ppid=4120 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.623000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:11:59.625000 audit[4299]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4299 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.625000 audit[4299]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc516b8af0 a2=0 a3=7ffc516b8adc items=0 ppid=4120 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.625000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:59.628000 audit[4302]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.628000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc9065f270 a2=0 a3=7ffc9065f25c items=0 ppid=4120 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.628000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:11:59.630000 audit[4303]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.630000 audit[4303]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf095ef40 a2=0 a3=7ffdf095ef2c items=0 ppid=4120 
pid=4303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.630000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:11:59.632000 audit[4305]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.632000 audit[4305]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffff93c930 a2=0 a3=7fffff93c91c items=0 ppid=4120 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.632000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:11:59.633000 audit[4306]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.633000 audit[4306]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff03be5b60 a2=0 a3=7fff03be5b4c items=0 ppid=4120 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:11:59.635000 audit[4308]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4308 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 14 01:11:59.635000 audit[4308]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc24617a90 a2=0 a3=7ffc24617a7c items=0 ppid=4120 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.635000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:11:59.639000 audit[4311]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:11:59.639000 audit[4311]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc5b46eb10 a2=0 a3=7ffc5b46eafc items=0 ppid=4120 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:11:59.643000 audit[4313]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:11:59.643000 audit[4313]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffc8edc330 a2=0 a3=7fffc8edc31c items=0 ppid=4120 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.643000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:59.643000 audit[4313]: NETFILTER_CFG table=nat:107 
family=10 entries=7 op=nft_register_chain pid=4313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:11:59.643000 audit[4313]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffc8edc330 a2=0 a3=7fffc8edc31c items=0 ppid=4120 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:11:59.643000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:11:59.666743 kubelet[4014]: I0114 01:11:59.666522 4014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5ksf4" podStartSLOduration=1.6665050799999999 podStartE2EDuration="1.66650508s" podCreationTimestamp="2026-01-14 01:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:11:59.361164871 +0000 UTC m=+5.149789537" watchObservedRunningTime="2026-01-14 01:11:59.66650508 +0000 UTC m=+5.455129753" Jan 14 01:11:59.877189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428494506.mount: Deactivated successfully. Jan 14 01:12:00.993954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876712349.mount: Deactivated successfully. 
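The `proctitle=` fields in the audit records above are hex dumps of the process's argv, with NUL bytes separating the arguments. A minimal Python sketch (the helper name `decode_proctitle` is ours, not part of any audit tooling) recovers the readable iptables command line from one of the records above:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex dump into a readable command line.

    The kernel records argv with NUL separators; replace them with spaces.
    """
    raw = bytes.fromhex(hex_str)
    return raw.decode("utf-8", errors="replace").replace("\x00", " ")

# The first PROCTITLE record in this log (creation of the
# KUBE-EXTERNAL-SERVICES chain in the filter table):
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030002D4E00"
    "4B5542452D45585445524E414C2D5345525649434553002D740066696C746572"
))
# → iptables -w 5 -W 100000 -N KUBE-EXTERNAL-SERVICES -t filter
```

The same helper applies to every PROCTITLE record here; the ip6tables records decode identically except for the `ip6tables` prefix.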
Jan 14 01:12:01.556838 containerd[2498]: time="2026-01-14T01:12:01.556795124Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:01.559675 containerd[2498]: time="2026-01-14T01:12:01.559587223Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 01:12:01.563298 containerd[2498]: time="2026-01-14T01:12:01.563248688Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:01.568318 containerd[2498]: time="2026-01-14T01:12:01.568256674Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:01.568748 containerd[2498]: time="2026-01-14T01:12:01.568643420Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.313918934s" Jan 14 01:12:01.568748 containerd[2498]: time="2026-01-14T01:12:01.568674682Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:12:01.575248 containerd[2498]: time="2026-01-14T01:12:01.575224278Z" level=info msg="CreateContainer within sandbox \"1ff310e7138800bee224bfee6e5ecb1e9552654667a3834212ae6089b6bfa0bc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:12:01.594997 containerd[2498]: time="2026-01-14T01:12:01.593411467Z" level=info msg="Container 
da0d169fd66f8def470bd7f6a994d3fa47000960d829134f0f4ae2acc9062997: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:01.606855 containerd[2498]: time="2026-01-14T01:12:01.606828919Z" level=info msg="CreateContainer within sandbox \"1ff310e7138800bee224bfee6e5ecb1e9552654667a3834212ae6089b6bfa0bc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"da0d169fd66f8def470bd7f6a994d3fa47000960d829134f0f4ae2acc9062997\"" Jan 14 01:12:01.607421 containerd[2498]: time="2026-01-14T01:12:01.607350541Z" level=info msg="StartContainer for \"da0d169fd66f8def470bd7f6a994d3fa47000960d829134f0f4ae2acc9062997\"" Jan 14 01:12:01.608528 containerd[2498]: time="2026-01-14T01:12:01.608456439Z" level=info msg="connecting to shim da0d169fd66f8def470bd7f6a994d3fa47000960d829134f0f4ae2acc9062997" address="unix:///run/containerd/s/5e69d39bddaaabbe97b74ffb28ad1f03596593b28a3f0a3e986ba975822371cf" protocol=ttrpc version=3 Jan 14 01:12:01.630194 systemd[1]: Started cri-containerd-da0d169fd66f8def470bd7f6a994d3fa47000960d829134f0f4ae2acc9062997.scope - libcontainer container da0d169fd66f8def470bd7f6a994d3fa47000960d829134f0f4ae2acc9062997. 
Jan 14 01:12:01.639000 audit: BPF prog-id=170 op=LOAD Jan 14 01:12:01.639000 audit: BPF prog-id=171 op=LOAD Jan 14 01:12:01.639000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4136 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:01.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461306431363966643636663864656634373062643766366139393464 Jan 14 01:12:01.640000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:12:01.640000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4136 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:01.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461306431363966643636663864656634373062643766366139393464 Jan 14 01:12:01.640000 audit: BPF prog-id=172 op=LOAD Jan 14 01:12:01.640000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4136 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:01.640000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461306431363966643636663864656634373062643766366139393464 Jan 14 01:12:01.640000 audit: BPF prog-id=173 op=LOAD Jan 14 01:12:01.640000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4136 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:01.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461306431363966643636663864656634373062643766366139393464 Jan 14 01:12:01.640000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:12:01.640000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4136 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:01.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461306431363966643636663864656634373062643766366139393464 Jan 14 01:12:01.640000 audit: BPF prog-id=172 op=UNLOAD Jan 14 01:12:01.640000 audit[4322]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4136 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:01.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461306431363966643636663864656634373062643766366139393464 Jan 14 01:12:01.640000 audit: BPF prog-id=174 op=LOAD Jan 14 01:12:01.640000 audit[4322]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4136 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:01.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461306431363966643636663864656634373062643766366139393464 Jan 14 01:12:01.661334 containerd[2498]: time="2026-01-14T01:12:01.661303232Z" level=info msg="StartContainer for \"da0d169fd66f8def470bd7f6a994d3fa47000960d829134f0f4ae2acc9062997\" returns successfully" Jan 14 01:12:02.360452 kubelet[4014]: I0114 01:12:02.360399 4014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-p82rs" podStartSLOduration=2.044892185 podStartE2EDuration="4.360384471s" podCreationTimestamp="2026-01-14 01:11:58 +0000 UTC" firstStartedPulling="2026-01-14 01:11:59.253949802 +0000 UTC m=+5.042574466" lastFinishedPulling="2026-01-14 01:12:01.569442094 +0000 UTC m=+7.358066752" observedRunningTime="2026-01-14 01:12:02.360232537 +0000 UTC m=+8.148857203" watchObservedRunningTime="2026-01-14 01:12:02.360384471 +0000 UTC m=+8.149009142" Jan 14 01:12:08.233253 sudo[2971]: pam_unix(sudo:session): session closed for user root Jan 14 01:12:08.240578 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 01:12:08.240671 
kernel: audit: type=1106 audit(1768353128.233:551): pid=2971 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:12:08.233000 audit[2971]: USER_END pid=2971 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:12:08.233000 audit[2971]: CRED_DISP pid=2971 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:12:08.248981 kernel: audit: type=1104 audit(1768353128.233:552): pid=2971 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:12:08.335063 sshd[2970]: Connection closed by 10.200.16.10 port 50214 Jan 14 01:12:08.336341 sshd-session[2965]: pam_unix(sshd:session): session closed for user core Jan 14 01:12:08.337000 audit[2965]: USER_END pid=2965 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:12:08.347445 kernel: audit: type=1106 audit(1768353128.337:553): pid=2965 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:12:08.347567 systemd[1]: sshd@6-10.200.4.37:22-10.200.16.10:50214.service: Deactivated successfully. Jan 14 01:12:08.337000 audit[2965]: CRED_DISP pid=2965 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:12:08.353987 kernel: audit: type=1104 audit(1768353128.337:554): pid=2965 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:12:08.354309 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:12:08.354840 systemd[1]: session-10.scope: Consumed 3.871s CPU time, 230.8M memory peak. 
Jan 14 01:12:08.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.37:22-10.200.16.10:50214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:08.361281 kernel: audit: type=1131 audit(1768353128.346:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.37:22-10.200.16.10:50214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:12:08.361545 systemd-logind[2463]: Session 10 logged out. Waiting for processes to exit. Jan 14 01:12:08.363265 systemd-logind[2463]: Removed session 10. Jan 14 01:12:09.793000 audit[4403]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:09.800035 kernel: audit: type=1325 audit(1768353129.793:556): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:09.793000 audit[4403]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcc1ce5fb0 a2=0 a3=7ffcc1ce5f9c items=0 ppid=4120 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:09.815174 kernel: audit: type=1300 audit(1768353129.793:556): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcc1ce5fb0 a2=0 a3=7ffcc1ce5f9c items=0 ppid=4120 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:09.815272 kernel: audit: type=1327 audit(1768353129.793:556): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:09.793000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:09.821215 kernel: audit: type=1325 audit(1768353129.809:557): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:09.809000 audit[4403]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:09.809000 audit[4403]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc1ce5fb0 a2=0 a3=0 items=0 ppid=4120 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:09.830012 kernel: audit: type=1300 audit(1768353129.809:557): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc1ce5fb0 a2=0 a3=0 items=0 ppid=4120 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:09.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:09.838000 audit[4405]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:09.838000 audit[4405]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffd72d0e90 a2=0 a3=7fffd72d0e7c items=0 ppid=4120 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:09.838000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:09.843000 audit[4405]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:09.843000 audit[4405]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd72d0e90 a2=0 a3=0 items=0 ppid=4120 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:09.843000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:12.214000 audit[4407]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:12.214000 audit[4407]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffec5e653d0 a2=0 a3=7ffec5e653bc items=0 ppid=4120 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:12.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:12.216000 audit[4407]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:12.216000 audit[4407]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffec5e653d0 a2=0 a3=0 items=0 ppid=4120 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:12.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:12.302000 audit[4409]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:12.302000 audit[4409]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffb162bad0 a2=0 a3=7fffb162babc items=0 ppid=4120 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:12.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:12.310000 audit[4409]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:12.310000 audit[4409]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb162bad0 a2=0 a3=0 items=0 ppid=4120 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:12.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:13.444138 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 14 01:12:13.444270 kernel: audit: type=1325 audit(1768353133.435:564): table=filter:116 family=2 entries=19 op=nft_register_rule pid=4411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:13.435000 audit[4411]: NETFILTER_CFG table=filter:116 
family=2 entries=19 op=nft_register_rule pid=4411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:13.435000 audit[4411]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd8f8c8030 a2=0 a3=7ffd8f8c801c items=0 ppid=4120 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:13.454551 kernel: audit: type=1300 audit(1768353133.435:564): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd8f8c8030 a2=0 a3=7ffd8f8c801c items=0 ppid=4120 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:13.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:13.465272 kernel: audit: type=1327 audit(1768353133.435:564): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:13.465352 kernel: audit: type=1325 audit(1768353133.453:565): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:13.453000 audit[4411]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4411 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:13.475245 kernel: audit: type=1300 audit(1768353133.453:565): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8f8c8030 a2=0 a3=0 items=0 ppid=4120 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:13.453000 audit[4411]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8f8c8030 a2=0 a3=0 items=0 ppid=4120 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:13.480770 kernel: audit: type=1327 audit(1768353133.453:565): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:13.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:14.275071 kernel: audit: type=1325 audit(1768353134.269:566): table=filter:118 family=2 entries=21 op=nft_register_rule pid=4413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:14.275181 kernel: audit: type=1300 audit(1768353134.269:566): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff6735e3b0 a2=0 a3=7fff6735e39c items=0 ppid=4120 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.269000 audit[4413]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:14.269000 audit[4413]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff6735e3b0 a2=0 a3=7fff6735e39c items=0 ppid=4120 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:14.281764 kernel: audit: type=1327 
audit(1768353134.269:566): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:14.282000 audit[4413]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:14.282000 audit[4413]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6735e3b0 a2=0 a3=0 items=0 ppid=4120 pid=4413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:14.287000 kernel: audit: type=1325 audit(1768353134.282:567): table=nat:119 family=2 entries=12 op=nft_register_rule pid=4413 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:14.530797 systemd[1]: Created slice kubepods-besteffort-pod6735e842_6df1_42a0_91f5_f32f1dc44c19.slice - libcontainer container kubepods-besteffort-pod6735e842_6df1_42a0_91f5_f32f1dc44c19.slice. 
Jan 14 01:12:14.618931 kubelet[4014]: I0114 01:12:14.618807 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6735e842-6df1-42a0-91f5-f32f1dc44c19-tigera-ca-bundle\") pod \"calico-typha-7847884cd9-kk6nq\" (UID: \"6735e842-6df1-42a0-91f5-f32f1dc44c19\") " pod="calico-system/calico-typha-7847884cd9-kk6nq" Jan 14 01:12:14.618931 kubelet[4014]: I0114 01:12:14.618850 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6735e842-6df1-42a0-91f5-f32f1dc44c19-typha-certs\") pod \"calico-typha-7847884cd9-kk6nq\" (UID: \"6735e842-6df1-42a0-91f5-f32f1dc44c19\") " pod="calico-system/calico-typha-7847884cd9-kk6nq" Jan 14 01:12:14.618931 kubelet[4014]: I0114 01:12:14.618875 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r8q2\" (UniqueName: \"kubernetes.io/projected/6735e842-6df1-42a0-91f5-f32f1dc44c19-kube-api-access-9r8q2\") pod \"calico-typha-7847884cd9-kk6nq\" (UID: \"6735e842-6df1-42a0-91f5-f32f1dc44c19\") " pod="calico-system/calico-typha-7847884cd9-kk6nq" Jan 14 01:12:14.834873 containerd[2498]: time="2026-01-14T01:12:14.834782779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7847884cd9-kk6nq,Uid:6735e842-6df1-42a0-91f5-f32f1dc44c19,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:14.872763 systemd[1]: Created slice kubepods-besteffort-pod5a7e1a06_2020_4366_a0fb_a0407f838742.slice - libcontainer container kubepods-besteffort-pod5a7e1a06_2020_4366_a0fb_a0407f838742.slice. 
Jan 14 01:12:14.906094 containerd[2498]: time="2026-01-14T01:12:14.905736708Z" level=info msg="connecting to shim 660b14c223c34bbc74eb4117c8e92ad32fabab8e378b0621ceb9ca5edfcdfcc0" address="unix:///run/containerd/s/c83c84eebdb38d325184f7da417f0f06b58a353eddcfa737a30e5fde7388e147" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:14.921460 kubelet[4014]: I0114 01:12:14.921151 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5a7e1a06-2020-4366-a0fb-a0407f838742-cni-bin-dir\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921460 kubelet[4014]: I0114 01:12:14.921188 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5a7e1a06-2020-4366-a0fb-a0407f838742-var-run-calico\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921460 kubelet[4014]: I0114 01:12:14.921209 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a7e1a06-2020-4366-a0fb-a0407f838742-xtables-lock\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921460 kubelet[4014]: I0114 01:12:14.921233 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5a7e1a06-2020-4366-a0fb-a0407f838742-flexvol-driver-host\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921460 kubelet[4014]: I0114 01:12:14.921250 4014 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5a7e1a06-2020-4366-a0fb-a0407f838742-policysync\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921652 kubelet[4014]: I0114 01:12:14.921270 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5a7e1a06-2020-4366-a0fb-a0407f838742-cni-log-dir\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921652 kubelet[4014]: I0114 01:12:14.921292 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5a7e1a06-2020-4366-a0fb-a0407f838742-cni-net-dir\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921652 kubelet[4014]: I0114 01:12:14.921322 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5a7e1a06-2020-4366-a0fb-a0407f838742-node-certs\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921652 kubelet[4014]: I0114 01:12:14.921343 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a7e1a06-2020-4366-a0fb-a0407f838742-lib-modules\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921652 kubelet[4014]: I0114 01:12:14.921362 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n4dj\" 
(UniqueName: \"kubernetes.io/projected/5a7e1a06-2020-4366-a0fb-a0407f838742-kube-api-access-6n4dj\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921780 kubelet[4014]: I0114 01:12:14.921384 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a7e1a06-2020-4366-a0fb-a0407f838742-var-lib-calico\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.921780 kubelet[4014]: I0114 01:12:14.921405 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a7e1a06-2020-4366-a0fb-a0407f838742-tigera-ca-bundle\") pod \"calico-node-t7zt4\" (UID: \"5a7e1a06-2020-4366-a0fb-a0407f838742\") " pod="calico-system/calico-node-t7zt4" Jan 14 01:12:14.936143 systemd[1]: Started cri-containerd-660b14c223c34bbc74eb4117c8e92ad32fabab8e378b0621ceb9ca5edfcdfcc0.scope - libcontainer container 660b14c223c34bbc74eb4117c8e92ad32fabab8e378b0621ceb9ca5edfcdfcc0. 
Jan 14 01:12:14.942000 audit: BPF prog-id=175 op=LOAD Jan 14 01:12:14.942000 audit: BPF prog-id=176 op=LOAD Jan 14 01:12:14.942000 audit[4436]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4425 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636306231346332323363333462626337346562343131376338653932 Jan 14 01:12:14.942000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:12:14.942000 audit[4436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636306231346332323363333462626337346562343131376338653932 Jan 14 01:12:14.942000 audit: BPF prog-id=177 op=LOAD Jan 14 01:12:14.942000 audit[4436]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4425 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.942000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636306231346332323363333462626337346562343131376338653932 Jan 14 01:12:14.942000 audit: BPF prog-id=178 op=LOAD Jan 14 01:12:14.942000 audit[4436]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4425 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636306231346332323363333462626337346562343131376338653932 Jan 14 01:12:14.942000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:12:14.942000 audit[4436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636306231346332323363333462626337346562343131376338653932 Jan 14 01:12:14.942000 audit: BPF prog-id=177 op=UNLOAD Jan 14 01:12:14.942000 audit[4436]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:14.942000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636306231346332323363333462626337346562343131376338653932 Jan 14 01:12:14.943000 audit: BPF prog-id=179 op=LOAD Jan 14 01:12:14.943000 audit[4436]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4425 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:14.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636306231346332323363333462626337346562343131376338653932 Jan 14 01:12:14.982525 containerd[2498]: time="2026-01-14T01:12:14.982494918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7847884cd9-kk6nq,Uid:6735e842-6df1-42a0-91f5-f32f1dc44c19,Namespace:calico-system,Attempt:0,} returns sandbox id \"660b14c223c34bbc74eb4117c8e92ad32fabab8e378b0621ceb9ca5edfcdfcc0\"" Jan 14 01:12:14.983832 containerd[2498]: time="2026-01-14T01:12:14.983808582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:12:15.022928 kubelet[4014]: E0114 01:12:15.022781 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.022928 kubelet[4014]: W0114 01:12:15.022801 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.022928 kubelet[4014]: E0114 01:12:15.022819 4014 plugins.go:703] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.023209 kubelet[4014]: E0114 01:12:15.023192 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.023209 kubelet[4014]: W0114 01:12:15.023205 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.023277 kubelet[4014]: E0114 01:12:15.023220 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.023383 kubelet[4014]: E0114 01:12:15.023375 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.023413 kubelet[4014]: W0114 01:12:15.023392 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.023413 kubelet[4014]: E0114 01:12:15.023402 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.023572 kubelet[4014]: E0114 01:12:15.023561 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.023610 kubelet[4014]: W0114 01:12:15.023580 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.023610 kubelet[4014]: E0114 01:12:15.023589 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.023755 kubelet[4014]: E0114 01:12:15.023743 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.023755 kubelet[4014]: W0114 01:12:15.023751 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.023813 kubelet[4014]: E0114 01:12:15.023759 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.023906 kubelet[4014]: E0114 01:12:15.023890 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.023906 kubelet[4014]: W0114 01:12:15.023901 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.023969 kubelet[4014]: E0114 01:12:15.023908 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.024055 kubelet[4014]: E0114 01:12:15.024046 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.024055 kubelet[4014]: W0114 01:12:15.024054 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.024130 kubelet[4014]: E0114 01:12:15.024062 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.024204 kubelet[4014]: E0114 01:12:15.024192 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.024204 kubelet[4014]: W0114 01:12:15.024200 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.024284 kubelet[4014]: E0114 01:12:15.024207 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.024343 kubelet[4014]: E0114 01:12:15.024334 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.024372 kubelet[4014]: W0114 01:12:15.024342 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.024372 kubelet[4014]: E0114 01:12:15.024350 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.024510 kubelet[4014]: E0114 01:12:15.024491 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.024510 kubelet[4014]: W0114 01:12:15.024505 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.024575 kubelet[4014]: E0114 01:12:15.024511 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.024691 kubelet[4014]: E0114 01:12:15.024668 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.024691 kubelet[4014]: W0114 01:12:15.024680 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.024691 kubelet[4014]: E0114 01:12:15.024688 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.025063 kubelet[4014]: E0114 01:12:15.024802 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.025063 kubelet[4014]: W0114 01:12:15.024806 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.025063 kubelet[4014]: E0114 01:12:15.024813 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.025063 kubelet[4014]: E0114 01:12:15.025088 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.025441 kubelet[4014]: W0114 01:12:15.025096 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.025441 kubelet[4014]: E0114 01:12:15.025105 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.025441 kubelet[4014]: E0114 01:12:15.025318 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.025441 kubelet[4014]: W0114 01:12:15.025324 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.025441 kubelet[4014]: E0114 01:12:15.025331 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.025569 kubelet[4014]: E0114 01:12:15.025479 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.025569 kubelet[4014]: W0114 01:12:15.025486 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.025569 kubelet[4014]: E0114 01:12:15.025494 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.025733 kubelet[4014]: E0114 01:12:15.025723 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.025733 kubelet[4014]: W0114 01:12:15.025733 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.026129 kubelet[4014]: E0114 01:12:15.025743 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.026129 kubelet[4014]: E0114 01:12:15.025884 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.026129 kubelet[4014]: W0114 01:12:15.025890 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.026129 kubelet[4014]: E0114 01:12:15.025898 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.026129 kubelet[4014]: E0114 01:12:15.026048 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.026129 kubelet[4014]: W0114 01:12:15.026086 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.026129 kubelet[4014]: E0114 01:12:15.026096 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.026470 kubelet[4014]: E0114 01:12:15.026254 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.026470 kubelet[4014]: W0114 01:12:15.026259 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.026470 kubelet[4014]: E0114 01:12:15.026266 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.026470 kubelet[4014]: E0114 01:12:15.026383 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.026470 kubelet[4014]: W0114 01:12:15.026402 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.026470 kubelet[4014]: E0114 01:12:15.026408 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.026729 kubelet[4014]: E0114 01:12:15.026572 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.026729 kubelet[4014]: W0114 01:12:15.026578 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.026729 kubelet[4014]: E0114 01:12:15.026591 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.026903 kubelet[4014]: E0114 01:12:15.026885 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.027139 kubelet[4014]: W0114 01:12:15.027054 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.027139 kubelet[4014]: E0114 01:12:15.027082 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.027290 kubelet[4014]: E0114 01:12:15.027282 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.027367 kubelet[4014]: W0114 01:12:15.027359 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.027424 kubelet[4014]: E0114 01:12:15.027404 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.027645 kubelet[4014]: E0114 01:12:15.027618 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.027645 kubelet[4014]: W0114 01:12:15.027626 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.027645 kubelet[4014]: E0114 01:12:15.027635 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.027951 kubelet[4014]: E0114 01:12:15.027924 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.027951 kubelet[4014]: W0114 01:12:15.027932 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.027951 kubelet[4014]: E0114 01:12:15.027941 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.028299 kubelet[4014]: E0114 01:12:15.028256 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.028299 kubelet[4014]: W0114 01:12:15.028266 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.028299 kubelet[4014]: E0114 01:12:15.028276 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.032448 kubelet[4014]: E0114 01:12:15.032406 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.032448 kubelet[4014]: W0114 01:12:15.032419 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.032572 kubelet[4014]: E0114 01:12:15.032431 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.123221 kubelet[4014]: E0114 01:12:15.123195 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.123359 kubelet[4014]: W0114 01:12:15.123311 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.123359 kubelet[4014]: E0114 01:12:15.123331 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.175546 containerd[2498]: time="2026-01-14T01:12:15.175516212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t7zt4,Uid:5a7e1a06-2020-4366-a0fb-a0407f838742,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:15.213823 kubelet[4014]: E0114 01:12:15.213786 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:12:15.222347 containerd[2498]: time="2026-01-14T01:12:15.222275601Z" level=info msg="connecting to shim 1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af" address="unix:///run/containerd/s/76f98005099cb3c0d9534537ea906be044b4d5f8032d5891f47e90ed3e8a4380" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:15.240186 systemd[1]: Started cri-containerd-1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af.scope - libcontainer container 1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af. 
Jan 14 01:12:15.246000 audit: BPF prog-id=180 op=LOAD Jan 14 01:12:15.247000 audit: BPF prog-id=181 op=LOAD Jan 14 01:12:15.247000 audit[4517]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4506 pid=4517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:15.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163303064356130653962383033326666386164383931656433386164 Jan 14 01:12:15.247000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:12:15.247000 audit[4517]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:15.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163303064356130653962383033326666386164383931656433386164 Jan 14 01:12:15.247000 audit: BPF prog-id=182 op=LOAD Jan 14 01:12:15.247000 audit[4517]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4506 pid=4517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:15.247000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163303064356130653962383033326666386164383931656433386164 Jan 14 01:12:15.247000 audit: BPF prog-id=183 op=LOAD Jan 14 01:12:15.247000 audit[4517]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4506 pid=4517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:15.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163303064356130653962383033326666386164383931656433386164 Jan 14 01:12:15.247000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:12:15.247000 audit[4517]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:15.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163303064356130653962383033326666386164383931656433386164 Jan 14 01:12:15.247000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:12:15.247000 audit[4517]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:15.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163303064356130653962383033326666386164383931656433386164 Jan 14 01:12:15.247000 audit: BPF prog-id=184 op=LOAD Jan 14 01:12:15.247000 audit[4517]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4506 pid=4517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:15.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163303064356130653962383033326666386164383931656433386164 Jan 14 01:12:15.263961 containerd[2498]: time="2026-01-14T01:12:15.263898031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t7zt4,Uid:5a7e1a06-2020-4366-a0fb-a0407f838742,Namespace:calico-system,Attempt:0,} returns sandbox id \"1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af\"" Jan 14 01:12:15.290000 audit[4544]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:15.290000 audit[4544]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffda3f2a250 a2=0 a3=7ffda3f2a23c items=0 ppid=4120 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:15.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 14 01:12:15.294000 audit[4544]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=4544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:15.294000 audit[4544]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda3f2a250 a2=0 a3=0 items=0 ppid=4120 pid=4544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:15.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:15.298642 kubelet[4014]: E0114 01:12:15.298620 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.298642 kubelet[4014]: W0114 01:12:15.298637 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.298783 kubelet[4014]: E0114 01:12:15.298654 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.298783 kubelet[4014]: E0114 01:12:15.298773 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.298783 kubelet[4014]: W0114 01:12:15.298779 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.298895 kubelet[4014]: E0114 01:12:15.298787 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.298965 kubelet[4014]: E0114 01:12:15.298945 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.298965 kubelet[4014]: W0114 01:12:15.298954 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.298965 kubelet[4014]: E0114 01:12:15.298963 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 14 01:12:15.328000 kubelet[4014]: I0114 01:12:15.327869 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a24a17a9-73d6-4ce8-b8ef-5be32d60ba56-varrun\") pod \"csi-node-driver-96vlw\" (UID: \"a24a17a9-73d6-4ce8-b8ef-5be32d60ba56\") " pod="calico-system/csi-node-driver-96vlw" 
Jan 14 01:12:15.328096 kubelet[4014]: I0114 01:12:15.328052 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a24a17a9-73d6-4ce8-b8ef-5be32d60ba56-socket-dir\") pod \"csi-node-driver-96vlw\" (UID: \"a24a17a9-73d6-4ce8-b8ef-5be32d60ba56\") " pod="calico-system/csi-node-driver-96vlw" 
Jan 14 01:12:15.328258 kubelet[4014]: I0114 01:12:15.328191 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a24a17a9-73d6-4ce8-b8ef-5be32d60ba56-registration-dir\") pod \"csi-node-driver-96vlw\" (UID: \"a24a17a9-73d6-4ce8-b8ef-5be32d60ba56\") " pod="calico-system/csi-node-driver-96vlw" 
Jan 14 01:12:15.329048 kubelet[4014]: I0114 01:12:15.329005 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdjpj\" (UniqueName: \"kubernetes.io/projected/a24a17a9-73d6-4ce8-b8ef-5be32d60ba56-kube-api-access-bdjpj\") pod \"csi-node-driver-96vlw\" (UID: \"a24a17a9-73d6-4ce8-b8ef-5be32d60ba56\") " pod="calico-system/csi-node-driver-96vlw" 
Jan 14 01:12:15.329504 kubelet[4014]: I0114 01:12:15.329455 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a24a17a9-73d6-4ce8-b8ef-5be32d60ba56-kubelet-dir\") pod \"csi-node-driver-96vlw\" (UID: \"a24a17a9-73d6-4ce8-b8ef-5be32d60ba56\") " pod="calico-system/csi-node-driver-96vlw" 
Error: unexpected end of JSON input" Jan 14 01:12:15.435632 kubelet[4014]: E0114 01:12:15.434291 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.435632 kubelet[4014]: W0114 01:12:15.434303 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.435632 kubelet[4014]: E0114 01:12:15.434316 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.435632 kubelet[4014]: E0114 01:12:15.434470 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.435632 kubelet[4014]: W0114 01:12:15.434476 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.435632 kubelet[4014]: E0114 01:12:15.434484 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.435632 kubelet[4014]: E0114 01:12:15.434660 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.435632 kubelet[4014]: W0114 01:12:15.434666 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.435632 kubelet[4014]: E0114 01:12:15.434674 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.435632 kubelet[4014]: E0114 01:12:15.434835 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.435858 kubelet[4014]: W0114 01:12:15.434840 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.435858 kubelet[4014]: E0114 01:12:15.434848 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:15.435858 kubelet[4014]: E0114 01:12:15.435021 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.435858 kubelet[4014]: W0114 01:12:15.435027 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.435858 kubelet[4014]: E0114 01:12:15.435035 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:15.488585 kubelet[4014]: E0114 01:12:15.488419 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:15.488585 kubelet[4014]: W0114 01:12:15.488437 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:15.488585 kubelet[4014]: E0114 01:12:15.488452 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:16.320278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777801869.mount: Deactivated successfully. 
Jan 14 01:12:17.317399 kubelet[4014]: E0114 01:12:17.317339 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:12:17.616729 containerd[2498]: time="2026-01-14T01:12:17.616682315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:17.621423 containerd[2498]: time="2026-01-14T01:12:17.621314065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33736634" Jan 14 01:12:17.625018 containerd[2498]: time="2026-01-14T01:12:17.624946565Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:17.631382 containerd[2498]: time="2026-01-14T01:12:17.630985361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:17.631382 containerd[2498]: time="2026-01-14T01:12:17.631290889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.647455006s" Jan 14 01:12:17.631382 containerd[2498]: time="2026-01-14T01:12:17.631314599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:12:17.632662 containerd[2498]: time="2026-01-14T01:12:17.632642047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:12:17.648158 containerd[2498]: time="2026-01-14T01:12:17.648132618Z" level=info msg="CreateContainer within sandbox \"660b14c223c34bbc74eb4117c8e92ad32fabab8e378b0621ceb9ca5edfcdfcc0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:12:17.671153 containerd[2498]: time="2026-01-14T01:12:17.670207640Z" level=info msg="Container 5cd2a7e26c3ac639fb72dae38eec852272148b3ba564810e5233f571162c670b: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:17.676223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549264821.mount: Deactivated successfully. Jan 14 01:12:17.695901 containerd[2498]: time="2026-01-14T01:12:17.695874285Z" level=info msg="CreateContainer within sandbox \"660b14c223c34bbc74eb4117c8e92ad32fabab8e378b0621ceb9ca5edfcdfcc0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5cd2a7e26c3ac639fb72dae38eec852272148b3ba564810e5233f571162c670b\"" Jan 14 01:12:17.696457 containerd[2498]: time="2026-01-14T01:12:17.696436647Z" level=info msg="StartContainer for \"5cd2a7e26c3ac639fb72dae38eec852272148b3ba564810e5233f571162c670b\"" Jan 14 01:12:17.697814 containerd[2498]: time="2026-01-14T01:12:17.697788010Z" level=info msg="connecting to shim 5cd2a7e26c3ac639fb72dae38eec852272148b3ba564810e5233f571162c670b" address="unix:///run/containerd/s/c83c84eebdb38d325184f7da417f0f06b58a353eddcfa737a30e5fde7388e147" protocol=ttrpc version=3 Jan 14 01:12:17.717150 systemd[1]: Started cri-containerd-5cd2a7e26c3ac639fb72dae38eec852272148b3ba564810e5233f571162c670b.scope - libcontainer container 5cd2a7e26c3ac639fb72dae38eec852272148b3ba564810e5233f571162c670b. 
Jan 14 01:12:17.727000 audit: BPF prog-id=185 op=LOAD Jan 14 01:12:17.727000 audit: BPF prog-id=186 op=LOAD Jan 14 01:12:17.727000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4425 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643261376532366333616336333966623732646165333865656338 Jan 14 01:12:17.727000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:12:17.727000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.727000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643261376532366333616336333966623732646165333865656338 Jan 14 01:12:17.728000 audit: BPF prog-id=187 op=LOAD Jan 14 01:12:17.728000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4425 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.728000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643261376532366333616336333966623732646165333865656338 Jan 14 01:12:17.728000 audit: BPF prog-id=188 op=LOAD Jan 14 01:12:17.728000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4425 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643261376532366333616336333966623732646165333865656338 Jan 14 01:12:17.728000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:12:17.728000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643261376532366333616336333966623732646165333865656338 Jan 14 01:12:17.728000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:12:17.728000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4425 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:17.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643261376532366333616336333966623732646165333865656338 Jan 14 01:12:17.728000 audit: BPF prog-id=189 op=LOAD Jan 14 01:12:17.728000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4425 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:17.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563643261376532366333616336333966623732646165333865656338 Jan 14 01:12:17.765793 containerd[2498]: time="2026-01-14T01:12:17.765713803Z" level=info msg="StartContainer for \"5cd2a7e26c3ac639fb72dae38eec852272148b3ba564810e5233f571162c670b\" returns successfully" Jan 14 01:12:18.421837 kubelet[4014]: I0114 01:12:18.421642 4014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7847884cd9-kk6nq" podStartSLOduration=1.7729396149999999 podStartE2EDuration="4.421626027s" podCreationTimestamp="2026-01-14 01:12:14 +0000 UTC" firstStartedPulling="2026-01-14 01:12:14.983388481 +0000 UTC m=+20.772013136" lastFinishedPulling="2026-01-14 01:12:17.632074889 +0000 UTC m=+23.420699548" observedRunningTime="2026-01-14 01:12:18.41972716 +0000 UTC m=+24.208351851" watchObservedRunningTime="2026-01-14 01:12:18.421626027 +0000 UTC m=+24.210250699" Jan 14 01:12:18.424101 kubelet[4014]: E0114 01:12:18.424077 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 
14 01:12:18.424101 kubelet[4014]: W0114 01:12:18.424095 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.424262 kubelet[4014]: E0114 01:12:18.424114 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.424262 kubelet[4014]: E0114 01:12:18.424239 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.424262 kubelet[4014]: W0114 01:12:18.424245 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.424262 kubelet[4014]: E0114 01:12:18.424253 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.424408 kubelet[4014]: E0114 01:12:18.424350 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.424408 kubelet[4014]: W0114 01:12:18.424356 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.424408 kubelet[4014]: E0114 01:12:18.424362 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.424495 kubelet[4014]: E0114 01:12:18.424492 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.424519 kubelet[4014]: W0114 01:12:18.424498 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.424519 kubelet[4014]: E0114 01:12:18.424506 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.424615 kubelet[4014]: E0114 01:12:18.424599 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.424615 kubelet[4014]: W0114 01:12:18.424607 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.424682 kubelet[4014]: E0114 01:12:18.424615 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.424724 kubelet[4014]: E0114 01:12:18.424701 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.424724 kubelet[4014]: W0114 01:12:18.424706 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.424724 kubelet[4014]: E0114 01:12:18.424713 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.424829 kubelet[4014]: E0114 01:12:18.424799 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.424829 kubelet[4014]: W0114 01:12:18.424804 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.424829 kubelet[4014]: E0114 01:12:18.424810 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.424923 kubelet[4014]: E0114 01:12:18.424898 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.424923 kubelet[4014]: W0114 01:12:18.424902 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.424923 kubelet[4014]: E0114 01:12:18.424908 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.425047 kubelet[4014]: E0114 01:12:18.425024 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.425047 kubelet[4014]: W0114 01:12:18.425030 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.425047 kubelet[4014]: E0114 01:12:18.425036 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.425146 kubelet[4014]: E0114 01:12:18.425132 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.425146 kubelet[4014]: W0114 01:12:18.425137 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.425146 kubelet[4014]: E0114 01:12:18.425143 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.425242 kubelet[4014]: E0114 01:12:18.425229 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.425242 kubelet[4014]: W0114 01:12:18.425233 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.425242 kubelet[4014]: E0114 01:12:18.425239 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.425412 kubelet[4014]: E0114 01:12:18.425386 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.425412 kubelet[4014]: W0114 01:12:18.425407 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.425482 kubelet[4014]: E0114 01:12:18.425415 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.425531 kubelet[4014]: E0114 01:12:18.425515 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.425531 kubelet[4014]: W0114 01:12:18.425522 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.425531 kubelet[4014]: E0114 01:12:18.425528 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.425648 kubelet[4014]: E0114 01:12:18.425618 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.425648 kubelet[4014]: W0114 01:12:18.425623 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.425648 kubelet[4014]: E0114 01:12:18.425630 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.425750 kubelet[4014]: E0114 01:12:18.425740 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.425750 kubelet[4014]: W0114 01:12:18.425747 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.425805 kubelet[4014]: E0114 01:12:18.425754 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.451757 kubelet[4014]: E0114 01:12:18.451734 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.451757 kubelet[4014]: W0114 01:12:18.451750 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.452008 kubelet[4014]: E0114 01:12:18.451765 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.452008 kubelet[4014]: E0114 01:12:18.451904 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.452008 kubelet[4014]: W0114 01:12:18.451910 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.452008 kubelet[4014]: E0114 01:12:18.451918 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.452135 kubelet[4014]: E0114 01:12:18.452097 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.452135 kubelet[4014]: W0114 01:12:18.452107 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.452135 kubelet[4014]: E0114 01:12:18.452117 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.452249 kubelet[4014]: E0114 01:12:18.452236 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.452249 kubelet[4014]: W0114 01:12:18.452245 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.452306 kubelet[4014]: E0114 01:12:18.452252 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.452385 kubelet[4014]: E0114 01:12:18.452373 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.452385 kubelet[4014]: W0114 01:12:18.452381 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.452451 kubelet[4014]: E0114 01:12:18.452388 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.452631 kubelet[4014]: E0114 01:12:18.452605 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.452631 kubelet[4014]: W0114 01:12:18.452630 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.452700 kubelet[4014]: E0114 01:12:18.452639 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.452834 kubelet[4014]: E0114 01:12:18.452821 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.452834 kubelet[4014]: W0114 01:12:18.452831 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.452889 kubelet[4014]: E0114 01:12:18.452841 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.453214 kubelet[4014]: E0114 01:12:18.452998 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.453214 kubelet[4014]: W0114 01:12:18.453004 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.453214 kubelet[4014]: E0114 01:12:18.453012 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.453214 kubelet[4014]: E0114 01:12:18.453122 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.453214 kubelet[4014]: W0114 01:12:18.453127 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.453214 kubelet[4014]: E0114 01:12:18.453133 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.453376 kubelet[4014]: E0114 01:12:18.453222 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.453376 kubelet[4014]: W0114 01:12:18.453227 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.453376 kubelet[4014]: E0114 01:12:18.453234 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.453376 kubelet[4014]: E0114 01:12:18.453340 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.453376 kubelet[4014]: W0114 01:12:18.453346 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.453376 kubelet[4014]: E0114 01:12:18.453353 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.453676 kubelet[4014]: E0114 01:12:18.453663 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.453676 kubelet[4014]: W0114 01:12:18.453674 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.453741 kubelet[4014]: E0114 01:12:18.453683 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.453845 kubelet[4014]: E0114 01:12:18.453833 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.453845 kubelet[4014]: W0114 01:12:18.453841 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.453956 kubelet[4014]: E0114 01:12:18.453849 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.454153 kubelet[4014]: E0114 01:12:18.454118 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.454153 kubelet[4014]: W0114 01:12:18.454129 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.454153 kubelet[4014]: E0114 01:12:18.454139 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.454707 kubelet[4014]: E0114 01:12:18.454688 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.454707 kubelet[4014]: W0114 01:12:18.454705 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.455054 kubelet[4014]: E0114 01:12:18.454720 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.455054 kubelet[4014]: E0114 01:12:18.454854 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.455054 kubelet[4014]: W0114 01:12:18.454861 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.455054 kubelet[4014]: E0114 01:12:18.454869 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:18.455579 kubelet[4014]: E0114 01:12:18.455091 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.455579 kubelet[4014]: W0114 01:12:18.455099 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.455579 kubelet[4014]: E0114 01:12:18.455108 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:12:18.456006 kubelet[4014]: E0114 01:12:18.455895 4014 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:12:18.456006 kubelet[4014]: W0114 01:12:18.455910 4014 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:12:18.456006 kubelet[4014]: E0114 01:12:18.455922 4014 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:12:19.041022 containerd[2498]: time="2026-01-14T01:12:19.040272825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:19.045096 containerd[2498]: time="2026-01-14T01:12:19.045068614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=741" Jan 14 01:12:19.051613 containerd[2498]: time="2026-01-14T01:12:19.051561919Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:19.055362 containerd[2498]: time="2026-01-14T01:12:19.055317987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:19.055816 containerd[2498]: time="2026-01-14T01:12:19.055791901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.423040752s" Jan 14 01:12:19.055862 containerd[2498]: time="2026-01-14T01:12:19.055817462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:12:19.062500 containerd[2498]: time="2026-01-14T01:12:19.062475871Z" level=info msg="CreateContainer within sandbox \"1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:12:19.090920 containerd[2498]: time="2026-01-14T01:12:19.090070462Z" level=info msg="Container 66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:19.106948 containerd[2498]: time="2026-01-14T01:12:19.106923534Z" level=info msg="CreateContainer within sandbox \"1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397\"" Jan 14 01:12:19.107535 containerd[2498]: time="2026-01-14T01:12:19.107505645Z" level=info msg="StartContainer for \"66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397\"" Jan 14 01:12:19.109586 containerd[2498]: time="2026-01-14T01:12:19.109261120Z" level=info msg="connecting to shim 66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397" address="unix:///run/containerd/s/76f98005099cb3c0d9534537ea906be044b4d5f8032d5891f47e90ed3e8a4380" protocol=ttrpc version=3 Jan 14 01:12:19.135176 systemd[1]: Started cri-containerd-66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397.scope - libcontainer container 66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397. 
Jan 14 01:12:19.162000 audit: BPF prog-id=190 op=LOAD Jan 14 01:12:19.164647 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 14 01:12:19.164714 kernel: audit: type=1334 audit(1768353139.162:594): prog-id=190 op=LOAD Jan 14 01:12:19.162000 audit[4699]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4506 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:19.170231 kernel: audit: type=1300 audit(1768353139.162:594): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4506 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:19.177544 kernel: audit: type=1327 audit(1768353139.162:594): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613166313265313635343339323833326436376632326262646235 Jan 14 01:12:19.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613166313265313635343339323833326436376632326262646235 Jan 14 01:12:19.179378 kernel: audit: type=1334 audit(1768353139.162:595): prog-id=191 op=LOAD Jan 14 01:12:19.162000 audit: BPF prog-id=191 op=LOAD Jan 14 01:12:19.162000 audit[4699]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4506 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:19.183560 kernel: audit: type=1300 audit(1768353139.162:595): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4506 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:19.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613166313265313635343339323833326436376632326262646235 Jan 14 01:12:19.187885 kernel: audit: type=1327 audit(1768353139.162:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613166313265313635343339323833326436376632326262646235 Jan 14 01:12:19.191042 kernel: audit: type=1334 audit(1768353139.162:596): prog-id=191 op=UNLOAD Jan 14 01:12:19.162000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:12:19.200111 kernel: audit: type=1300 audit(1768353139.162:596): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:19.162000 audit[4699]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:19.205947 kernel: audit: type=1327 audit(1768353139.162:596): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613166313265313635343339323833326436376632326262646235 Jan 14 01:12:19.206689 kernel: audit: type=1334 audit(1768353139.162:597): prog-id=190 op=UNLOAD Jan 14 01:12:19.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613166313265313635343339323833326436376632326262646235 Jan 14 01:12:19.162000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:12:19.162000 audit[4699]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:19.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613166313265313635343339323833326436376632326262646235 Jan 14 01:12:19.162000 audit: BPF prog-id=192 op=LOAD Jan 14 01:12:19.162000 audit[4699]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4506 pid=4699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:19.162000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636613166313265313635343339323833326436376632326262646235 Jan 14 01:12:19.221998 containerd[2498]: time="2026-01-14T01:12:19.221858847Z" level=info msg="StartContainer for \"66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397\" returns successfully" Jan 14 01:12:19.224784 systemd[1]: cri-containerd-66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397.scope: Deactivated successfully. Jan 14 01:12:19.226000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:12:19.229253 containerd[2498]: time="2026-01-14T01:12:19.229226032Z" level=info msg="received container exit event container_id:\"66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397\" id:\"66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397\" pid:4711 exited_at:{seconds:1768353139 nanos:228725414}" Jan 14 01:12:19.248874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66a1f12e1654392832d67f22bbdb54d43a1b990db60dab28a82ef59d849a4397-rootfs.mount: Deactivated successfully. 
Jan 14 01:12:19.317806 kubelet[4014]: E0114 01:12:19.317697 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:12:19.392271 kubelet[4014]: I0114 01:12:19.392188 4014 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:12:20.397302 containerd[2498]: time="2026-01-14T01:12:20.397265324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:12:21.317238 kubelet[4014]: E0114 01:12:21.317198 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:12:23.317714 kubelet[4014]: E0114 01:12:23.317668 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:12:24.131296 containerd[2498]: time="2026-01-14T01:12:24.131251061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:24.134054 containerd[2498]: time="2026-01-14T01:12:24.133923709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70443032" Jan 14 01:12:24.136689 containerd[2498]: time="2026-01-14T01:12:24.136665009Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:24.140509 containerd[2498]: time="2026-01-14T01:12:24.140464519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:12:24.140914 containerd[2498]: time="2026-01-14T01:12:24.140891182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.743584844s" Jan 14 01:12:24.140956 containerd[2498]: time="2026-01-14T01:12:24.140922402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:12:24.148901 containerd[2498]: time="2026-01-14T01:12:24.148872195Z" level=info msg="CreateContainer within sandbox \"1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:12:24.170060 containerd[2498]: time="2026-01-14T01:12:24.169055328Z" level=info msg="Container f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:24.191111 containerd[2498]: time="2026-01-14T01:12:24.191083177Z" level=info msg="CreateContainer within sandbox \"1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03\"" Jan 14 01:12:24.191724 containerd[2498]: time="2026-01-14T01:12:24.191670686Z" 
level=info msg="StartContainer for \"f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03\"" Jan 14 01:12:24.193495 containerd[2498]: time="2026-01-14T01:12:24.193461513Z" level=info msg="connecting to shim f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03" address="unix:///run/containerd/s/76f98005099cb3c0d9534537ea906be044b4d5f8032d5891f47e90ed3e8a4380" protocol=ttrpc version=3 Jan 14 01:12:24.221147 systemd[1]: Started cri-containerd-f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03.scope - libcontainer container f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03. Jan 14 01:12:24.259000 audit: BPF prog-id=193 op=LOAD Jan 14 01:12:24.262638 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:12:24.262721 kernel: audit: type=1334 audit(1768353144.259:600): prog-id=193 op=LOAD Jan 14 01:12:24.259000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4506 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.269992 kernel: audit: type=1300 audit(1768353144.259:600): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4506 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634353132373366613639356237323039633365633334313765363231 Jan 14 01:12:24.274813 kernel: audit: type=1327 audit(1768353144.259:600): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634353132373366613639356237323039633365633334313765363231 Jan 14 01:12:24.260000 audit: BPF prog-id=194 op=LOAD Jan 14 01:12:24.277942 kernel: audit: type=1334 audit(1768353144.260:601): prog-id=194 op=LOAD Jan 14 01:12:24.260000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4506 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.286101 kernel: audit: type=1300 audit(1768353144.260:601): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4506 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634353132373366613639356237323039633365633334313765363231 Jan 14 01:12:24.295033 kernel: audit: type=1327 audit(1768353144.260:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634353132373366613639356237323039633365633334313765363231 Jan 14 01:12:24.260000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:12:24.298048 kernel: audit: type=1334 audit(1768353144.260:602): prog-id=194 op=UNLOAD Jan 14 01:12:24.260000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 
a3=0 items=0 ppid=4506 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.303284 kernel: audit: type=1300 audit(1768353144.260:602): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634353132373366613639356237323039633365633334313765363231 Jan 14 01:12:24.260000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:12:24.309854 kernel: audit: type=1327 audit(1768353144.260:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634353132373366613639356237323039633365633334313765363231 Jan 14 01:12:24.309904 kernel: audit: type=1334 audit(1768353144.260:603): prog-id=193 op=UNLOAD Jan 14 01:12:24.260000 audit[4757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634353132373366613639356237323039633365633334313765363231 Jan 14 01:12:24.260000 audit: 
BPF prog-id=195 op=LOAD Jan 14 01:12:24.260000 audit[4757]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4506 pid=4757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:24.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634353132373366613639356237323039633365633334313765363231 Jan 14 01:12:24.314362 containerd[2498]: time="2026-01-14T01:12:24.314334916Z" level=info msg="StartContainer for \"f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03\" returns successfully" Jan 14 01:12:25.317385 kubelet[4014]: E0114 01:12:25.317347 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:12:25.571622 systemd[1]: cri-containerd-f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03.scope: Deactivated successfully. Jan 14 01:12:25.572479 systemd[1]: cri-containerd-f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03.scope: Consumed 438ms CPU time, 194.2M memory peak, 171.3M written to disk. 
Jan 14 01:12:25.574907 containerd[2498]: time="2026-01-14T01:12:25.574863992Z" level=info msg="received container exit event container_id:\"f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03\" id:\"f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03\" pid:4769 exited_at:{seconds:1768353145 nanos:574665041}" Jan 14 01:12:25.575000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:12:25.594075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f451273fa695b7209c3ec3417e621ba1b41e8ab3826b711ca107953f11423e03-rootfs.mount: Deactivated successfully. Jan 14 01:12:25.612006 kubelet[4014]: I0114 01:12:25.611507 4014 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:12:25.957774 systemd[1]: Created slice kubepods-burstable-pod95e43c77_2bc5_456f_aaea_ff54d5c19984.slice - libcontainer container kubepods-burstable-pod95e43c77_2bc5_456f_aaea_ff54d5c19984.slice. Jan 14 01:12:26.070003 kubelet[4014]: I0114 01:12:26.004932 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpml4\" (UniqueName: \"kubernetes.io/projected/95e43c77-2bc5-456f-aaea-ff54d5c19984-kube-api-access-zpml4\") pod \"coredns-674b8bbfcf-t9cxc\" (UID: \"95e43c77-2bc5-456f-aaea-ff54d5c19984\") " pod="kube-system/coredns-674b8bbfcf-t9cxc" Jan 14 01:12:26.070003 kubelet[4014]: I0114 01:12:26.004964 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95e43c77-2bc5-456f-aaea-ff54d5c19984-config-volume\") pod \"coredns-674b8bbfcf-t9cxc\" (UID: \"95e43c77-2bc5-456f-aaea-ff54d5c19984\") " pod="kube-system/coredns-674b8bbfcf-t9cxc" Jan 14 01:12:26.082569 systemd[1]: Created slice kubepods-burstable-pod71463554_7ede_47f5_b4f2_9e4bbfb9f8b1.slice - libcontainer container kubepods-burstable-pod71463554_7ede_47f5_b4f2_9e4bbfb9f8b1.slice. 
Jan 14 01:12:26.206582 kubelet[4014]: I0114 01:12:26.206523 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71463554-7ede-47f5-b4f2-9e4bbfb9f8b1-config-volume\") pod \"coredns-674b8bbfcf-47d4s\" (UID: \"71463554-7ede-47f5-b4f2-9e4bbfb9f8b1\") " pod="kube-system/coredns-674b8bbfcf-47d4s" Jan 14 01:12:26.206752 kubelet[4014]: I0114 01:12:26.206609 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prt5f\" (UniqueName: \"kubernetes.io/projected/71463554-7ede-47f5-b4f2-9e4bbfb9f8b1-kube-api-access-prt5f\") pod \"coredns-674b8bbfcf-47d4s\" (UID: \"71463554-7ede-47f5-b4f2-9e4bbfb9f8b1\") " pod="kube-system/coredns-674b8bbfcf-47d4s" Jan 14 01:12:26.375033 containerd[2498]: time="2026-01-14T01:12:26.374957877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9cxc,Uid:95e43c77-2bc5-456f-aaea-ff54d5c19984,Namespace:kube-system,Attempt:0,}" Jan 14 01:12:26.379428 systemd[1]: Created slice kubepods-besteffort-pod486952cc_8944_4287_a101_bc04fbfa2173.slice - libcontainer container kubepods-besteffort-pod486952cc_8944_4287_a101_bc04fbfa2173.slice. 
Jan 14 01:12:26.384911 containerd[2498]: time="2026-01-14T01:12:26.384877131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-47d4s,Uid:71463554-7ede-47f5-b4f2-9e4bbfb9f8b1,Namespace:kube-system,Attempt:0,}" Jan 14 01:12:26.408194 kubelet[4014]: I0114 01:12:26.408169 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/486952cc-8944-4287-a101-bc04fbfa2173-tigera-ca-bundle\") pod \"calico-kube-controllers-7c84b9c95c-shkh8\" (UID: \"486952cc-8944-4287-a101-bc04fbfa2173\") " pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" Jan 14 01:12:26.408460 kubelet[4014]: I0114 01:12:26.408206 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znmt9\" (UniqueName: \"kubernetes.io/projected/486952cc-8944-4287-a101-bc04fbfa2173-kube-api-access-znmt9\") pod \"calico-kube-controllers-7c84b9c95c-shkh8\" (UID: \"486952cc-8944-4287-a101-bc04fbfa2173\") " pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" Jan 14 01:12:26.519260 systemd[1]: Created slice kubepods-besteffort-pod7c277774_5617_4094_89b3_d4c788250cae.slice - libcontainer container kubepods-besteffort-pod7c277774_5617_4094_89b3_d4c788250cae.slice. Jan 14 01:12:26.554680 systemd[1]: Created slice kubepods-besteffort-pod2787023d_e18f_4684_a5a9_b7a3e47eb555.slice - libcontainer container kubepods-besteffort-pod2787023d_e18f_4684_a5a9_b7a3e47eb555.slice. Jan 14 01:12:26.571756 systemd[1]: Created slice kubepods-besteffort-pode0f0f92f_0f7f_41a7_be1b_6c10ab1af0c8.slice - libcontainer container kubepods-besteffort-pode0f0f92f_0f7f_41a7_be1b_6c10ab1af0c8.slice. Jan 14 01:12:26.581387 systemd[1]: Created slice kubepods-besteffort-pod8612edc9_7707_465b_bd59_44c1e0af599e.slice - libcontainer container kubepods-besteffort-pod8612edc9_7707_465b_bd59_44c1e0af599e.slice. 
Jan 14 01:12:26.610867 kubelet[4014]: I0114 01:12:26.610522 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8-config\") pod \"goldmane-666569f655-rz5pp\" (UID: \"e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8\") " pod="calico-system/goldmane-666569f655-rz5pp" Jan 14 01:12:26.610867 kubelet[4014]: I0114 01:12:26.610558 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8-goldmane-ca-bundle\") pod \"goldmane-666569f655-rz5pp\" (UID: \"e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8\") " pod="calico-system/goldmane-666569f655-rz5pp" Jan 14 01:12:26.610867 kubelet[4014]: I0114 01:12:26.610579 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzbd\" (UniqueName: \"kubernetes.io/projected/e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8-kube-api-access-qkzbd\") pod \"goldmane-666569f655-rz5pp\" (UID: \"e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8\") " pod="calico-system/goldmane-666569f655-rz5pp" Jan 14 01:12:26.610867 kubelet[4014]: I0114 01:12:26.610600 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c277774-5617-4094-89b3-d4c788250cae-calico-apiserver-certs\") pod \"calico-apiserver-78d8c97c7f-fb6hx\" (UID: \"7c277774-5617-4094-89b3-d4c788250cae\") " pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" Jan 14 01:12:26.610867 kubelet[4014]: I0114 01:12:26.610620 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkpqk\" (UniqueName: \"kubernetes.io/projected/7c277774-5617-4094-89b3-d4c788250cae-kube-api-access-fkpqk\") pod \"calico-apiserver-78d8c97c7f-fb6hx\" (UID: 
\"7c277774-5617-4094-89b3-d4c788250cae\") " pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" Jan 14 01:12:26.611108 kubelet[4014]: I0114 01:12:26.610641 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8612edc9-7707-465b-bd59-44c1e0af599e-calico-apiserver-certs\") pod \"calico-apiserver-78d8c97c7f-cl2ls\" (UID: \"8612edc9-7707-465b-bd59-44c1e0af599e\") " pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" Jan 14 01:12:26.611108 kubelet[4014]: I0114 01:12:26.610661 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbtxc\" (UniqueName: \"kubernetes.io/projected/2787023d-e18f-4684-a5a9-b7a3e47eb555-kube-api-access-wbtxc\") pod \"whisker-575f6df57b-k24hn\" (UID: \"2787023d-e18f-4684-a5a9-b7a3e47eb555\") " pod="calico-system/whisker-575f6df57b-k24hn" Jan 14 01:12:26.611108 kubelet[4014]: I0114 01:12:26.610682 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-955v5\" (UniqueName: \"kubernetes.io/projected/8612edc9-7707-465b-bd59-44c1e0af599e-kube-api-access-955v5\") pod \"calico-apiserver-78d8c97c7f-cl2ls\" (UID: \"8612edc9-7707-465b-bd59-44c1e0af599e\") " pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" Jan 14 01:12:26.611108 kubelet[4014]: I0114 01:12:26.610701 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2787023d-e18f-4684-a5a9-b7a3e47eb555-whisker-ca-bundle\") pod \"whisker-575f6df57b-k24hn\" (UID: \"2787023d-e18f-4684-a5a9-b7a3e47eb555\") " pod="calico-system/whisker-575f6df57b-k24hn" Jan 14 01:12:26.611108 kubelet[4014]: I0114 01:12:26.610723 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/2787023d-e18f-4684-a5a9-b7a3e47eb555-whisker-backend-key-pair\") pod \"whisker-575f6df57b-k24hn\" (UID: \"2787023d-e18f-4684-a5a9-b7a3e47eb555\") " pod="calico-system/whisker-575f6df57b-k24hn" Jan 14 01:12:26.611240 kubelet[4014]: I0114 01:12:26.610744 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8-goldmane-key-pair\") pod \"goldmane-666569f655-rz5pp\" (UID: \"e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8\") " pod="calico-system/goldmane-666569f655-rz5pp" Jan 14 01:12:26.623529 containerd[2498]: time="2026-01-14T01:12:26.623250007Z" level=error msg="Failed to destroy network for sandbox \"4e3992a2f36a886dcbb55c96fd8671c3485456a257b0d137315094df97b478f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.625591 systemd[1]: run-netns-cni\x2dd31afc88\x2d4f8a\x2d62fc\x2d5d43\x2d89649a1cf854.mount: Deactivated successfully. 
Jan 14 01:12:26.633951 containerd[2498]: time="2026-01-14T01:12:26.633904269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-47d4s,Uid:71463554-7ede-47f5-b4f2-9e4bbfb9f8b1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3992a2f36a886dcbb55c96fd8671c3485456a257b0d137315094df97b478f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.634344 kubelet[4014]: E0114 01:12:26.634314 4014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3992a2f36a886dcbb55c96fd8671c3485456a257b0d137315094df97b478f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.634473 kubelet[4014]: E0114 01:12:26.634460 4014 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3992a2f36a886dcbb55c96fd8671c3485456a257b0d137315094df97b478f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-47d4s" Jan 14 01:12:26.634560 kubelet[4014]: E0114 01:12:26.634548 4014 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e3992a2f36a886dcbb55c96fd8671c3485456a257b0d137315094df97b478f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-47d4s" Jan 14 01:12:26.634671 kubelet[4014]: E0114 01:12:26.634653 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-47d4s_kube-system(71463554-7ede-47f5-b4f2-9e4bbfb9f8b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-47d4s_kube-system(71463554-7ede-47f5-b4f2-9e4bbfb9f8b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e3992a2f36a886dcbb55c96fd8671c3485456a257b0d137315094df97b478f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-47d4s" podUID="71463554-7ede-47f5-b4f2-9e4bbfb9f8b1" Jan 14 01:12:26.638582 containerd[2498]: time="2026-01-14T01:12:26.638550029Z" level=error msg="Failed to destroy network for sandbox \"bffb232e024e35f31af2e97908eed4bba212d6bda99c39f2e095bb50daf52725\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.640440 systemd[1]: run-netns-cni\x2d6b7ab770\x2d4072\x2dfb79\x2d0792\x2de0e64c41f9fc.mount: Deactivated successfully. 
Jan 14 01:12:26.645740 containerd[2498]: time="2026-01-14T01:12:26.645708158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9cxc,Uid:95e43c77-2bc5-456f-aaea-ff54d5c19984,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bffb232e024e35f31af2e97908eed4bba212d6bda99c39f2e095bb50daf52725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.645915 kubelet[4014]: E0114 01:12:26.645879 4014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bffb232e024e35f31af2e97908eed4bba212d6bda99c39f2e095bb50daf52725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.645989 kubelet[4014]: E0114 01:12:26.645935 4014 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bffb232e024e35f31af2e97908eed4bba212d6bda99c39f2e095bb50daf52725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t9cxc" Jan 14 01:12:26.645989 kubelet[4014]: E0114 01:12:26.645956 4014 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bffb232e024e35f31af2e97908eed4bba212d6bda99c39f2e095bb50daf52725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-t9cxc" Jan 14 01:12:26.646048 kubelet[4014]: E0114 01:12:26.646027 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t9cxc_kube-system(95e43c77-2bc5-456f-aaea-ff54d5c19984)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t9cxc_kube-system(95e43c77-2bc5-456f-aaea-ff54d5c19984)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bffb232e024e35f31af2e97908eed4bba212d6bda99c39f2e095bb50daf52725\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t9cxc" podUID="95e43c77-2bc5-456f-aaea-ff54d5c19984" Jan 14 01:12:26.682339 containerd[2498]: time="2026-01-14T01:12:26.682314314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c84b9c95c-shkh8,Uid:486952cc-8944-4287-a101-bc04fbfa2173,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:26.749686 containerd[2498]: time="2026-01-14T01:12:26.749486671Z" level=error msg="Failed to destroy network for sandbox \"7c7959eeec4e0bc50e66805b93ca49ac0d6f516119b56652d0d214998d26a085\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.761091 containerd[2498]: time="2026-01-14T01:12:26.761050164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c84b9c95c-shkh8,Uid:486952cc-8944-4287-a101-bc04fbfa2173,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7959eeec4e0bc50e66805b93ca49ac0d6f516119b56652d0d214998d26a085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.761257 kubelet[4014]: E0114 01:12:26.761226 4014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7959eeec4e0bc50e66805b93ca49ac0d6f516119b56652d0d214998d26a085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.761310 kubelet[4014]: E0114 01:12:26.761273 4014 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7959eeec4e0bc50e66805b93ca49ac0d6f516119b56652d0d214998d26a085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" Jan 14 01:12:26.761310 kubelet[4014]: E0114 01:12:26.761295 4014 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c7959eeec4e0bc50e66805b93ca49ac0d6f516119b56652d0d214998d26a085\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" Jan 14 01:12:26.761381 kubelet[4014]: E0114 01:12:26.761348 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c84b9c95c-shkh8_calico-system(486952cc-8944-4287-a101-bc04fbfa2173)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c84b9c95c-shkh8_calico-system(486952cc-8944-4287-a101-bc04fbfa2173)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"7c7959eeec4e0bc50e66805b93ca49ac0d6f516119b56652d0d214998d26a085\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:12:26.839539 containerd[2498]: time="2026-01-14T01:12:26.839506108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8c97c7f-fb6hx,Uid:7c277774-5617-4094-89b3-d4c788250cae,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:12:26.866255 containerd[2498]: time="2026-01-14T01:12:26.866092499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-575f6df57b-k24hn,Uid:2787023d-e18f-4684-a5a9-b7a3e47eb555,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:26.880036 containerd[2498]: time="2026-01-14T01:12:26.879945888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rz5pp,Uid:e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:26.887007 containerd[2498]: time="2026-01-14T01:12:26.886965780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8c97c7f-cl2ls,Uid:8612edc9-7707-465b-bd59-44c1e0af599e,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:12:26.894036 containerd[2498]: time="2026-01-14T01:12:26.893998725Z" level=error msg="Failed to destroy network for sandbox \"6832307d8e172cb348f9bd9b573aade85adf44086ad706309f792bff2fbbf58a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.935778 containerd[2498]: time="2026-01-14T01:12:26.935740155Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-78d8c97c7f-fb6hx,Uid:7c277774-5617-4094-89b3-d4c788250cae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6832307d8e172cb348f9bd9b573aade85adf44086ad706309f792bff2fbbf58a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.936150 kubelet[4014]: E0114 01:12:26.936112 4014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6832307d8e172cb348f9bd9b573aade85adf44086ad706309f792bff2fbbf58a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.936229 kubelet[4014]: E0114 01:12:26.936176 4014 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6832307d8e172cb348f9bd9b573aade85adf44086ad706309f792bff2fbbf58a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" Jan 14 01:12:26.936229 kubelet[4014]: E0114 01:12:26.936201 4014 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6832307d8e172cb348f9bd9b573aade85adf44086ad706309f792bff2fbbf58a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" Jan 14 01:12:26.936288 kubelet[4014]: E0114 01:12:26.936262 4014 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d8c97c7f-fb6hx_calico-apiserver(7c277774-5617-4094-89b3-d4c788250cae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78d8c97c7f-fb6hx_calico-apiserver(7c277774-5617-4094-89b3-d4c788250cae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6832307d8e172cb348f9bd9b573aade85adf44086ad706309f792bff2fbbf58a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:12:26.961115 containerd[2498]: time="2026-01-14T01:12:26.961063571Z" level=error msg="Failed to destroy network for sandbox \"9f4d5e9e40a1c3804fb419a95572aa5fa8af82ffed2bb984c8f38568f0de2ec0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.965093 containerd[2498]: time="2026-01-14T01:12:26.965041456Z" level=error msg="Failed to destroy network for sandbox \"590b32958f2874cdbd3123af8299fd5f67da311b75ddb7506bb5b70c005ddd5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.968516 containerd[2498]: time="2026-01-14T01:12:26.968261749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rz5pp,Uid:e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4d5e9e40a1c3804fb419a95572aa5fa8af82ffed2bb984c8f38568f0de2ec0\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.968764 kubelet[4014]: E0114 01:12:26.968725 4014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4d5e9e40a1c3804fb419a95572aa5fa8af82ffed2bb984c8f38568f0de2ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.968829 kubelet[4014]: E0114 01:12:26.968785 4014 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4d5e9e40a1c3804fb419a95572aa5fa8af82ffed2bb984c8f38568f0de2ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rz5pp" Jan 14 01:12:26.968829 kubelet[4014]: E0114 01:12:26.968808 4014 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f4d5e9e40a1c3804fb419a95572aa5fa8af82ffed2bb984c8f38568f0de2ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-rz5pp" Jan 14 01:12:26.968902 kubelet[4014]: E0114 01:12:26.968878 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-rz5pp_calico-system(e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-rz5pp_calico-system(e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"9f4d5e9e40a1c3804fb419a95572aa5fa8af82ffed2bb984c8f38568f0de2ec0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:12:26.978858 containerd[2498]: time="2026-01-14T01:12:26.978782717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-575f6df57b-k24hn,Uid:2787023d-e18f-4684-a5a9-b7a3e47eb555,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"590b32958f2874cdbd3123af8299fd5f67da311b75ddb7506bb5b70c005ddd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.979508 kubelet[4014]: E0114 01:12:26.979480 4014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"590b32958f2874cdbd3123af8299fd5f67da311b75ddb7506bb5b70c005ddd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.979933 kubelet[4014]: E0114 01:12:26.979618 4014 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"590b32958f2874cdbd3123af8299fd5f67da311b75ddb7506bb5b70c005ddd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-575f6df57b-k24hn" Jan 14 01:12:26.979933 kubelet[4014]: E0114 01:12:26.979644 4014 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"590b32958f2874cdbd3123af8299fd5f67da311b75ddb7506bb5b70c005ddd5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-575f6df57b-k24hn" Jan 14 01:12:26.979933 kubelet[4014]: E0114 01:12:26.979702 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-575f6df57b-k24hn_calico-system(2787023d-e18f-4684-a5a9-b7a3e47eb555)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-575f6df57b-k24hn_calico-system(2787023d-e18f-4684-a5a9-b7a3e47eb555)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"590b32958f2874cdbd3123af8299fd5f67da311b75ddb7506bb5b70c005ddd5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-575f6df57b-k24hn" podUID="2787023d-e18f-4684-a5a9-b7a3e47eb555" Jan 14 01:12:26.985337 containerd[2498]: time="2026-01-14T01:12:26.985292411Z" level=error msg="Failed to destroy network for sandbox \"596196f10b76ae4c0cad2a765f60632d8404cf3ddf92cf706901b17e12b336b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.992258 containerd[2498]: time="2026-01-14T01:12:26.992225201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8c97c7f-cl2ls,Uid:8612edc9-7707-465b-bd59-44c1e0af599e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"596196f10b76ae4c0cad2a765f60632d8404cf3ddf92cf706901b17e12b336b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.992417 kubelet[4014]: E0114 01:12:26.992384 4014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"596196f10b76ae4c0cad2a765f60632d8404cf3ddf92cf706901b17e12b336b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:26.992489 kubelet[4014]: E0114 01:12:26.992469 4014 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"596196f10b76ae4c0cad2a765f60632d8404cf3ddf92cf706901b17e12b336b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" Jan 14 01:12:26.992532 kubelet[4014]: E0114 01:12:26.992495 4014 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"596196f10b76ae4c0cad2a765f60632d8404cf3ddf92cf706901b17e12b336b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" Jan 14 01:12:26.992574 kubelet[4014]: E0114 01:12:26.992555 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78d8c97c7f-cl2ls_calico-apiserver(8612edc9-7707-465b-bd59-44c1e0af599e)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-78d8c97c7f-cl2ls_calico-apiserver(8612edc9-7707-465b-bd59-44c1e0af599e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"596196f10b76ae4c0cad2a765f60632d8404cf3ddf92cf706901b17e12b336b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:12:27.322367 systemd[1]: Created slice kubepods-besteffort-poda24a17a9_73d6_4ce8_b8ef_5be32d60ba56.slice - libcontainer container kubepods-besteffort-poda24a17a9_73d6_4ce8_b8ef_5be32d60ba56.slice. Jan 14 01:12:27.324950 containerd[2498]: time="2026-01-14T01:12:27.324915821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96vlw,Uid:a24a17a9-73d6-4ce8-b8ef-5be32d60ba56,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:27.371751 containerd[2498]: time="2026-01-14T01:12:27.371719172Z" level=error msg="Failed to destroy network for sandbox \"ae985b9e0ab84df6f3c2c3d7ae6b950360af3028bf3295cb29b7ab7b8684b2e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:27.377334 containerd[2498]: time="2026-01-14T01:12:27.377294242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96vlw,Uid:a24a17a9-73d6-4ce8-b8ef-5be32d60ba56,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae985b9e0ab84df6f3c2c3d7ae6b950360af3028bf3295cb29b7ab7b8684b2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:12:27.377558 kubelet[4014]: E0114 
01:12:27.377512 4014 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae985b9e0ab84df6f3c2c3d7ae6b950360af3028bf3295cb29b7ab7b8684b2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 14 01:12:27.377622 kubelet[4014]: E0114 01:12:27.377583 4014 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae985b9e0ab84df6f3c2c3d7ae6b950360af3028bf3295cb29b7ab7b8684b2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-96vlw"
Jan 14 01:12:27.377659 kubelet[4014]: E0114 01:12:27.377617 4014 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae985b9e0ab84df6f3c2c3d7ae6b950360af3028bf3295cb29b7ab7b8684b2e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-96vlw"
Jan 14 01:12:27.377707 kubelet[4014]: E0114 01:12:27.377670 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae985b9e0ab84df6f3c2c3d7ae6b950360af3028bf3295cb29b7ab7b8684b2e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56"
Jan 14 01:12:27.417262 containerd[2498]: time="2026-01-14T01:12:27.417231089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 14 01:12:27.601904 systemd[1]: run-netns-cni\x2db4ef143e\x2d09f1\x2d0512\x2df09b\x2dcc88251c8b6a.mount: Deactivated successfully.
Jan 14 01:12:29.642450 kubelet[4014]: I0114 01:12:29.642052 4014 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 14 01:12:29.678004 kernel: kauditd_printk_skb: 6 callbacks suppressed
Jan 14 01:12:29.678132 kernel: audit: type=1325 audit(1768353149.675:606): table=filter:122 family=2 entries=21 op=nft_register_rule pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:12:29.675000 audit[5023]: NETFILTER_CFG table=filter:122 family=2 entries=21 op=nft_register_rule pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:12:29.690500 kernel: audit: type=1300 audit(1768353149.675:606): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe6108b860 a2=0 a3=7ffe6108b84c items=0 ppid=4120 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:29.675000 audit[5023]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe6108b860 a2=0 a3=7ffe6108b84c items=0 ppid=4120 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:29.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:12:29.695990 kernel: audit: type=1327 audit(1768353149.675:606): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:12:29.695000 audit[5023]: NETFILTER_CFG table=nat:123 family=2 entries=19 op=nft_register_chain pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:12:29.695000 audit[5023]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe6108b860 a2=0 a3=7ffe6108b84c items=0 ppid=4120 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:29.702956 kernel: audit: type=1325 audit(1768353149.695:607): table=nat:123 family=2 entries=19 op=nft_register_chain pid=5023 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:12:29.703020 kernel: audit: type=1300 audit(1768353149.695:607): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe6108b860 a2=0 a3=7ffe6108b84c items=0 ppid=4120 pid=5023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:29.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:12:29.705674 kernel: audit: type=1327 audit(1768353149.695:607): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:12:34.344939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2563912893.mount: Deactivated successfully.
Jan 14 01:12:34.378035 containerd[2498]: time="2026-01-14T01:12:34.377990807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:12:34.380531 containerd[2498]: time="2026-01-14T01:12:34.380496002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025"
Jan 14 01:12:34.385051 containerd[2498]: time="2026-01-14T01:12:34.384908125Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:12:34.388223 containerd[2498]: time="2026-01-14T01:12:34.388178526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 01:12:34.388619 containerd[2498]: time="2026-01-14T01:12:34.388463552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.971074843s"
Jan 14 01:12:34.388619 containerd[2498]: time="2026-01-14T01:12:34.388493371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Jan 14 01:12:34.407179 containerd[2498]: time="2026-01-14T01:12:34.407143433Z" level=info msg="CreateContainer within sandbox \"1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 14 01:12:34.441260 containerd[2498]: time="2026-01-14T01:12:34.441231749Z" level=info msg="Container e4f0f1125028b496ed62e84b21f2c1387ee68a0354e02fbf5a219a39459d5551: CDI devices from CRI Config.CDIDevices: []"
Jan 14 01:12:34.446160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3976999455.mount: Deactivated successfully.
Jan 14 01:12:34.459572 containerd[2498]: time="2026-01-14T01:12:34.459543500Z" level=info msg="CreateContainer within sandbox \"1c00d5a0e9b8032ff8ad891ed38ad335e3d4c89211cc42aa3b4fa965dda0e3af\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e4f0f1125028b496ed62e84b21f2c1387ee68a0354e02fbf5a219a39459d5551\""
Jan 14 01:12:34.460068 containerd[2498]: time="2026-01-14T01:12:34.459953177Z" level=info msg="StartContainer for \"e4f0f1125028b496ed62e84b21f2c1387ee68a0354e02fbf5a219a39459d5551\""
Jan 14 01:12:34.461676 containerd[2498]: time="2026-01-14T01:12:34.461624726Z" level=info msg="connecting to shim e4f0f1125028b496ed62e84b21f2c1387ee68a0354e02fbf5a219a39459d5551" address="unix:///run/containerd/s/76f98005099cb3c0d9534537ea906be044b4d5f8032d5891f47e90ed3e8a4380" protocol=ttrpc version=3
Jan 14 01:12:34.483135 systemd[1]: Started cri-containerd-e4f0f1125028b496ed62e84b21f2c1387ee68a0354e02fbf5a219a39459d5551.scope - libcontainer container e4f0f1125028b496ed62e84b21f2c1387ee68a0354e02fbf5a219a39459d5551.
Jan 14 01:12:34.547000 audit: BPF prog-id=196 op=LOAD
Jan 14 01:12:34.547000 audit[5030]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4506 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:34.554901 kernel: audit: type=1334 audit(1768353154.547:608): prog-id=196 op=LOAD
Jan 14 01:12:34.554950 kernel: audit: type=1300 audit(1768353154.547:608): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4506 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:34.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663066313132353032386234393665643632653834623231663263
Jan 14 01:12:34.561156 kernel: audit: type=1327 audit(1768353154.547:608): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663066313132353032386234393665643632653834623231663263
Jan 14 01:12:34.547000 audit: BPF prog-id=197 op=LOAD
Jan 14 01:12:34.564040 kernel: audit: type=1334 audit(1768353154.547:609): prog-id=197 op=LOAD
Jan 14 01:12:34.547000 audit[5030]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4506 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:34.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663066313132353032386234393665643632653834623231663263
Jan 14 01:12:34.547000 audit: BPF prog-id=197 op=UNLOAD
Jan 14 01:12:34.547000 audit[5030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:34.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663066313132353032386234393665643632653834623231663263
Jan 14 01:12:34.547000 audit: BPF prog-id=196 op=UNLOAD
Jan 14 01:12:34.547000 audit[5030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:34.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663066313132353032386234393665643632653834623231663263
Jan 14 01:12:34.547000 audit: BPF prog-id=198 op=LOAD
Jan 14 01:12:34.547000 audit[5030]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4506 pid=5030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:34.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534663066313132353032386234393665643632653834623231663263
Jan 14 01:12:34.589260 containerd[2498]: time="2026-01-14T01:12:34.589164858Z" level=info msg="StartContainer for \"e4f0f1125028b496ed62e84b21f2c1387ee68a0354e02fbf5a219a39459d5551\" returns successfully"
Jan 14 01:12:34.901695 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 14 01:12:34.901823 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 14 01:12:35.162127 kubelet[4014]: I0114 01:12:35.162023 4014 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbtxc\" (UniqueName: \"kubernetes.io/projected/2787023d-e18f-4684-a5a9-b7a3e47eb555-kube-api-access-wbtxc\") pod \"2787023d-e18f-4684-a5a9-b7a3e47eb555\" (UID: \"2787023d-e18f-4684-a5a9-b7a3e47eb555\") "
Jan 14 01:12:35.162127 kubelet[4014]: I0114 01:12:35.162086 4014 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2787023d-e18f-4684-a5a9-b7a3e47eb555-whisker-ca-bundle\") pod \"2787023d-e18f-4684-a5a9-b7a3e47eb555\" (UID: \"2787023d-e18f-4684-a5a9-b7a3e47eb555\") "
Jan 14 01:12:35.162127 kubelet[4014]: I0114 01:12:35.162113 4014 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2787023d-e18f-4684-a5a9-b7a3e47eb555-whisker-backend-key-pair\") pod \"2787023d-e18f-4684-a5a9-b7a3e47eb555\" (UID: \"2787023d-e18f-4684-a5a9-b7a3e47eb555\") "
Jan 14 01:12:35.165295 kubelet[4014]: I0114 01:12:35.165252 4014 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2787023d-e18f-4684-a5a9-b7a3e47eb555-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2787023d-e18f-4684-a5a9-b7a3e47eb555" (UID: "2787023d-e18f-4684-a5a9-b7a3e47eb555"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 14 01:12:35.167042 kubelet[4014]: I0114 01:12:35.166459 4014 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2787023d-e18f-4684-a5a9-b7a3e47eb555-kube-api-access-wbtxc" (OuterVolumeSpecName: "kube-api-access-wbtxc") pod "2787023d-e18f-4684-a5a9-b7a3e47eb555" (UID: "2787023d-e18f-4684-a5a9-b7a3e47eb555"). InnerVolumeSpecName "kube-api-access-wbtxc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 14 01:12:35.167042 kubelet[4014]: I0114 01:12:35.166639 4014 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2787023d-e18f-4684-a5a9-b7a3e47eb555-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2787023d-e18f-4684-a5a9-b7a3e47eb555" (UID: "2787023d-e18f-4684-a5a9-b7a3e47eb555"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 14 01:12:35.263031 kubelet[4014]: I0114 01:12:35.262995 4014 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbtxc\" (UniqueName: \"kubernetes.io/projected/2787023d-e18f-4684-a5a9-b7a3e47eb555-kube-api-access-wbtxc\") on node \"ci-4578.0.0-p-4dd79cf71d\" DevicePath \"\""
Jan 14 01:12:35.263031 kubelet[4014]: I0114 01:12:35.263031 4014 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2787023d-e18f-4684-a5a9-b7a3e47eb555-whisker-ca-bundle\") on node \"ci-4578.0.0-p-4dd79cf71d\" DevicePath \"\""
Jan 14 01:12:35.263031 kubelet[4014]: I0114 01:12:35.263041 4014 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2787023d-e18f-4684-a5a9-b7a3e47eb555-whisker-backend-key-pair\") on node \"ci-4578.0.0-p-4dd79cf71d\" DevicePath \"\""
Jan 14 01:12:35.345915 systemd[1]: var-lib-kubelet-pods-2787023d\x2de18f\x2d4684\x2da5a9\x2db7a3e47eb555-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwbtxc.mount: Deactivated successfully.
Jan 14 01:12:35.346325 systemd[1]: var-lib-kubelet-pods-2787023d\x2de18f\x2d4684\x2da5a9\x2db7a3e47eb555-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jan 14 01:12:35.443904 systemd[1]: Removed slice kubepods-besteffort-pod2787023d_e18f_4684_a5a9_b7a3e47eb555.slice - libcontainer container kubepods-besteffort-pod2787023d_e18f_4684_a5a9_b7a3e47eb555.slice.
Jan 14 01:12:35.484273 kubelet[4014]: I0114 01:12:35.483686 4014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t7zt4" podStartSLOduration=2.359455448 podStartE2EDuration="21.483667746s" podCreationTimestamp="2026-01-14 01:12:14 +0000 UTC" firstStartedPulling="2026-01-14 01:12:15.264997831 +0000 UTC m=+21.053622495" lastFinishedPulling="2026-01-14 01:12:34.389210136 +0000 UTC m=+40.177834793" observedRunningTime="2026-01-14 01:12:35.483117368 +0000 UTC m=+41.271742037" watchObservedRunningTime="2026-01-14 01:12:35.483667746 +0000 UTC m=+41.272292438"
Jan 14 01:12:35.783372 systemd[1]: Created slice kubepods-besteffort-pod4aa450fa_397e_4bd9_b82d_45d9b129db7d.slice - libcontainer container kubepods-besteffort-pod4aa450fa_397e_4bd9_b82d_45d9b129db7d.slice.
Jan 14 01:12:35.868646 kubelet[4014]: I0114 01:12:35.868603 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7xnn\" (UniqueName: \"kubernetes.io/projected/4aa450fa-397e-4bd9-b82d-45d9b129db7d-kube-api-access-f7xnn\") pod \"whisker-69869bddb6-9f5bh\" (UID: \"4aa450fa-397e-4bd9-b82d-45d9b129db7d\") " pod="calico-system/whisker-69869bddb6-9f5bh"
Jan 14 01:12:35.868803 kubelet[4014]: I0114 01:12:35.868666 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4aa450fa-397e-4bd9-b82d-45d9b129db7d-whisker-backend-key-pair\") pod \"whisker-69869bddb6-9f5bh\" (UID: \"4aa450fa-397e-4bd9-b82d-45d9b129db7d\") " pod="calico-system/whisker-69869bddb6-9f5bh"
Jan 14 01:12:35.868803 kubelet[4014]: I0114 01:12:35.868689 4014 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa450fa-397e-4bd9-b82d-45d9b129db7d-whisker-ca-bundle\") pod \"whisker-69869bddb6-9f5bh\" (UID: \"4aa450fa-397e-4bd9-b82d-45d9b129db7d\") " pod="calico-system/whisker-69869bddb6-9f5bh"
Jan 14 01:12:36.086164 containerd[2498]: time="2026-01-14T01:12:36.086047712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69869bddb6-9f5bh,Uid:4aa450fa-397e-4bd9-b82d-45d9b129db7d,Namespace:calico-system,Attempt:0,}"
Jan 14 01:12:36.206875 systemd-networkd[2117]: cali1a27a6a5e31: Link UP
Jan 14 01:12:36.207870 systemd-networkd[2117]: cali1a27a6a5e31: Gained carrier
Jan 14 01:12:36.255711 containerd[2498]: 2026-01-14 01:12:36.113 [INFO][5097] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 14 01:12:36.255711 containerd[2498]: 2026-01-14 01:12:36.121 [INFO][5097] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0 whisker-69869bddb6- calico-system 4aa450fa-397e-4bd9-b82d-45d9b129db7d 926 0 2026-01-14 01:12:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69869bddb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4578.0.0-p-4dd79cf71d whisker-69869bddb6-9f5bh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1a27a6a5e31 [] [] }} ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Namespace="calico-system" Pod="whisker-69869bddb6-9f5bh" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-"
Jan 14 01:12:36.255711 containerd[2498]: 2026-01-14 01:12:36.121 [INFO][5097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Namespace="calico-system" Pod="whisker-69869bddb6-9f5bh" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0"
Jan 14 01:12:36.255711 containerd[2498]: 2026-01-14 01:12:36.142 [INFO][5108] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" HandleID="k8s-pod-network.cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0"
Jan 14 01:12:36.256003 containerd[2498]: 2026-01-14 01:12:36.142 [INFO][5108] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" HandleID="k8s-pod-network.cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-4dd79cf71d", "pod":"whisker-69869bddb6-9f5bh", "timestamp":"2026-01-14 01:12:36.142330118 +0000 UTC"}, Hostname:"ci-4578.0.0-p-4dd79cf71d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 14 01:12:36.256003 containerd[2498]: 2026-01-14 01:12:36.142 [INFO][5108] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 14 01:12:36.256003 containerd[2498]: 2026-01-14 01:12:36.142 [INFO][5108] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 14 01:12:36.256003 containerd[2498]: 2026-01-14 01:12:36.142 [INFO][5108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-4dd79cf71d'
Jan 14 01:12:36.256003 containerd[2498]: 2026-01-14 01:12:36.147 [INFO][5108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" host="ci-4578.0.0-p-4dd79cf71d"
Jan 14 01:12:36.256003 containerd[2498]: 2026-01-14 01:12:36.150 [INFO][5108] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-4dd79cf71d"
Jan 14 01:12:36.256003 containerd[2498]: 2026-01-14 01:12:36.154 [INFO][5108] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d"
Jan 14 01:12:36.256003 containerd[2498]: 2026-01-14 01:12:36.155 [INFO][5108] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d"
Jan 14 01:12:36.256003 containerd[2498]: 2026-01-14 01:12:36.157 [INFO][5108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d"
Jan 14 01:12:36.256278 containerd[2498]: 2026-01-14 01:12:36.158 [INFO][5108] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" host="ci-4578.0.0-p-4dd79cf71d"
Jan 14 01:12:36.256278 containerd[2498]: 2026-01-14 01:12:36.159 [INFO][5108] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f
Jan 14 01:12:36.256278 containerd[2498]: 2026-01-14 01:12:36.164 [INFO][5108] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" host="ci-4578.0.0-p-4dd79cf71d"
Jan 14 01:12:36.256278 containerd[2498]: 2026-01-14 01:12:36.169 [INFO][5108] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.129/26] block=192.168.125.128/26 handle="k8s-pod-network.cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" host="ci-4578.0.0-p-4dd79cf71d"
Jan 14 01:12:36.256278 containerd[2498]: 2026-01-14 01:12:36.169 [INFO][5108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.129/26] handle="k8s-pod-network.cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" host="ci-4578.0.0-p-4dd79cf71d"
Jan 14 01:12:36.256278 containerd[2498]: 2026-01-14 01:12:36.169 [INFO][5108] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 14 01:12:36.256278 containerd[2498]: 2026-01-14 01:12:36.169 [INFO][5108] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.129/26] IPv6=[] ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" HandleID="k8s-pod-network.cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0"
Jan 14 01:12:36.256476 containerd[2498]: 2026-01-14 01:12:36.172 [INFO][5097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Namespace="calico-system" Pod="whisker-69869bddb6-9f5bh" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0", GenerateName:"whisker-69869bddb6-", Namespace:"calico-system", SelfLink:"", UID:"4aa450fa-397e-4bd9-b82d-45d9b129db7d", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69869bddb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"", Pod:"whisker-69869bddb6-9f5bh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1a27a6a5e31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 14 01:12:36.256476 containerd[2498]: 2026-01-14 01:12:36.172 [INFO][5097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.129/32] ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Namespace="calico-system" Pod="whisker-69869bddb6-9f5bh" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0"
Jan 14 01:12:36.256582 containerd[2498]: 2026-01-14 01:12:36.172 [INFO][5097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a27a6a5e31 ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Namespace="calico-system" Pod="whisker-69869bddb6-9f5bh" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0"
Jan 14 01:12:36.256582 containerd[2498]: 2026-01-14 01:12:36.208 [INFO][5097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Namespace="calico-system" Pod="whisker-69869bddb6-9f5bh" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0"
Jan 14 01:12:36.256655 containerd[2498]: 2026-01-14 01:12:36.209 [INFO][5097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Namespace="calico-system" Pod="whisker-69869bddb6-9f5bh" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0", GenerateName:"whisker-69869bddb6-", Namespace:"calico-system", SelfLink:"", UID:"4aa450fa-397e-4bd9-b82d-45d9b129db7d", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69869bddb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f", Pod:"whisker-69869bddb6-9f5bh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1a27a6a5e31", MAC:"16:b8:63:52:cf:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 14 01:12:36.256738 containerd[2498]: 2026-01-14 01:12:36.253 [INFO][5097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" Namespace="calico-system" Pod="whisker-69869bddb6-9f5bh" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-whisker--69869bddb6--9f5bh-eth0"
Jan 14 01:12:36.293359 containerd[2498]: time="2026-01-14T01:12:36.293316712Z" level=info msg="connecting to shim cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f" address="unix:///run/containerd/s/869f99ea7aa95a0d5b43746855948522acca107a3af4d92e775bba6142cec6e2" namespace=k8s.io protocol=ttrpc version=3
Jan 14 01:12:36.316161 systemd[1]: Started cri-containerd-cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f.scope - libcontainer container cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f.
Jan 14 01:12:36.320466 kubelet[4014]: I0114 01:12:36.320415 4014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2787023d-e18f-4684-a5a9-b7a3e47eb555" path="/var/lib/kubelet/pods/2787023d-e18f-4684-a5a9-b7a3e47eb555/volumes"
Jan 14 01:12:36.325000 audit: BPF prog-id=199 op=LOAD
Jan 14 01:12:36.329296 kernel: kauditd_printk_skb: 11 callbacks suppressed
Jan 14 01:12:36.329363 kernel: audit: type=1334 audit(1768353156.325:613): prog-id=199 op=LOAD
Jan 14 01:12:36.330802 kernel: audit: type=1334 audit(1768353156.328:614): prog-id=200 op=LOAD
Jan 14 01:12:36.328000 audit: BPF prog-id=200 op=LOAD
Jan 14 01:12:36.328000 audit[5142]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:36.336405 kernel: audit: type=1300 audit(1768353156.328:614): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:36.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233
Jan 14 01:12:36.348935 kernel: audit: type=1327 audit(1768353156.328:614): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233
Jan 14 01:12:36.349008 kernel: audit: type=1334 audit(1768353156.328:615): prog-id=200 op=UNLOAD
Jan 14 01:12:36.328000 audit: BPF prog-id=200 op=UNLOAD
Jan 14 01:12:36.358267 kernel: audit: type=1300 audit(1768353156.328:615): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:36.328000 audit[5142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:36.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233
Jan 14 01:12:36.368640 kernel: audit: type=1327 audit(1768353156.328:615): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233
Jan 14 01:12:36.368699 kernel: audit: type=1334 audit(1768353156.328:616): prog-id=201 op=LOAD
Jan 14 01:12:36.328000 audit: BPF prog-id=201 op=LOAD
Jan 14 01:12:36.328000 audit[5142]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:36.374364 kernel: audit: type=1300 audit(1768353156.328:616): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:12:36.383415 kernel: audit: type=1327 audit(1768353156.328:616): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233
Jan 14 01:12:36.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233
Jan 14 01:12:36.328000 audit: BPF prog-id=202 op=LOAD
Jan 14 01:12:36.328000 audit[5142]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233 Jan 14 01:12:36.328000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:12:36.328000 audit[5142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233 Jan 14 01:12:36.328000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:12:36.328000 audit[5142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233 Jan 14 01:12:36.328000 audit: BPF prog-id=203 op=LOAD Jan 14 01:12:36.328000 audit[5142]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5131 pid=5142 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6363383861366538623432336636346462373738353435643065376233 Jan 14 01:12:36.393167 containerd[2498]: time="2026-01-14T01:12:36.393088832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69869bddb6-9f5bh,Uid:4aa450fa-397e-4bd9-b82d-45d9b129db7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc88a6e8b423f64db778545d0e7b3c91e089d395ab3455b078aed1dba131f24f\"" Jan 14 01:12:36.394356 containerd[2498]: time="2026-01-14T01:12:36.394332826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:12:36.654351 containerd[2498]: time="2026-01-14T01:12:36.654201379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:36.657579 containerd[2498]: time="2026-01-14T01:12:36.657144741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:12:36.657579 containerd[2498]: time="2026-01-14T01:12:36.657169065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:36.657852 kubelet[4014]: E0114 01:12:36.657824 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 
01:12:36.658174 kubelet[4014]: E0114 01:12:36.657964 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:36.658218 kubelet[4014]: E0114 01:12:36.658136 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b08deebfb5504960b33ee107ab7f4f73,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f7xnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-69869bddb6-9f5bh_calico-system(4aa450fa-397e-4bd9-b82d-45d9b129db7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:36.662265 containerd[2498]: time="2026-01-14T01:12:36.662242813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:12:36.815000 audit: BPF prog-id=204 op=LOAD Jan 14 01:12:36.815000 audit[5266]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1a612dd0 a2=98 a3=1fffffffffffffff items=0 ppid=5218 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.815000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:36.815000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:12:36.815000 audit[5266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe1a612da0 a3=0 items=0 ppid=5218 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.815000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:36.815000 audit: BPF prog-id=205 op=LOAD Jan 14 01:12:36.815000 audit[5266]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 
a1=7ffe1a612cb0 a2=94 a3=3 items=0 ppid=5218 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.815000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:36.815000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:12:36.815000 audit[5266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe1a612cb0 a2=94 a3=3 items=0 ppid=5218 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.815000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:36.815000 audit: BPF prog-id=206 op=LOAD Jan 14 01:12:36.815000 audit[5266]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe1a612cf0 a2=94 a3=7ffe1a612ed0 items=0 ppid=5218 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.815000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:36.815000 audit: BPF prog-id=206 op=UNLOAD Jan 14 
01:12:36.815000 audit[5266]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe1a612cf0 a2=94 a3=7ffe1a612ed0 items=0 ppid=5218 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.815000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:12:36.817000 audit: BPF prog-id=207 op=LOAD Jan 14 01:12:36.817000 audit[5267]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd1305f0d0 a2=98 a3=3 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.817000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:12:36.817000 audit[5267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd1305f0a0 a3=0 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.817000 audit: BPF prog-id=208 op=LOAD Jan 14 01:12:36.817000 audit[5267]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd1305eec0 a2=94 a3=54428f items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:36.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.817000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:12:36.817000 audit[5267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd1305eec0 a2=94 a3=54428f items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.817000 audit: BPF prog-id=209 op=LOAD Jan 14 01:12:36.817000 audit[5267]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd1305eef0 a2=94 a3=2 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.817000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:12:36.817000 audit[5267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd1305eef0 a2=0 a3=2 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.817000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.939186 containerd[2498]: time="2026-01-14T01:12:36.939063648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:36.943293 containerd[2498]: time="2026-01-14T01:12:36.943196627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:12:36.943293 containerd[2498]: time="2026-01-14T01:12:36.943239123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:36.943569 kubelet[4014]: E0114 01:12:36.943537 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:36.943681 kubelet[4014]: E0114 01:12:36.943664 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:36.944150 kubelet[4014]: E0114 01:12:36.944117 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7xnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69869bddb6-9f5bh_calico-system(4aa450fa-397e-4bd9-b82d-45d9b129db7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:36.945537 kubelet[4014]: E0114 01:12:36.945479 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:12:36.971000 audit: BPF prog-id=210 op=LOAD Jan 14 01:12:36.971000 audit[5267]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd1305edb0 a2=94 a3=1 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.971000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.971000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:12:36.971000 audit[5267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd1305edb0 a2=94 a3=1 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.971000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.981000 audit: BPF prog-id=211 op=LOAD Jan 14 01:12:36.981000 audit[5267]: SYSCALL arch=c000003e syscall=321 
success=yes exit=5 a0=5 a1=7ffd1305eda0 a2=94 a3=4 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.981000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.981000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:12:36.981000 audit[5267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd1305eda0 a2=0 a3=4 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.981000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.982000 audit: BPF prog-id=212 op=LOAD Jan 14 01:12:36.982000 audit[5267]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd1305ec00 a2=94 a3=5 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.982000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.982000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:12:36.982000 audit[5267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd1305ec00 a2=0 a3=5 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.982000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.982000 audit: BPF prog-id=213 op=LOAD Jan 14 01:12:36.982000 audit[5267]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd1305ee20 a2=94 a3=6 items=0 ppid=5218 pid=5267 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.982000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.982000 audit: BPF prog-id=213 op=UNLOAD Jan 14 01:12:36.982000 audit[5267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd1305ee20 a2=0 a3=6 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.982000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.982000 audit: BPF prog-id=214 op=LOAD Jan 14 01:12:36.982000 audit[5267]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd1305e5d0 a2=94 a3=88 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.982000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.982000 audit: BPF prog-id=215 op=LOAD Jan 14 01:12:36.982000 audit[5267]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd1305e450 a2=94 a3=2 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.982000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.982000 audit: BPF prog-id=215 op=UNLOAD Jan 14 01:12:36.982000 audit[5267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd1305e480 a2=0 a3=7ffd1305e580 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.982000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.982000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:12:36.982000 audit[5267]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3369bd10 a2=0 a3=fc89f0cea4b9dac5 items=0 ppid=5218 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.982000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:12:36.990000 audit: BPF prog-id=216 op=LOAD Jan 14 01:12:36.990000 audit[5290]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffaffe6970 a2=98 a3=1999999999999999 items=0 ppid=5218 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.990000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:36.990000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:12:36.990000 audit[5290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffaffe6940 a3=0 items=0 ppid=5218 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.990000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:36.990000 audit: BPF prog-id=217 op=LOAD Jan 14 01:12:36.990000 audit[5290]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffaffe6850 a2=94 a3=ffff items=0 ppid=5218 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.990000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:36.990000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:12:36.990000 audit[5290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffaffe6850 a2=94 a3=ffff items=0 ppid=5218 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.990000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:36.990000 audit: BPF prog-id=218 op=LOAD Jan 14 01:12:36.990000 audit[5290]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffaffe6890 a2=94 a3=7fffaffe6a70 items=0 ppid=5218 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.990000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:36.990000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:12:36.990000 audit[5290]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffaffe6890 a2=94 a3=7fffaffe6a70 items=0 ppid=5218 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:36.990000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:12:37.094407 systemd-networkd[2117]: vxlan.calico: Link UP Jan 14 01:12:37.094414 systemd-networkd[2117]: vxlan.calico: Gained carrier Jan 14 01:12:37.113000 audit: BPF prog-id=219 op=LOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0fb068a0 a2=98 a3=0 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.113000 audit: BPF prog-id=219 op=UNLOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=8 a2=7fff0fb06870 a3=0 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.113000 audit: BPF prog-id=220 op=LOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0fb066b0 a2=94 a3=54428f items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.113000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff0fb066b0 a2=94 a3=54428f items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.113000 audit: BPF prog-id=221 op=LOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0fb066e0 a2=94 a3=2 items=0 
ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.113000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff0fb066e0 a2=0 a3=2 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.113000 audit: BPF prog-id=222 op=LOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff0fb06490 a2=94 a3=4 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.113000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff0fb06490 a2=94 a3=4 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.113000 audit: BPF prog-id=223 op=LOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff0fb06590 a2=94 a3=7fff0fb06710 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.113000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:12:37.113000 audit[5316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff0fb06590 a2=0 a3=7fff0fb06710 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.113000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.114000 audit: BPF prog-id=224 op=LOAD Jan 14 01:12:37.114000 audit[5316]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff0fb05cc0 a2=94 a3=2 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.114000 audit: BPF prog-id=224 op=UNLOAD Jan 14 01:12:37.114000 audit[5316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff0fb05cc0 a2=0 a3=2 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.114000 audit: BPF prog-id=225 op=LOAD Jan 14 01:12:37.114000 audit[5316]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff0fb05dc0 a2=94 a3=30 items=0 ppid=5218 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.114000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:12:37.125000 audit: BPF prog-id=226 op=LOAD Jan 14 01:12:37.125000 audit[5321]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc089ab140 a2=98 a3=0 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.125000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.125000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:12:37.125000 audit[5321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc089ab110 a3=0 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.125000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.125000 audit: BPF prog-id=227 op=LOAD Jan 14 01:12:37.125000 audit[5321]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc089aaf30 a2=94 a3=54428f items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.125000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.125000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:12:37.125000 audit[5321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc089aaf30 a2=94 a3=54428f items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.125000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.125000 audit: BPF prog-id=228 op=LOAD Jan 14 01:12:37.125000 audit[5321]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc089aaf60 a2=94 a3=2 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.125000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.125000 audit: BPF prog-id=228 op=UNLOAD Jan 14 01:12:37.125000 audit[5321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc089aaf60 a2=0 a3=2 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.125000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.253000 audit: BPF prog-id=229 op=LOAD Jan 14 01:12:37.253000 audit[5321]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc089aae20 a2=94 a3=1 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.253000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.253000 audit: BPF prog-id=229 op=UNLOAD Jan 14 01:12:37.253000 audit[5321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc089aae20 a2=94 a3=1 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.253000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.263000 audit: BPF prog-id=230 op=LOAD Jan 14 01:12:37.263000 audit[5321]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc089aae10 a2=94 a3=4 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.263000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.263000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:12:37.263000 audit[5321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc089aae10 a2=0 a3=4 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.263000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.264000 audit: BPF prog-id=231 op=LOAD Jan 14 01:12:37.264000 audit[5321]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc089aac70 a2=94 a3=5 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.264000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:12:37.264000 audit[5321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc089aac70 a2=0 a3=5 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.264000 audit: BPF prog-id=232 op=LOAD Jan 14 01:12:37.264000 audit[5321]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc089aae90 a2=94 a3=6 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.264000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.264000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:12:37.264000 audit[5321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc089aae90 a2=0 a3=6 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.264000 audit: BPF prog-id=233 op=LOAD Jan 14 01:12:37.264000 audit[5321]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc089aa640 a2=94 a3=88 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.264000 audit: BPF prog-id=234 op=LOAD Jan 14 01:12:37.264000 audit[5321]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc089aa4c0 a2=94 a3=2 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.264000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.264000 audit: BPF prog-id=234 op=UNLOAD Jan 14 01:12:37.264000 audit[5321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc089aa4f0 a2=0 a3=7ffc089aa5f0 items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.264000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.265000 audit: BPF prog-id=233 op=UNLOAD Jan 14 01:12:37.265000 audit[5321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=c934d10 a2=0 a3=20a5300cf4c5ea9c items=0 ppid=5218 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.265000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:12:37.268000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:12:37.268000 audit[5218]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000915280 a2=0 a3=0 items=0 ppid=5174 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.268000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:12:37.318440 containerd[2498]: 
time="2026-01-14T01:12:37.318407596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9cxc,Uid:95e43c77-2bc5-456f-aaea-ff54d5c19984,Namespace:kube-system,Attempt:0,}" Jan 14 01:12:37.388000 audit[5359]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:37.388000 audit[5359]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe8eaddaa0 a2=0 a3=7ffe8eadda8c items=0 ppid=5218 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.388000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:37.398000 audit[5361]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=5361 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:37.398000 audit[5361]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe6d8079c0 a2=0 a3=7ffe6d8079ac items=0 ppid=5218 pid=5361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.398000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:37.440076 systemd-networkd[2117]: cali372dd19b344: Link UP Jan 14 01:12:37.422000 audit[5364]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=5364 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:37.422000 audit[5364]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 
a0=3 a1=7fffec405750 a2=0 a3=7fffec40573c items=0 ppid=5218 pid=5364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.422000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:37.441718 systemd-networkd[2117]: cali372dd19b344: Gained carrier Jan 14 01:12:37.449378 kubelet[4014]: E0114 01:12:37.449339 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:12:37.467000 audit[5360]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=5360 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:37.467000 audit[5360]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd753f5e40 a2=0 a3=7ffd753f5e2c items=0 ppid=5218 pid=5360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.467000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:37.474728 containerd[2498]: 2026-01-14 01:12:37.366 [INFO][5336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0 coredns-674b8bbfcf- kube-system 95e43c77-2bc5-456f-aaea-ff54d5c19984 848 0 2026-01-14 01:11:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578.0.0-p-4dd79cf71d coredns-674b8bbfcf-t9cxc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali372dd19b344 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cxc" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-" Jan 14 01:12:37.474728 containerd[2498]: 2026-01-14 01:12:37.366 [INFO][5336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cxc" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" Jan 14 01:12:37.474728 containerd[2498]: 2026-01-14 01:12:37.406 [INFO][5350] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" HandleID="k8s-pod-network.95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" Jan 14 01:12:37.475221 containerd[2498]: 2026-01-14 01:12:37.406 
[INFO][5350] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" HandleID="k8s-pod-network.95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5890), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578.0.0-p-4dd79cf71d", "pod":"coredns-674b8bbfcf-t9cxc", "timestamp":"2026-01-14 01:12:37.406703815 +0000 UTC"}, Hostname:"ci-4578.0.0-p-4dd79cf71d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:37.475221 containerd[2498]: 2026-01-14 01:12:37.406 [INFO][5350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:37.475221 containerd[2498]: 2026-01-14 01:12:37.406 [INFO][5350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:12:37.475221 containerd[2498]: 2026-01-14 01:12:37.407 [INFO][5350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-4dd79cf71d' Jan 14 01:12:37.475221 containerd[2498]: 2026-01-14 01:12:37.411 [INFO][5350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:37.475221 containerd[2498]: 2026-01-14 01:12:37.414 [INFO][5350] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:37.475221 containerd[2498]: 2026-01-14 01:12:37.417 [INFO][5350] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:37.475221 containerd[2498]: 2026-01-14 01:12:37.419 [INFO][5350] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:37.475221 containerd[2498]: 2026-01-14 01:12:37.421 [INFO][5350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:37.475949 containerd[2498]: 2026-01-14 01:12:37.421 [INFO][5350] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:37.475949 containerd[2498]: 2026-01-14 01:12:37.423 [INFO][5350] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b Jan 14 01:12:37.475949 containerd[2498]: 2026-01-14 01:12:37.427 [INFO][5350] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:37.475949 containerd[2498]: 2026-01-14 01:12:37.435 [INFO][5350] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.125.130/26] block=192.168.125.128/26 handle="k8s-pod-network.95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:37.475949 containerd[2498]: 2026-01-14 01:12:37.435 [INFO][5350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.130/26] handle="k8s-pod-network.95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:37.475949 containerd[2498]: 2026-01-14 01:12:37.435 [INFO][5350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:12:37.475949 containerd[2498]: 2026-01-14 01:12:37.435 [INFO][5350] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.130/26] IPv6=[] ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" HandleID="k8s-pod-network.95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" Jan 14 01:12:37.476354 containerd[2498]: 2026-01-14 01:12:37.438 [INFO][5336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cxc" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95e43c77-2bc5-456f-aaea-ff54d5c19984", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"", Pod:"coredns-674b8bbfcf-t9cxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali372dd19b344", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:37.476354 containerd[2498]: 2026-01-14 01:12:37.438 [INFO][5336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.130/32] ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cxc" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" Jan 14 01:12:37.476354 containerd[2498]: 2026-01-14 01:12:37.438 [INFO][5336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali372dd19b344 ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cxc" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" Jan 14 01:12:37.476354 containerd[2498]: 2026-01-14 01:12:37.441 [INFO][5336] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cxc" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" Jan 14 01:12:37.476354 containerd[2498]: 2026-01-14 01:12:37.442 [INFO][5336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cxc" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"95e43c77-2bc5-456f-aaea-ff54d5c19984", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b", Pod:"coredns-674b8bbfcf-t9cxc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali372dd19b344", 
MAC:"fa:90:8f:67:b4:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:37.476354 containerd[2498]: 2026-01-14 01:12:37.472 [INFO][5336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cxc" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--t9cxc-eth0" Jan 14 01:12:37.491000 audit[5382]: NETFILTER_CFG table=filter:128 family=2 entries=42 op=nft_register_chain pid=5382 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:37.491000 audit[5382]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffe63743500 a2=0 a3=7ffe637434ec items=0 ppid=5218 pid=5382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.491000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:37.518935 containerd[2498]: time="2026-01-14T01:12:37.518429064Z" level=info msg="connecting to shim 95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b" address="unix:///run/containerd/s/d81a5355c6675c63830fb9fc55405bfc22da3151a608210e249e43649ed58496" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:37.541361 
systemd[1]: Started cri-containerd-95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b.scope - libcontainer container 95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b. Jan 14 01:12:37.555000 audit: BPF prog-id=235 op=LOAD Jan 14 01:12:37.555000 audit: BPF prog-id=236 op=LOAD Jan 14 01:12:37.555000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5392 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935363934613338643735363866306562306632653466623830373366 Jan 14 01:12:37.555000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:12:37.555000 audit[5403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5392 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935363934613338643735363866306562306632653466623830373366 Jan 14 01:12:37.555000 audit: BPF prog-id=237 op=LOAD Jan 14 01:12:37.555000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5392 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:37.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935363934613338643735363866306562306632653466623830373366 Jan 14 01:12:37.556000 audit: BPF prog-id=238 op=LOAD Jan 14 01:12:37.556000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5392 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935363934613338643735363866306562306632653466623830373366 Jan 14 01:12:37.556000 audit: BPF prog-id=238 op=UNLOAD Jan 14 01:12:37.556000 audit[5403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5392 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935363934613338643735363866306562306632653466623830373366 Jan 14 01:12:37.556000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:12:37.556000 audit[5403]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5392 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935363934613338643735363866306562306632653466623830373366 Jan 14 01:12:37.556000 audit: BPF prog-id=239 op=LOAD Jan 14 01:12:37.556000 audit[5403]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5392 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.556000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935363934613338643735363866306562306632653466623830373366 Jan 14 01:12:37.589852 containerd[2498]: time="2026-01-14T01:12:37.589814756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9cxc,Uid:95e43c77-2bc5-456f-aaea-ff54d5c19984,Namespace:kube-system,Attempt:0,} returns sandbox id \"95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b\"" Jan 14 01:12:37.598937 containerd[2498]: time="2026-01-14T01:12:37.598911209Z" level=info msg="CreateContainer within sandbox \"95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:12:37.617000 audit[5432]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=5432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:37.617000 audit[5432]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffefd12d320 a2=0 a3=7ffefd12d30c items=0 ppid=4120 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:37.622132 containerd[2498]: time="2026-01-14T01:12:37.621271929Z" level=info msg="Container 9eb7cf73f2c8098ba0ba5d896d613d132d2972b99b93232a3759d7b769442c79: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:37.621000 audit[5432]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=5432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:37.621000 audit[5432]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffefd12d320 a2=0 a3=0 items=0 ppid=4120 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:37.638096 containerd[2498]: time="2026-01-14T01:12:37.638070694Z" level=info msg="CreateContainer within sandbox \"95694a38d7568f0eb0f2e4fb8073f3f4eaa9999415ebd82e8e79256d6e05193b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9eb7cf73f2c8098ba0ba5d896d613d132d2972b99b93232a3759d7b769442c79\"" Jan 14 01:12:37.638424 containerd[2498]: time="2026-01-14T01:12:37.638401865Z" level=info msg="StartContainer for \"9eb7cf73f2c8098ba0ba5d896d613d132d2972b99b93232a3759d7b769442c79\"" Jan 14 01:12:37.639147 containerd[2498]: time="2026-01-14T01:12:37.639117154Z" level=info msg="connecting to shim 9eb7cf73f2c8098ba0ba5d896d613d132d2972b99b93232a3759d7b769442c79" 
address="unix:///run/containerd/s/d81a5355c6675c63830fb9fc55405bfc22da3151a608210e249e43649ed58496" protocol=ttrpc version=3 Jan 14 01:12:37.659150 systemd[1]: Started cri-containerd-9eb7cf73f2c8098ba0ba5d896d613d132d2972b99b93232a3759d7b769442c79.scope - libcontainer container 9eb7cf73f2c8098ba0ba5d896d613d132d2972b99b93232a3759d7b769442c79. Jan 14 01:12:37.667000 audit: BPF prog-id=240 op=LOAD Jan 14 01:12:37.667000 audit: BPF prog-id=241 op=LOAD Jan 14 01:12:37.667000 audit[5433]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5392 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623763663733663263383039386261306261356438393664363133 Jan 14 01:12:37.667000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:12:37.667000 audit[5433]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5392 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623763663733663263383039386261306261356438393664363133 Jan 14 01:12:37.667000 audit: BPF prog-id=242 op=LOAD Jan 14 01:12:37.667000 audit[5433]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5392 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623763663733663263383039386261306261356438393664363133 Jan 14 01:12:37.667000 audit: BPF prog-id=243 op=LOAD Jan 14 01:12:37.667000 audit[5433]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5392 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623763663733663263383039386261306261356438393664363133 Jan 14 01:12:37.667000 audit: BPF prog-id=243 op=UNLOAD Jan 14 01:12:37.667000 audit[5433]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5392 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623763663733663263383039386261306261356438393664363133 Jan 14 01:12:37.667000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:12:37.667000 audit[5433]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5392 pid=5433 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623763663733663263383039386261306261356438393664363133 Jan 14 01:12:37.668000 audit: BPF prog-id=244 op=LOAD Jan 14 01:12:37.668000 audit[5433]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5392 pid=5433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:37.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623763663733663263383039386261306261356438393664363133 Jan 14 01:12:37.688712 containerd[2498]: time="2026-01-14T01:12:37.688675164Z" level=info msg="StartContainer for \"9eb7cf73f2c8098ba0ba5d896d613d132d2972b99b93232a3759d7b769442c79\" returns successfully" Jan 14 01:12:38.118124 systemd-networkd[2117]: cali1a27a6a5e31: Gained IPv6LL Jan 14 01:12:38.318007 containerd[2498]: time="2026-01-14T01:12:38.317740818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8c97c7f-fb6hx,Uid:7c277774-5617-4094-89b3-d4c788250cae,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:12:38.412968 systemd-networkd[2117]: calie5f0f77acad: Link UP Jan 14 01:12:38.414032 systemd-networkd[2117]: calie5f0f77acad: Gained carrier Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.353 [INFO][5466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0 calico-apiserver-78d8c97c7f- calico-apiserver 7c277774-5617-4094-89b3-d4c788250cae 852 0 2026-01-14 01:12:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78d8c97c7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578.0.0-p-4dd79cf71d calico-apiserver-78d8c97c7f-fb6hx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5f0f77acad [] [] }} ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-fb6hx" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.353 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-fb6hx" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.375 [INFO][5477] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" HandleID="k8s-pod-network.48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.375 [INFO][5477] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" HandleID="k8s-pod-network.48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" 
Workload="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578.0.0-p-4dd79cf71d", "pod":"calico-apiserver-78d8c97c7f-fb6hx", "timestamp":"2026-01-14 01:12:38.375851145 +0000 UTC"}, Hostname:"ci-4578.0.0-p-4dd79cf71d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.376 [INFO][5477] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.376 [INFO][5477] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.376 [INFO][5477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-4dd79cf71d' Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.381 [INFO][5477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.384 [INFO][5477] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.387 [INFO][5477] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.390 [INFO][5477] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.391 [INFO][5477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 
host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.391 [INFO][5477] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.392 [INFO][5477] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.403 [INFO][5477] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.408 [INFO][5477] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.131/26] block=192.168.125.128/26 handle="k8s-pod-network.48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.408 [INFO][5477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.131/26] handle="k8s-pod-network.48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.408 [INFO][5477] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:12:38.428658 containerd[2498]: 2026-01-14 01:12:38.408 [INFO][5477] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.131/26] IPv6=[] ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" HandleID="k8s-pod-network.48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" Jan 14 01:12:38.429706 containerd[2498]: 2026-01-14 01:12:38.410 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-fb6hx" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0", GenerateName:"calico-apiserver-78d8c97c7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c277774-5617-4094-89b3-d4c788250cae", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8c97c7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"", Pod:"calico-apiserver-78d8c97c7f-fb6hx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5f0f77acad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:38.429706 containerd[2498]: 2026-01-14 01:12:38.410 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.131/32] ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-fb6hx" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" Jan 14 01:12:38.429706 containerd[2498]: 2026-01-14 01:12:38.410 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5f0f77acad ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-fb6hx" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" Jan 14 01:12:38.429706 containerd[2498]: 2026-01-14 01:12:38.414 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-fb6hx" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" Jan 14 01:12:38.429706 containerd[2498]: 2026-01-14 01:12:38.414 [INFO][5466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-fb6hx" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0", GenerateName:"calico-apiserver-78d8c97c7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c277774-5617-4094-89b3-d4c788250cae", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8c97c7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f", Pod:"calico-apiserver-78d8c97c7f-fb6hx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5f0f77acad", MAC:"66:32:0b:d1:4b:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:38.429706 containerd[2498]: 2026-01-14 01:12:38.425 [INFO][5466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-fb6hx" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--fb6hx-eth0" Jan 14 01:12:38.440000 audit[5491]: NETFILTER_CFG table=filter:131 family=2 entries=54 
op=nft_register_chain pid=5491 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:38.440000 audit[5491]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffd9cc95000 a2=0 a3=7ffd9cc94fec items=0 ppid=5218 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:38.440000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:38.461421 kubelet[4014]: I0114 01:12:38.460529 4014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t9cxc" podStartSLOduration=40.460492768 podStartE2EDuration="40.460492768s" podCreationTimestamp="2026-01-14 01:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:12:38.460359515 +0000 UTC m=+44.248984183" watchObservedRunningTime="2026-01-14 01:12:38.460492768 +0000 UTC m=+44.249117432" Jan 14 01:12:38.478794 containerd[2498]: time="2026-01-14T01:12:38.478715422Z" level=info msg="connecting to shim 48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f" address="unix:///run/containerd/s/38796c382a327f2a26c6ff53ecbf4bbf201597e8cac791f06035fe130b5a9854" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:38.502089 systemd-networkd[2117]: cali372dd19b344: Gained IPv6LL Jan 14 01:12:38.503152 systemd[1]: Started cri-containerd-48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f.scope - libcontainer container 48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f. 
Jan 14 01:12:38.518000 audit: BPF prog-id=245 op=LOAD Jan 14 01:12:38.518000 audit: BPF prog-id=246 op=LOAD Jan 14 01:12:38.518000 audit[5510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5500 pid=5510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:38.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636361353162633730333433333366613535373833663533356366 Jan 14 01:12:38.518000 audit: BPF prog-id=246 op=UNLOAD Jan 14 01:12:38.518000 audit[5510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5500 pid=5510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:38.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636361353162633730333433333366613535373833663533356366 Jan 14 01:12:38.518000 audit: BPF prog-id=247 op=LOAD Jan 14 01:12:38.518000 audit[5510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5500 pid=5510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:38.518000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636361353162633730333433333366613535373833663533356366 Jan 14 01:12:38.518000 audit: BPF prog-id=248 op=LOAD Jan 14 01:12:38.518000 audit[5510]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5500 pid=5510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:38.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636361353162633730333433333366613535373833663533356366 Jan 14 01:12:38.518000 audit: BPF prog-id=248 op=UNLOAD Jan 14 01:12:38.518000 audit[5510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5500 pid=5510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:38.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636361353162633730333433333366613535373833663533356366 Jan 14 01:12:38.519000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:12:38.519000 audit[5510]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5500 pid=5510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:38.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636361353162633730333433333366613535373833663533356366 Jan 14 01:12:38.519000 audit: BPF prog-id=249 op=LOAD Jan 14 01:12:38.519000 audit[5510]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5500 pid=5510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:38.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438636361353162633730333433333366613535373833663533356366 Jan 14 01:12:38.550900 containerd[2498]: time="2026-01-14T01:12:38.550860029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8c97c7f-fb6hx,Uid:7c277774-5617-4094-89b3-d4c788250cae,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"48cca51bc7034333fa55783f535cf7ea3857f56c978f8046f22356fcdc66d99f\"" Jan 14 01:12:38.552228 containerd[2498]: time="2026-01-14T01:12:38.552177742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:38.574000 audit[5538]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:38.574000 audit[5538]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc36052f60 a2=0 a3=7ffc36052f4c items=0 ppid=4120 pid=5538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:38.574000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:38.577000 audit[5538]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:38.577000 audit[5538]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc36052f60 a2=0 a3=0 items=0 ppid=4120 pid=5538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:38.577000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:38.826415 containerd[2498]: time="2026-01-14T01:12:38.826288516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:38.828976 containerd[2498]: time="2026-01-14T01:12:38.828942694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:38.828976 containerd[2498]: time="2026-01-14T01:12:38.828994827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:38.829197 kubelet[4014]: E0114 01:12:38.829159 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:38.829249 kubelet[4014]: E0114 01:12:38.829212 4014 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:38.829452 kubelet[4014]: E0114 01:12:38.829388 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkpqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78d8c97c7f-fb6hx_calico-apiserver(7c277774-5617-4094-89b3-d4c788250cae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:38.830531 kubelet[4014]: E0114 01:12:38.830492 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:12:39.014153 systemd-networkd[2117]: vxlan.calico: Gained IPv6LL Jan 14 01:12:39.318434 containerd[2498]: time="2026-01-14T01:12:39.318293207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-47d4s,Uid:71463554-7ede-47f5-b4f2-9e4bbfb9f8b1,Namespace:kube-system,Attempt:0,}" 
Jan 14 01:12:39.318434 containerd[2498]: time="2026-01-14T01:12:39.318296515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c84b9c95c-shkh8,Uid:486952cc-8944-4287-a101-bc04fbfa2173,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:39.452187 kubelet[4014]: E0114 01:12:39.451731 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:12:39.476796 systemd-networkd[2117]: calic1a1a2fbf70: Link UP Jan 14 01:12:39.477366 systemd-networkd[2117]: calic1a1a2fbf70: Gained carrier Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.382 [INFO][5542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0 coredns-674b8bbfcf- kube-system 71463554-7ede-47f5-b4f2-9e4bbfb9f8b1 849 0 2026-01-14 01:11:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578.0.0-p-4dd79cf71d coredns-674b8bbfcf-47d4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic1a1a2fbf70 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-47d4s" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 
01:12:39.382 [INFO][5542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-47d4s" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.409 [INFO][5571] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" HandleID="k8s-pod-network.fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.409 [INFO][5571] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" HandleID="k8s-pod-network.fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578.0.0-p-4dd79cf71d", "pod":"coredns-674b8bbfcf-47d4s", "timestamp":"2026-01-14 01:12:39.409886772 +0000 UTC"}, Hostname:"ci-4578.0.0-p-4dd79cf71d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.410 [INFO][5571] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.410 [INFO][5571] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.410 [INFO][5571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-4dd79cf71d' Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.414 [INFO][5571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.417 [INFO][5571] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.420 [INFO][5571] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.422 [INFO][5571] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.423 [INFO][5571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.423 [INFO][5571] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.424 [INFO][5571] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8 Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.431 [INFO][5571] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.465 [INFO][5571] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.125.132/26] block=192.168.125.128/26 handle="k8s-pod-network.fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.465 [INFO][5571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.132/26] handle="k8s-pod-network.fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.465 [INFO][5571] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:12:39.521863 containerd[2498]: 2026-01-14 01:12:39.465 [INFO][5571] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.132/26] IPv6=[] ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" HandleID="k8s-pod-network.fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" Jan 14 01:12:39.529817 containerd[2498]: 2026-01-14 01:12:39.471 [INFO][5542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-47d4s" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"71463554-7ede-47f5-b4f2-9e4bbfb9f8b1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"", Pod:"coredns-674b8bbfcf-47d4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1a1a2fbf70", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:39.529817 containerd[2498]: 2026-01-14 01:12:39.472 [INFO][5542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.132/32] ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-47d4s" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" Jan 14 01:12:39.529817 containerd[2498]: 2026-01-14 01:12:39.472 [INFO][5542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1a1a2fbf70 ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-47d4s" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" Jan 14 01:12:39.529817 containerd[2498]: 2026-01-14 01:12:39.478 [INFO][5542] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-47d4s" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" Jan 14 01:12:39.529817 containerd[2498]: 2026-01-14 01:12:39.478 [INFO][5542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-47d4s" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"71463554-7ede-47f5-b4f2-9e4bbfb9f8b1", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 11, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8", Pod:"coredns-674b8bbfcf-47d4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic1a1a2fbf70", 
MAC:"b2:ab:43:36:d5:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:39.529817 containerd[2498]: 2026-01-14 01:12:39.517 [INFO][5542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" Namespace="kube-system" Pod="coredns-674b8bbfcf-47d4s" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-coredns--674b8bbfcf--47d4s-eth0" Jan 14 01:12:39.536000 audit[5592]: NETFILTER_CFG table=filter:134 family=2 entries=17 op=nft_register_rule pid=5592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:39.536000 audit[5592]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8e4c6200 a2=0 a3=7ffc8e4c61ec items=0 ppid=4120 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:39.540000 audit[5592]: NETFILTER_CFG table=nat:135 family=2 entries=35 op=nft_register_chain pid=5592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:39.540000 audit[5592]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc8e4c6200 a2=0 a3=7ffc8e4c61ec items=0 ppid=4120 pid=5592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:39.552000 audit[5594]: NETFILTER_CFG table=filter:136 family=2 entries=40 op=nft_register_chain pid=5594 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:39.552000 audit[5594]: SYSCALL arch=c000003e syscall=46 success=yes exit=20344 a0=3 a1=7ffdec5ab1c0 a2=0 a3=7ffdec5ab1ac items=0 ppid=5218 pid=5594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.552000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:39.578559 containerd[2498]: time="2026-01-14T01:12:39.577856154Z" level=info msg="connecting to shim fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8" address="unix:///run/containerd/s/5ac4eac14c538b05ab7bf27d1f1bdca0dd49a9054f0e0d7e0bfe7a445e1a4ce5" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:39.603518 systemd-networkd[2117]: cali803ab337f8d: Link UP Jan 14 01:12:39.604197 systemd-networkd[2117]: cali803ab337f8d: Gained carrier Jan 14 01:12:39.617233 systemd[1]: Started cri-containerd-fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8.scope - libcontainer container fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8. 
Jan 14 01:12:39.624000 audit: BPF prog-id=250 op=LOAD Jan 14 01:12:39.624000 audit: BPF prog-id=251 op=LOAD Jan 14 01:12:39.624000 audit[5615]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5603 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623636353732323435613836366334346537353438373933623361 Jan 14 01:12:39.625000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:12:39.625000 audit[5615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5603 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623636353732323435613836366334346537353438373933623361 Jan 14 01:12:39.625000 audit: BPF prog-id=252 op=LOAD Jan 14 01:12:39.625000 audit[5615]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5603 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.625000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623636353732323435613836366334346537353438373933623361 Jan 14 01:12:39.625000 audit: BPF prog-id=253 op=LOAD Jan 14 01:12:39.625000 audit[5615]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5603 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623636353732323435613836366334346537353438373933623361 Jan 14 01:12:39.625000 audit: BPF prog-id=253 op=UNLOAD Jan 14 01:12:39.625000 audit[5615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5603 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623636353732323435613836366334346537353438373933623361 Jan 14 01:12:39.625000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:12:39.625000 audit[5615]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5603 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:39.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623636353732323435613836366334346537353438373933623361 Jan 14 01:12:39.625000 audit: BPF prog-id=254 op=LOAD Jan 14 01:12:39.625000 audit[5615]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5603 pid=5615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662623636353732323435613836366334346537353438373933623361 Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.382 [INFO][5546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0 calico-kube-controllers-7c84b9c95c- calico-system 486952cc-8944-4287-a101-bc04fbfa2173 851 0 2026-01-14 01:12:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c84b9c95c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4578.0.0-p-4dd79cf71d calico-kube-controllers-7c84b9c95c-shkh8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali803ab337f8d [] [] }} ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Namespace="calico-system" Pod="calico-kube-controllers-7c84b9c95c-shkh8" 
WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.382 [INFO][5546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Namespace="calico-system" Pod="calico-kube-controllers-7c84b9c95c-shkh8" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.409 [INFO][5570] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" HandleID="k8s-pod-network.a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.409 [INFO][5570] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" HandleID="k8s-pod-network.a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-4dd79cf71d", "pod":"calico-kube-controllers-7c84b9c95c-shkh8", "timestamp":"2026-01-14 01:12:39.409768137 +0000 UTC"}, Hostname:"ci-4578.0.0-p-4dd79cf71d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.410 [INFO][5570] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.465 [INFO][5570] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.465 [INFO][5570] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-4dd79cf71d' Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.521 [INFO][5570] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.560 [INFO][5570] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.563 [INFO][5570] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.570 [INFO][5570] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.572 [INFO][5570] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.572 [INFO][5570] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.573 [INFO][5570] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.582 [INFO][5570] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 
handle="k8s-pod-network.a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.594 [INFO][5570] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.133/26] block=192.168.125.128/26 handle="k8s-pod-network.a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.595 [INFO][5570] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.133/26] handle="k8s-pod-network.a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.595 [INFO][5570] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:12:39.659641 containerd[2498]: 2026-01-14 01:12:39.595 [INFO][5570] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.133/26] IPv6=[] ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" HandleID="k8s-pod-network.a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" Jan 14 01:12:39.660909 containerd[2498]: 2026-01-14 01:12:39.597 [INFO][5546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Namespace="calico-system" Pod="calico-kube-controllers-7c84b9c95c-shkh8" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0", GenerateName:"calico-kube-controllers-7c84b9c95c-", Namespace:"calico-system", SelfLink:"", UID:"486952cc-8944-4287-a101-bc04fbfa2173", 
ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c84b9c95c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"", Pod:"calico-kube-controllers-7c84b9c95c-shkh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali803ab337f8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:39.660909 containerd[2498]: 2026-01-14 01:12:39.597 [INFO][5546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.133/32] ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Namespace="calico-system" Pod="calico-kube-controllers-7c84b9c95c-shkh8" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" Jan 14 01:12:39.660909 containerd[2498]: 2026-01-14 01:12:39.597 [INFO][5546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali803ab337f8d ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Namespace="calico-system" Pod="calico-kube-controllers-7c84b9c95c-shkh8" 
WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" Jan 14 01:12:39.660909 containerd[2498]: 2026-01-14 01:12:39.608 [INFO][5546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Namespace="calico-system" Pod="calico-kube-controllers-7c84b9c95c-shkh8" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" Jan 14 01:12:39.660909 containerd[2498]: 2026-01-14 01:12:39.610 [INFO][5546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Namespace="calico-system" Pod="calico-kube-controllers-7c84b9c95c-shkh8" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0", GenerateName:"calico-kube-controllers-7c84b9c95c-", Namespace:"calico-system", SelfLink:"", UID:"486952cc-8944-4287-a101-bc04fbfa2173", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c84b9c95c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", 
ContainerID:"a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a", Pod:"calico-kube-controllers-7c84b9c95c-shkh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali803ab337f8d", MAC:"5e:4f:23:7a:9c:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:39.660909 containerd[2498]: 2026-01-14 01:12:39.656 [INFO][5546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" Namespace="calico-system" Pod="calico-kube-controllers-7c84b9c95c-shkh8" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--kube--controllers--7c84b9c95c--shkh8-eth0" Jan 14 01:12:39.661675 containerd[2498]: time="2026-01-14T01:12:39.661601737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-47d4s,Uid:71463554-7ede-47f5-b4f2-9e4bbfb9f8b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8\"" Jan 14 01:12:39.670795 containerd[2498]: time="2026-01-14T01:12:39.670765971Z" level=info msg="CreateContainer within sandbox \"fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:12:39.672000 audit[5649]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=5649 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:39.672000 audit[5649]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffed39f0bd0 a2=0 a3=7ffed39f0bbc items=0 ppid=5218 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.672000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:39.703305 containerd[2498]: time="2026-01-14T01:12:39.701172838Z" level=info msg="Container e85f1c4d0198f61f9c4d771480395f86f32ba484955c54918634d3f1b0cb0101: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:12:39.716744 containerd[2498]: time="2026-01-14T01:12:39.716718043Z" level=info msg="CreateContainer within sandbox \"fbb66572245a866c44e7548793b3afa732d476683a19465af7d1c2a6cb0805c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e85f1c4d0198f61f9c4d771480395f86f32ba484955c54918634d3f1b0cb0101\"" Jan 14 01:12:39.718084 containerd[2498]: time="2026-01-14T01:12:39.718057548Z" level=info msg="StartContainer for \"e85f1c4d0198f61f9c4d771480395f86f32ba484955c54918634d3f1b0cb0101\"" Jan 14 01:12:39.718201 containerd[2498]: time="2026-01-14T01:12:39.718181462Z" level=info msg="connecting to shim a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a" address="unix:///run/containerd/s/61e7af80186734df2d1f383f060da17dd717ec58cc32f7f34b12901034874c69" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:39.719043 containerd[2498]: time="2026-01-14T01:12:39.719015167Z" level=info msg="connecting to shim e85f1c4d0198f61f9c4d771480395f86f32ba484955c54918634d3f1b0cb0101" address="unix:///run/containerd/s/5ac4eac14c538b05ab7bf27d1f1bdca0dd49a9054f0e0d7e0bfe7a445e1a4ce5" protocol=ttrpc version=3 Jan 14 01:12:39.745151 systemd[1]: Started cri-containerd-a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a.scope - libcontainer container a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a. 
Jan 14 01:12:39.746277 systemd[1]: Started cri-containerd-e85f1c4d0198f61f9c4d771480395f86f32ba484955c54918634d3f1b0cb0101.scope - libcontainer container e85f1c4d0198f61f9c4d771480395f86f32ba484955c54918634d3f1b0cb0101. Jan 14 01:12:39.756000 audit: BPF prog-id=255 op=LOAD Jan 14 01:12:39.758000 audit: BPF prog-id=256 op=LOAD Jan 14 01:12:39.758000 audit[5670]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5658 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343462663331663166396537613431666161653636373938366238 Jan 14 01:12:39.758000 audit: BPF prog-id=256 op=UNLOAD Jan 14 01:12:39.758000 audit[5670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5658 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343462663331663166396537613431666161653636373938366238 Jan 14 01:12:39.758000 audit: BPF prog-id=257 op=LOAD Jan 14 01:12:39.758000 audit[5670]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5658 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:12:39.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343462663331663166396537613431666161653636373938366238 Jan 14 01:12:39.758000 audit: BPF prog-id=258 op=LOAD Jan 14 01:12:39.758000 audit[5670]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5658 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343462663331663166396537613431666161653636373938366238 Jan 14 01:12:39.758000 audit: BPF prog-id=258 op=UNLOAD Jan 14 01:12:39.758000 audit[5670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5658 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343462663331663166396537613431666161653636373938366238 Jan 14 01:12:39.758000 audit: BPF prog-id=257 op=UNLOAD Jan 14 01:12:39.758000 audit[5670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5658 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343462663331663166396537613431666161653636373938366238 Jan 14 01:12:39.758000 audit: BPF prog-id=259 op=LOAD Jan 14 01:12:39.758000 audit[5670]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5658 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134343462663331663166396537613431666161653636373938366238 Jan 14 01:12:39.761000 audit: BPF prog-id=260 op=LOAD Jan 14 01:12:39.761000 audit: BPF prog-id=261 op=LOAD Jan 14 01:12:39.761000 audit[5668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5603 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356631633464303139386636316639633464373731343830333935 Jan 14 01:12:39.761000 audit: BPF prog-id=261 op=UNLOAD Jan 14 01:12:39.761000 audit[5668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5603 pid=5668 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356631633464303139386636316639633464373731343830333935 Jan 14 01:12:39.761000 audit: BPF prog-id=262 op=LOAD Jan 14 01:12:39.761000 audit[5668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5603 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356631633464303139386636316639633464373731343830333935 Jan 14 01:12:39.761000 audit: BPF prog-id=263 op=LOAD Jan 14 01:12:39.761000 audit[5668]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5603 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356631633464303139386636316639633464373731343830333935 Jan 14 01:12:39.761000 audit: BPF prog-id=263 op=UNLOAD Jan 14 01:12:39.761000 audit[5668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=5603 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356631633464303139386636316639633464373731343830333935 Jan 14 01:12:39.761000 audit: BPF prog-id=262 op=UNLOAD Jan 14 01:12:39.761000 audit[5668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5603 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356631633464303139386636316639633464373731343830333935 Jan 14 01:12:39.761000 audit: BPF prog-id=264 op=LOAD Jan 14 01:12:39.761000 audit[5668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5603 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:39.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538356631633464303139386636316639633464373731343830333935 Jan 14 01:12:39.799657 containerd[2498]: time="2026-01-14T01:12:39.799395807Z" level=info msg="StartContainer for 
\"e85f1c4d0198f61f9c4d771480395f86f32ba484955c54918634d3f1b0cb0101\" returns successfully" Jan 14 01:12:39.806197 containerd[2498]: time="2026-01-14T01:12:39.806173403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c84b9c95c-shkh8,Uid:486952cc-8944-4287-a101-bc04fbfa2173,Namespace:calico-system,Attempt:0,} returns sandbox id \"a444bf31f1f9e7a41faae667986b8863911a8cfe63543ee99d47cae994f8145a\"" Jan 14 01:12:39.809256 containerd[2498]: time="2026-01-14T01:12:39.809154333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:12:39.974117 systemd-networkd[2117]: calie5f0f77acad: Gained IPv6LL Jan 14 01:12:40.071340 containerd[2498]: time="2026-01-14T01:12:40.071297194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:40.074456 containerd[2498]: time="2026-01-14T01:12:40.074418277Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:12:40.074528 containerd[2498]: time="2026-01-14T01:12:40.074499992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:40.074675 kubelet[4014]: E0114 01:12:40.074642 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:40.075149 kubelet[4014]: E0114 01:12:40.074688 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:40.075149 kubelet[4014]: E0114 01:12:40.074860 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znmt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c84b9c95c-shkh8_calico-system(486952cc-8944-4287-a101-bc04fbfa2173): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:40.076425 kubelet[4014]: E0114 01:12:40.076396 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:12:40.322453 containerd[2498]: time="2026-01-14T01:12:40.322227313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rz5pp,Uid:e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:40.323003 containerd[2498]: 
time="2026-01-14T01:12:40.322928340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8c97c7f-cl2ls,Uid:8612edc9-7707-465b-bd59-44c1e0af599e,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:12:40.457800 kubelet[4014]: E0114 01:12:40.457378 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:12:40.466825 kubelet[4014]: E0114 01:12:40.466689 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:12:40.515003 kubelet[4014]: I0114 01:12:40.514394 4014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-47d4s" podStartSLOduration=42.514378682 podStartE2EDuration="42.514378682s" podCreationTimestamp="2026-01-14 01:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:12:40.513705007 +0000 UTC m=+46.302329673" watchObservedRunningTime="2026-01-14 01:12:40.514378682 +0000 UTC m=+46.303003346" Jan 14 
01:12:41.254249 systemd-networkd[2117]: calic1a1a2fbf70: Gained IPv6LL Jan 14 01:12:41.446129 systemd-networkd[2117]: cali803ab337f8d: Gained IPv6LL Jan 14 01:12:41.467435 kubelet[4014]: E0114 01:12:41.467362 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:12:41.823502 systemd-networkd[2117]: calib4626f49739: Link UP Jan 14 01:12:41.823698 systemd-networkd[2117]: calib4626f49739: Gained carrier Jan 14 01:12:41.839244 kernel: kauditd_printk_skb: 372 callbacks suppressed Jan 14 01:12:41.839335 kernel: audit: type=1325 audit(1768353161.837:745): table=filter:138 family=2 entries=14 op=nft_register_rule pid=5772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:41.837000 audit[5772]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:41.837000 audit[5772]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3f18d210 a2=0 a3=7ffc3f18d1fc items=0 ppid=4120 pid=5772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.850441 kernel: audit: type=1300 audit(1768353161.837:745): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3f18d210 a2=0 a3=7ffc3f18d1fc items=0 ppid=4120 pid=5772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:41.855007 kernel: audit: type=1327 audit(1768353161.837:745): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:41.856000 audit[5772]: NETFILTER_CFG table=nat:139 family=2 entries=56 op=nft_register_chain pid=5772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:41.856000 audit[5772]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc3f18d210 a2=0 a3=7ffc3f18d1fc items=0 ppid=4120 pid=5772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.863728 kernel: audit: type=1325 audit(1768353161.856:746): table=nat:139 family=2 entries=56 op=nft_register_chain pid=5772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:41.863776 kernel: audit: type=1300 audit(1768353161.856:746): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc3f18d210 a2=0 a3=7ffc3f18d1fc items=0 ppid=4120 pid=5772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:41.870957 kernel: audit: type=1327 audit(1768353161.856:746): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 
01:12:40.408 [INFO][5729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0 calico-apiserver-78d8c97c7f- calico-apiserver 8612edc9-7707-465b-bd59-44c1e0af599e 856 0 2026-01-14 01:12:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78d8c97c7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578.0.0-p-4dd79cf71d calico-apiserver-78d8c97c7f-cl2ls eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4626f49739 [] [] }} ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-cl2ls" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.408 [INFO][5729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-cl2ls" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.444 [INFO][5755] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" HandleID="k8s-pod-network.489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.444 [INFO][5755] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" 
HandleID="k8s-pod-network.489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b73a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578.0.0-p-4dd79cf71d", "pod":"calico-apiserver-78d8c97c7f-cl2ls", "timestamp":"2026-01-14 01:12:40.444639588 +0000 UTC"}, Hostname:"ci-4578.0.0-p-4dd79cf71d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.445 [INFO][5755] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.445 [INFO][5755] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.445 [INFO][5755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-4dd79cf71d' Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.450 [INFO][5755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.457 [INFO][5755] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.479 [INFO][5755] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.482 [INFO][5755] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.486 [INFO][5755] ipam/ipam.go 235: Affinity 
is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.486 [INFO][5755] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.509 [INFO][5755] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:40.516 [INFO][5755] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:41.808 [INFO][5755] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.134/26] block=192.168.125.128/26 handle="k8s-pod-network.489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:41.808 [INFO][5755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.134/26] handle="k8s-pod-network.489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:41.809 [INFO][5755] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:12:41.889306 containerd[2498]: 2026-01-14 01:12:41.809 [INFO][5755] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.134/26] IPv6=[] ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" HandleID="k8s-pod-network.489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" Jan 14 01:12:41.890560 containerd[2498]: 2026-01-14 01:12:41.813 [INFO][5729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-cl2ls" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0", GenerateName:"calico-apiserver-78d8c97c7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8612edc9-7707-465b-bd59-44c1e0af599e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8c97c7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"", Pod:"calico-apiserver-78d8c97c7f-cl2ls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4626f49739", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:41.890560 containerd[2498]: 2026-01-14 01:12:41.813 [INFO][5729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.134/32] ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-cl2ls" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" Jan 14 01:12:41.890560 containerd[2498]: 2026-01-14 01:12:41.813 [INFO][5729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4626f49739 ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-cl2ls" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" Jan 14 01:12:41.890560 containerd[2498]: 2026-01-14 01:12:41.827 [INFO][5729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-cl2ls" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" Jan 14 01:12:41.890560 containerd[2498]: 2026-01-14 01:12:41.832 [INFO][5729] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-cl2ls" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0", GenerateName:"calico-apiserver-78d8c97c7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"8612edc9-7707-465b-bd59-44c1e0af599e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78d8c97c7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c", Pod:"calico-apiserver-78d8c97c7f-cl2ls", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4626f49739", MAC:"72:d4:1a:c4:1a:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:41.890560 containerd[2498]: 2026-01-14 01:12:41.886 [INFO][5729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" Namespace="calico-apiserver" Pod="calico-apiserver-78d8c97c7f-cl2ls" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-calico--apiserver--78d8c97c7f--cl2ls-eth0" Jan 14 01:12:41.901000 audit[5780]: NETFILTER_CFG table=filter:140 family=2 entries=53 
op=nft_register_chain pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:41.905991 kernel: audit: type=1325 audit(1768353161.901:747): table=filter:140 family=2 entries=53 op=nft_register_chain pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:41.901000 audit[5780]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7ffe19134f40 a2=0 a3=7ffe19134f2c items=0 ppid=5218 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.901000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:41.913958 kernel: audit: type=1300 audit(1768353161.901:747): arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7ffe19134f40 a2=0 a3=7ffe19134f2c items=0 ppid=5218 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.914014 kernel: audit: type=1327 audit(1768353161.901:747): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:41.931996 containerd[2498]: time="2026-01-14T01:12:41.931902239Z" level=info msg="connecting to shim 489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c" address="unix:///run/containerd/s/d0631880edc6eee8de349cf1f161e780e5233c21e842c342de66ee95b082867a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:41.960219 systemd[1]: Started cri-containerd-489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c.scope - libcontainer container 
489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c. Jan 14 01:12:41.986372 kernel: audit: type=1334 audit(1768353161.982:748): prog-id=265 op=LOAD Jan 14 01:12:41.982000 audit: BPF prog-id=265 op=LOAD Jan 14 01:12:41.984000 audit: BPF prog-id=266 op=LOAD Jan 14 01:12:41.984000 audit[5801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5790 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396636303935376639663830336437343133643335306332353137 Jan 14 01:12:41.984000 audit: BPF prog-id=266 op=UNLOAD Jan 14 01:12:41.984000 audit[5801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5790 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396636303935376639663830336437343133643335306332353137 Jan 14 01:12:41.984000 audit: BPF prog-id=267 op=LOAD Jan 14 01:12:41.984000 audit[5801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5790 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.984000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396636303935376639663830336437343133643335306332353137 Jan 14 01:12:41.984000 audit: BPF prog-id=268 op=LOAD Jan 14 01:12:41.984000 audit[5801]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5790 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396636303935376639663830336437343133643335306332353137 Jan 14 01:12:41.984000 audit: BPF prog-id=268 op=UNLOAD Jan 14 01:12:41.984000 audit[5801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5790 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396636303935376639663830336437343133643335306332353137 Jan 14 01:12:41.984000 audit: BPF prog-id=267 op=UNLOAD Jan 14 01:12:41.984000 audit[5801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5790 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:41.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396636303935376639663830336437343133643335306332353137 Jan 14 01:12:41.984000 audit: BPF prog-id=269 op=LOAD Jan 14 01:12:41.984000 audit[5801]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5790 pid=5801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:41.984000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438396636303935376639663830336437343133643335306332353137 Jan 14 01:12:42.018122 systemd-networkd[2117]: cali2dd647a646d: Link UP Jan 14 01:12:42.020274 systemd-networkd[2117]: cali2dd647a646d: Gained carrier Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:40.408 [INFO][5733] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0 goldmane-666569f655- calico-system e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8 855 0 2026-01-14 01:12:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4578.0.0-p-4dd79cf71d goldmane-666569f655-rz5pp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2dd647a646d [] [] }} ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Namespace="calico-system" Pod="goldmane-666569f655-rz5pp" 
WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:40.409 [INFO][5733] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Namespace="calico-system" Pod="goldmane-666569f655-rz5pp" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:40.443 [INFO][5761] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" HandleID="k8s-pod-network.e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:40.445 [INFO][5761] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" HandleID="k8s-pod-network.e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-4dd79cf71d", "pod":"goldmane-666569f655-rz5pp", "timestamp":"2026-01-14 01:12:40.44356683 +0000 UTC"}, Hostname:"ci-4578.0.0-p-4dd79cf71d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:40.445 [INFO][5761] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.808 [INFO][5761] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.808 [INFO][5761] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-4dd79cf71d' Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.886 [INFO][5761] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.965 [INFO][5761] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.969 [INFO][5761] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.971 [INFO][5761] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.973 [INFO][5761] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.973 [INFO][5761] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.974 [INFO][5761] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578 Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:41.980 [INFO][5761] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:42.006 [INFO][5761] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.125.135/26] block=192.168.125.128/26 handle="k8s-pod-network.e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:42.007 [INFO][5761] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.135/26] handle="k8s-pod-network.e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:42.007 [INFO][5761] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:12:42.042865 containerd[2498]: 2026-01-14 01:12:42.007 [INFO][5761] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.135/26] IPv6=[] ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" HandleID="k8s-pod-network.e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" Jan 14 01:12:42.044535 containerd[2498]: 2026-01-14 01:12:42.009 [INFO][5733] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Namespace="calico-system" Pod="goldmane-666569f655-rz5pp" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"", Pod:"goldmane-666569f655-rz5pp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2dd647a646d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:42.044535 containerd[2498]: 2026-01-14 01:12:42.009 [INFO][5733] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.135/32] ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Namespace="calico-system" Pod="goldmane-666569f655-rz5pp" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" Jan 14 01:12:42.044535 containerd[2498]: 2026-01-14 01:12:42.009 [INFO][5733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2dd647a646d ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Namespace="calico-system" Pod="goldmane-666569f655-rz5pp" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" Jan 14 01:12:42.044535 containerd[2498]: 2026-01-14 01:12:42.021 [INFO][5733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Namespace="calico-system" Pod="goldmane-666569f655-rz5pp" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" Jan 14 01:12:42.044535 containerd[2498]: 2026-01-14 01:12:42.021 [INFO][5733] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Namespace="calico-system" Pod="goldmane-666569f655-rz5pp" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578", Pod:"goldmane-666569f655-rz5pp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2dd647a646d", MAC:"66:b8:1b:00:57:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:42.044535 containerd[2498]: 2026-01-14 01:12:42.038 [INFO][5733] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" Namespace="calico-system" Pod="goldmane-666569f655-rz5pp" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-goldmane--666569f655--rz5pp-eth0" Jan 14 01:12:42.053555 containerd[2498]: time="2026-01-14T01:12:42.053513439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78d8c97c7f-cl2ls,Uid:8612edc9-7707-465b-bd59-44c1e0af599e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"489f60957f9f803d7413d350c2517077050cacf9a1d5521551b7e390632db49c\"" Jan 14 01:12:42.060078 containerd[2498]: time="2026-01-14T01:12:42.059733890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:42.078000 audit[5837]: NETFILTER_CFG table=filter:141 family=2 entries=64 op=nft_register_chain pid=5837 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:12:42.078000 audit[5837]: SYSCALL arch=c000003e syscall=46 success=yes exit=31120 a0=3 a1=7fff524ea830 a2=0 a3=7fff524ea81c items=0 ppid=5218 pid=5837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.078000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:42.102185 containerd[2498]: time="2026-01-14T01:12:42.102124175Z" level=info msg="connecting to shim e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578" address="unix:///run/containerd/s/2fc44a2a3100ec52ff5b2c2e7521a5e15706548d1cca371ad585c019760ec443" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:42.134287 systemd[1]: Started cri-containerd-e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578.scope - libcontainer container e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578. 
Jan 14 01:12:42.147000 audit: BPF prog-id=270 op=LOAD Jan 14 01:12:42.147000 audit: BPF prog-id=271 op=LOAD Jan 14 01:12:42.147000 audit[5859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5846 pid=5859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363039653935656330353435643763386238376164643463343034 Jan 14 01:12:42.148000 audit: BPF prog-id=271 op=UNLOAD Jan 14 01:12:42.148000 audit[5859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5846 pid=5859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363039653935656330353435643763386238376164643463343034 Jan 14 01:12:42.148000 audit: BPF prog-id=272 op=LOAD Jan 14 01:12:42.148000 audit[5859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5846 pid=5859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.148000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363039653935656330353435643763386238376164643463343034 Jan 14 01:12:42.148000 audit: BPF prog-id=273 op=LOAD Jan 14 01:12:42.148000 audit[5859]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5846 pid=5859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363039653935656330353435643763386238376164643463343034 Jan 14 01:12:42.148000 audit: BPF prog-id=273 op=UNLOAD Jan 14 01:12:42.148000 audit[5859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5846 pid=5859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363039653935656330353435643763386238376164643463343034 Jan 14 01:12:42.148000 audit: BPF prog-id=272 op=UNLOAD Jan 14 01:12:42.148000 audit[5859]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5846 pid=5859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:12:42.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363039653935656330353435643763386238376164643463343034 Jan 14 01:12:42.148000 audit: BPF prog-id=274 op=LOAD Jan 14 01:12:42.148000 audit[5859]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5846 pid=5859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538363039653935656330353435643763386238376164643463343034 Jan 14 01:12:42.199389 containerd[2498]: time="2026-01-14T01:12:42.199311511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-rz5pp,Uid:e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8609e95ec0545d7c8b87add4c4042781d1d21285c71d1b3da03b7fca5397578\"" Jan 14 01:12:42.318074 containerd[2498]: time="2026-01-14T01:12:42.318017496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96vlw,Uid:a24a17a9-73d6-4ce8-b8ef-5be32d60ba56,Namespace:calico-system,Attempt:0,}" Jan 14 01:12:42.326663 containerd[2498]: time="2026-01-14T01:12:42.326631037Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:42.334540 containerd[2498]: time="2026-01-14T01:12:42.334447636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:42.334540 containerd[2498]: time="2026-01-14T01:12:42.334520117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:42.346609 kubelet[4014]: E0114 01:12:42.346305 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:42.346609 kubelet[4014]: E0114 01:12:42.346347 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:42.346609 kubelet[4014]: E0114 01:12:42.346563 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-955v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78d8c97c7f-cl2ls_calico-apiserver(8612edc9-7707-465b-bd59-44c1e0af599e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:42.347745 containerd[2498]: time="2026-01-14T01:12:42.346925411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:12:42.347940 kubelet[4014]: E0114 01:12:42.347878 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:12:42.471816 systemd-networkd[2117]: calib983f7bf06c: Link UP Jan 14 01:12:42.473621 systemd-networkd[2117]: calib983f7bf06c: Gained carrier Jan 14 01:12:42.475735 kubelet[4014]: E0114 01:12:42.475512 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.361 [INFO][5889] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0 csi-node-driver- calico-system a24a17a9-73d6-4ce8-b8ef-5be32d60ba56 743 0 2026-01-14 01:12:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4578.0.0-p-4dd79cf71d csi-node-driver-96vlw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib983f7bf06c [] [] }} ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Namespace="calico-system" Pod="csi-node-driver-96vlw" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.361 [INFO][5889] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Namespace="calico-system" Pod="csi-node-driver-96vlw" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.384 [INFO][5897] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" HandleID="k8s-pod-network.317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.384 [INFO][5897] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" HandleID="k8s-pod-network.317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-4dd79cf71d", "pod":"csi-node-driver-96vlw", "timestamp":"2026-01-14 01:12:42.384607166 +0000 UTC"}, Hostname:"ci-4578.0.0-p-4dd79cf71d", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.384 [INFO][5897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.384 [INFO][5897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.384 [INFO][5897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-4dd79cf71d' Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.415 [INFO][5897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.419 [INFO][5897] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.424 [INFO][5897] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.426 [INFO][5897] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.427 [INFO][5897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.428 [INFO][5897] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.430 [INFO][5897] ipam/ipam.go 1780: 
Creating new handle: k8s-pod-network.317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.434 [INFO][5897] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.462 [INFO][5897] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.136/26] block=192.168.125.128/26 handle="k8s-pod-network.317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.462 [INFO][5897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.136/26] handle="k8s-pod-network.317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" host="ci-4578.0.0-p-4dd79cf71d" Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.462 [INFO][5897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:12:42.517950 containerd[2498]: 2026-01-14 01:12:42.462 [INFO][5897] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.136/26] IPv6=[] ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" HandleID="k8s-pod-network.317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Workload="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" Jan 14 01:12:42.518648 containerd[2498]: 2026-01-14 01:12:42.465 [INFO][5889] cni-plugin/k8s.go 418: Populated endpoint ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Namespace="calico-system" Pod="csi-node-driver-96vlw" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a24a17a9-73d6-4ce8-b8ef-5be32d60ba56", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"", Pod:"csi-node-driver-96vlw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib983f7bf06c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:42.518648 containerd[2498]: 2026-01-14 01:12:42.465 [INFO][5889] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.136/32] ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Namespace="calico-system" Pod="csi-node-driver-96vlw" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" Jan 14 01:12:42.518648 containerd[2498]: 2026-01-14 01:12:42.465 [INFO][5889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib983f7bf06c ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Namespace="calico-system" Pod="csi-node-driver-96vlw" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" Jan 14 01:12:42.518648 containerd[2498]: 2026-01-14 01:12:42.474 [INFO][5889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Namespace="calico-system" Pod="csi-node-driver-96vlw" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" Jan 14 01:12:42.518648 containerd[2498]: 2026-01-14 01:12:42.476 [INFO][5889] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Namespace="calico-system" Pod="csi-node-driver-96vlw" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"a24a17a9-73d6-4ce8-b8ef-5be32d60ba56", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-4dd79cf71d", ContainerID:"317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea", Pod:"csi-node-driver-96vlw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib983f7bf06c", MAC:"de:21:be:92:c1:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:12:42.518648 containerd[2498]: 2026-01-14 01:12:42.513 [INFO][5889] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" Namespace="calico-system" Pod="csi-node-driver-96vlw" WorkloadEndpoint="ci--4578.0.0--p--4dd79cf71d-k8s-csi--node--driver--96vlw-eth0" Jan 14 01:12:42.571005 containerd[2498]: time="2026-01-14T01:12:42.570364848Z" level=info msg="connecting to shim 317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea" address="unix:///run/containerd/s/f58d7ba9ab31aa9d1fe81593fb7f57c3ca5f03a4da07ce31e75f8cdee89d2b13" 
namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:12:42.610194 systemd[1]: Started cri-containerd-317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea.scope - libcontainer container 317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea. Jan 14 01:12:42.614863 containerd[2498]: time="2026-01-14T01:12:42.614833673Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:42.618485 containerd[2498]: time="2026-01-14T01:12:42.618451129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:12:42.618645 containerd[2498]: time="2026-01-14T01:12:42.618496157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:42.618828 kubelet[4014]: E0114 01:12:42.618780 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:42.618968 kubelet[4014]: E0114 01:12:42.618910 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:42.619739 kubelet[4014]: E0114 01:12:42.619691 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkzbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rz5pp_calico-system(e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:42.621953 kubelet[4014]: E0114 01:12:42.621785 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:12:42.629000 audit[5949]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:42.629000 audit[5949]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb1703500 a2=0 a3=7ffeb17034ec items=0 ppid=4120 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:42.634000 audit[5949]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:42.634000 audit[5949]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeb1703500 a2=0 a3=7ffeb17034ec items=0 ppid=4120 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.634000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:42.637000 audit: BPF prog-id=275 op=LOAD Jan 14 01:12:42.637000 audit: BPF prog-id=276 op=LOAD Jan 14 01:12:42.637000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5923 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376236366433616666643339353661323164383466616661623764 Jan 14 01:12:42.637000 audit: BPF prog-id=276 op=UNLOAD Jan 14 01:12:42.637000 audit[5934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5923 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376236366433616666643339353661323164383466616661623764 Jan 14 01:12:42.637000 audit: BPF prog-id=277 op=LOAD Jan 14 01:12:42.637000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5923 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376236366433616666643339353661323164383466616661623764 Jan 14 01:12:42.637000 audit: BPF prog-id=278 op=LOAD Jan 14 01:12:42.637000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5923 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376236366433616666643339353661323164383466616661623764 Jan 14 01:12:42.637000 audit: BPF prog-id=278 op=UNLOAD Jan 14 01:12:42.637000 audit[5934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5923 pid=5934 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376236366433616666643339353661323164383466616661623764 Jan 14 01:12:42.637000 audit: BPF prog-id=277 op=UNLOAD Jan 14 01:12:42.637000 audit[5934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5923 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376236366433616666643339353661323164383466616661623764 Jan 14 01:12:42.637000 audit: BPF prog-id=279 op=LOAD Jan 14 01:12:42.637000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5923 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331376236366433616666643339353661323164383466616661623764 Jan 14 01:12:42.650000 audit[5958]: NETFILTER_CFG table=filter:144 family=2 entries=66 op=nft_register_chain pid=5958 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 
14 01:12:42.650000 audit[5958]: SYSCALL arch=c000003e syscall=46 success=yes exit=29556 a0=3 a1=7fffe85a5490 a2=0 a3=7fffe85a547c items=0 ppid=5218 pid=5958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:42.650000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:12:42.659366 containerd[2498]: time="2026-01-14T01:12:42.659143238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-96vlw,Uid:a24a17a9-73d6-4ce8-b8ef-5be32d60ba56,Namespace:calico-system,Attempt:0,} returns sandbox id \"317b66d3affd3956a21d84fafab7d8f25da37c75280e0bba2003cea9f21f10ea\"" Jan 14 01:12:42.660406 containerd[2498]: time="2026-01-14T01:12:42.660390903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:12:42.925902 containerd[2498]: time="2026-01-14T01:12:42.925339499Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:42.928306 containerd[2498]: time="2026-01-14T01:12:42.928271948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:12:42.928392 containerd[2498]: time="2026-01-14T01:12:42.928368988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:42.928579 kubelet[4014]: E0114 01:12:42.928549 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:42.928631 kubelet[4014]: E0114 01:12:42.928594 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:42.928766 kubelet[4014]: E0114 01:12:42.928725 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:n
il,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:42.930934 containerd[2498]: time="2026-01-14T01:12:42.930909940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:12:43.220179 containerd[2498]: time="2026-01-14T01:12:43.220062967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:43.232114 containerd[2498]: time="2026-01-14T01:12:43.232023952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:12:43.232114 containerd[2498]: time="2026-01-14T01:12:43.232092464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:43.232392 kubelet[4014]: E0114 01:12:43.232288 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:43.232392 kubelet[4014]: E0114 01:12:43.232337 4014 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:43.232710 kubelet[4014]: E0114 01:12:43.232663 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:43.233870 kubelet[4014]: E0114 01:12:43.233804 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:12:43.302261 systemd-networkd[2117]: calib4626f49739: Gained IPv6LL Jan 14 01:12:43.478331 kubelet[4014]: E0114 01:12:43.478215 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:12:43.479850 kubelet[4014]: E0114 01:12:43.479019 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:12:43.481061 kubelet[4014]: E0114 01:12:43.481030 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:12:43.622123 systemd-networkd[2117]: calib983f7bf06c: Gained IPv6LL Jan 14 01:12:43.720000 audit[5965]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:43.720000 audit[5965]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc4083680 a2=0 
a3=7ffcc408366c items=0 ppid=4120 pid=5965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:43.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:43.727000 audit[5965]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:12:43.727000 audit[5965]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcc4083680 a2=0 a3=7ffcc408366c items=0 ppid=4120 pid=5965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:12:43.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:12:43.814356 systemd-networkd[2117]: cali2dd647a646d: Gained IPv6LL Jan 14 01:12:44.480433 kubelet[4014]: E0114 01:12:44.479630 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:12:51.318165 containerd[2498]: time="2026-01-14T01:12:51.318069489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:12:51.605757 containerd[2498]: time="2026-01-14T01:12:51.605624242Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:51.608280 containerd[2498]: time="2026-01-14T01:12:51.608237982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:12:51.608343 containerd[2498]: time="2026-01-14T01:12:51.608316394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:51.608499 kubelet[4014]: E0114 01:12:51.608441 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:51.608804 kubelet[4014]: E0114 01:12:51.608510 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:12:51.608804 kubelet[4014]: E0114 01:12:51.608638 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b08deebfb5504960b33ee107ab7f4f73,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f7xnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69869bddb6-9f5bh_calico-system(4aa450fa-397e-4bd9-b82d-45d9b129db7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:51.610831 containerd[2498]: time="2026-01-14T01:12:51.610800946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:12:51.871044 containerd[2498]: 
time="2026-01-14T01:12:51.870995833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:51.874559 containerd[2498]: time="2026-01-14T01:12:51.874532364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:12:51.874653 containerd[2498]: time="2026-01-14T01:12:51.874543972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:51.874829 kubelet[4014]: E0114 01:12:51.874788 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:51.874923 kubelet[4014]: E0114 01:12:51.874844 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:12:51.875306 kubelet[4014]: E0114 01:12:51.875002 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7xnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69869bddb6-9f5bh_calico-system(4aa450fa-397e-4bd9-b82d-45d9b129db7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:51.876602 kubelet[4014]: E0114 01:12:51.876559 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:12:52.320061 containerd[2498]: time="2026-01-14T01:12:52.319589917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:12:52.589778 containerd[2498]: time="2026-01-14T01:12:52.589651306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:52.597757 containerd[2498]: time="2026-01-14T01:12:52.597712949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:12:52.597838 containerd[2498]: time="2026-01-14T01:12:52.597716780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:52.598002 kubelet[4014]: E0114 01:12:52.597956 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:52.598060 kubelet[4014]: E0114 01:12:52.598015 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:12:52.598507 kubelet[4014]: E0114 01:12:52.598300 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znmt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c84b9c95c-shkh8_calico-system(486952cc-8944-4287-a101-bc04fbfa2173): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:52.598664 containerd[2498]: time="2026-01-14T01:12:52.598519301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:52.599530 kubelet[4014]: E0114 01:12:52.599492 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:12:52.855770 containerd[2498]: time="2026-01-14T01:12:52.855645132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:52.859663 containerd[2498]: time="2026-01-14T01:12:52.859613985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:52.859663 containerd[2498]: time="2026-01-14T01:12:52.859640314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:52.859856 kubelet[4014]: E0114 01:12:52.859821 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:52.860199 kubelet[4014]: E0114 01:12:52.859875 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:52.860199 kubelet[4014]: E0114 01:12:52.860036 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkpqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78d8c97c7f-fb6hx_calico-apiserver(7c277774-5617-4094-89b3-d4c788250cae): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:52.861991 kubelet[4014]: E0114 01:12:52.861520 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:12:54.319673 containerd[2498]: time="2026-01-14T01:12:54.319403921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:12:54.582188 containerd[2498]: time="2026-01-14T01:12:54.582060464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:54.584857 containerd[2498]: time="2026-01-14T01:12:54.584802021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:12:54.584950 containerd[2498]: time="2026-01-14T01:12:54.584884988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:54.585055 kubelet[4014]: E0114 01:12:54.585006 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:54.585350 kubelet[4014]: E0114 01:12:54.585063 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:12:54.585350 kubelet[4014]: E0114 01:12:54.585218 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkzbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rz5pp_calico-system(e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:54.586994 kubelet[4014]: E0114 01:12:54.586721 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:12:55.319618 containerd[2498]: time="2026-01-14T01:12:55.318816595Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:12:55.577859 containerd[2498]: time="2026-01-14T01:12:55.577733572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:55.582903 containerd[2498]: time="2026-01-14T01:12:55.582828793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:12:55.582903 containerd[2498]: time="2026-01-14T01:12:55.582879816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:55.583058 kubelet[4014]: E0114 01:12:55.583026 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:55.583102 kubelet[4014]: E0114 01:12:55.583087 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:12:55.583269 kubelet[4014]: E0114 01:12:55.583229 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-955v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78d8c97c7f-cl2ls_calico-apiserver(8612edc9-7707-465b-bd59-44c1e0af599e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:55.584736 kubelet[4014]: E0114 01:12:55.584697 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:12:57.319010 containerd[2498]: time="2026-01-14T01:12:57.318724514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:12:57.584106 containerd[2498]: time="2026-01-14T01:12:57.583940361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:57.587600 containerd[2498]: time="2026-01-14T01:12:57.587556477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:12:57.587697 containerd[2498]: time="2026-01-14T01:12:57.587636851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:57.587809 kubelet[4014]: E0114 01:12:57.587757 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:57.588170 kubelet[4014]: E0114 01:12:57.587803 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:12:57.588170 kubelet[4014]: E0114 01:12:57.587940 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:57.591135 containerd[2498]: time="2026-01-14T01:12:57.591101018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:12:57.876700 containerd[2498]: time="2026-01-14T01:12:57.876658223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:12:57.882988 containerd[2498]: time="2026-01-14T01:12:57.882940152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:12:57.883046 containerd[2498]: time="2026-01-14T01:12:57.882951063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:12:57.883241 kubelet[4014]: E0114 01:12:57.883209 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:57.883317 kubelet[4014]: E0114 01:12:57.883257 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:12:57.883523 kubelet[4014]: E0114 01:12:57.883407 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:12:57.884793 kubelet[4014]: E0114 01:12:57.884754 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:13:03.318368 kubelet[4014]: E0114 01:13:03.317988 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:13:06.319467 kubelet[4014]: E0114 01:13:06.319412 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:13:07.319103 kubelet[4014]: E0114 01:13:07.319059 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:13:07.320619 kubelet[4014]: E0114 01:13:07.320564 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:13:08.321447 kubelet[4014]: E0114 01:13:08.321335 
4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:13:08.323606 kubelet[4014]: E0114 01:13:08.323565 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:13:14.320480 containerd[2498]: time="2026-01-14T01:13:14.320359499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:13:14.588478 containerd[2498]: time="2026-01-14T01:13:14.588355937Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:14.591279 containerd[2498]: time="2026-01-14T01:13:14.591245255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:13:14.591347 containerd[2498]: time="2026-01-14T01:13:14.591322467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:14.591502 kubelet[4014]: E0114 01:13:14.591455 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:13:14.591826 kubelet[4014]: E0114 01:13:14.591512 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:13:14.591826 kubelet[4014]: E0114 01:13:14.591672 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znmt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c84b9c95c-shkh8_calico-system(486952cc-8944-4287-a101-bc04fbfa2173): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:14.593229 kubelet[4014]: E0114 01:13:14.593173 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:13:18.320997 containerd[2498]: time="2026-01-14T01:13:18.320795361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:13:18.585061 containerd[2498]: time="2026-01-14T01:13:18.584909909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:13:18.588126 containerd[2498]: time="2026-01-14T01:13:18.588076381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:13:18.588277 containerd[2498]: time="2026-01-14T01:13:18.588168718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:18.588341 kubelet[4014]: E0114 01:13:18.588297 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:13:18.588628 kubelet[4014]: E0114 01:13:18.588350 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:13:18.588628 kubelet[4014]: E0114 01:13:18.588509 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkzbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rz5pp_calico-system(e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:18.590034 kubelet[4014]: E0114 01:13:18.589994 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:13:20.330992 containerd[2498]: time="2026-01-14T01:13:20.330887216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:13:20.608143 containerd[2498]: time="2026-01-14T01:13:20.608015562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:20.610857 containerd[2498]: 
time="2026-01-14T01:13:20.610806047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:13:20.610857 containerd[2498]: time="2026-01-14T01:13:20.610833926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:20.611270 kubelet[4014]: E0114 01:13:20.611025 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:13:20.611270 kubelet[4014]: E0114 01:13:20.611063 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:13:20.611587 kubelet[4014]: E0114 01:13:20.611237 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b08deebfb5504960b33ee107ab7f4f73,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f7xnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69869bddb6-9f5bh_calico-system(4aa450fa-397e-4bd9-b82d-45d9b129db7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:20.612213 containerd[2498]: time="2026-01-14T01:13:20.612185427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:13:20.881639 containerd[2498]: 
time="2026-01-14T01:13:20.881585725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:20.884678 containerd[2498]: time="2026-01-14T01:13:20.884633966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:13:20.884790 containerd[2498]: time="2026-01-14T01:13:20.884738426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:20.885163 kubelet[4014]: E0114 01:13:20.885117 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:13:20.885254 kubelet[4014]: E0114 01:13:20.885177 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:13:20.885667 containerd[2498]: time="2026-01-14T01:13:20.885610450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:13:20.885898 kubelet[4014]: E0114 01:13:20.885837 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkpqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78d8c97c7f-fb6hx_calico-apiserver(7c277774-5617-4094-89b3-d4c788250cae): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:20.888176 kubelet[4014]: E0114 01:13:20.887061 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:13:21.156487 containerd[2498]: time="2026-01-14T01:13:21.156355068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:21.159524 containerd[2498]: time="2026-01-14T01:13:21.159492693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:13:21.159611 containerd[2498]: time="2026-01-14T01:13:21.159586431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:21.159744 kubelet[4014]: E0114 01:13:21.159700 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:13:21.159802 kubelet[4014]: E0114 01:13:21.159757 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:13:21.159922 kubelet[4014]: E0114 01:13:21.159891 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7xnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69869bddb6-9f5bh_calico-system(4aa450fa-397e-4bd9-b82d-45d9b129db7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:21.161181 kubelet[4014]: E0114 01:13:21.161121 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:13:21.321003 containerd[2498]: time="2026-01-14T01:13:21.320668831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:13:21.602210 containerd[2498]: time="2026-01-14T01:13:21.602082001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:21.606998 containerd[2498]: time="2026-01-14T01:13:21.606471829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:13:21.606998 containerd[2498]: time="2026-01-14T01:13:21.606571299Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:21.607308 kubelet[4014]: E0114 01:13:21.607258 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:13:21.607412 kubelet[4014]: E0114 01:13:21.607396 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:13:21.608084 kubelet[4014]: E0114 01:13:21.607788 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-955v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78d8c97c7f-cl2ls_calico-apiserver(8612edc9-7707-465b-bd59-44c1e0af599e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:21.609302 kubelet[4014]: E0114 01:13:21.609253 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:13:23.322747 containerd[2498]: time="2026-01-14T01:13:23.322495993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:13:23.587409 containerd[2498]: time="2026-01-14T01:13:23.587147538Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:23.592025 containerd[2498]: time="2026-01-14T01:13:23.591939528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:13:23.592229 containerd[2498]: time="2026-01-14T01:13:23.591986782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:23.592347 kubelet[4014]: E0114 01:13:23.592305 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:13:23.592647 kubelet[4014]: E0114 01:13:23.592355 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:13:23.593126 kubelet[4014]: E0114 01:13:23.593067 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:23.596394 containerd[2498]: time="2026-01-14T01:13:23.596359891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:13:23.924267 containerd[2498]: time="2026-01-14T01:13:23.924219254Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:23.927325 containerd[2498]: time="2026-01-14T01:13:23.927288176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:13:23.927396 containerd[2498]: time="2026-01-14T01:13:23.927375231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:23.927564 kubelet[4014]: E0114 01:13:23.927517 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:13:23.927637 kubelet[4014]: E0114 01:13:23.927578 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:13:23.927793 kubelet[4014]: E0114 01:13:23.927748 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:23.929090 kubelet[4014]: E0114 01:13:23.929030 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:13:29.318417 kubelet[4014]: E0114 01:13:29.318325 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:13:29.318417 kubelet[4014]: E0114 01:13:29.318348 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:13:33.321358 kubelet[4014]: E0114 01:13:33.321304 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:13:34.323750 kubelet[4014]: E0114 01:13:34.323165 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:13:35.319307 kubelet[4014]: E0114 01:13:35.319258 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:13:36.320468 kubelet[4014]: E0114 01:13:36.320422 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:13:43.319455 kubelet[4014]: E0114 01:13:43.319153 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" 
podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:13:43.319455 kubelet[4014]: E0114 01:13:43.319137 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:13:44.647179 systemd[1]: Started sshd@7-10.200.4.37:22-10.200.16.10:41826.service - OpenSSH per-connection server daemon (10.200.16.10:41826). Jan 14 01:13:44.655846 kernel: kauditd_printk_skb: 83 callbacks suppressed Jan 14 01:13:44.655937 kernel: audit: type=1130 audit(1768353224.648:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.37:22-10.200.16.10:41826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:44.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.37:22-10.200.16.10:41826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:13:45.207000 audit[6088]: USER_ACCT pid=6088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.218000 kernel: audit: type=1101 audit(1768353225.207:779): pid=6088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.219100 sshd[6088]: Accepted publickey for core from 10.200.16.10 port 41826 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:13:45.219371 sshd-session[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:45.217000 audit[6088]: CRED_ACQ pid=6088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.227769 kernel: audit: type=1103 audit(1768353225.217:780): pid=6088 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.233003 kernel: audit: type=1006 audit(1768353225.217:781): pid=6088 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:13:45.217000 audit[6088]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5b9e9900 a2=3 a3=0 items=0 ppid=1 pid=6088 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:45.217000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:45.246004 kernel: audit: type=1300 audit(1768353225.217:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd5b9e9900 a2=3 a3=0 items=0 ppid=1 pid=6088 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:45.246061 kernel: audit: type=1327 audit(1768353225.217:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:45.246610 systemd-logind[2463]: New session 11 of user core. Jan 14 01:13:45.254164 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 01:13:45.259000 audit[6088]: USER_START pid=6088 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.272007 kernel: audit: type=1105 audit(1768353225.259:782): pid=6088 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.271000 audit[6092]: CRED_ACQ pid=6092 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.282994 kernel: audit: type=1103 audit(1768353225.271:783): pid=6092 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.321157 kubelet[4014]: E0114 01:13:45.321045 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:13:45.592284 sshd[6092]: Connection closed by 10.200.16.10 port 41826 Jan 14 01:13:45.594481 sshd-session[6088]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:45.595000 audit[6088]: USER_END pid=6088 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.608014 kernel: audit: type=1106 audit(1768353225.595:784): pid=6088 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.606000 audit[6088]: CRED_DISP pid=6088 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.610546 systemd[1]: sshd@7-10.200.4.37:22-10.200.16.10:41826.service: Deactivated successfully. Jan 14 01:13:45.617381 kernel: audit: type=1104 audit(1768353225.606:785): pid=6088 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:45.615853 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:13:45.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.37:22-10.200.16.10:41826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:45.621113 systemd-logind[2463]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:13:45.622244 systemd-logind[2463]: Removed session 11. 
Jan 14 01:13:48.320225 kubelet[4014]: E0114 01:13:48.320172 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:13:48.322112 kubelet[4014]: E0114 01:13:48.320898 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:13:49.320702 kubelet[4014]: E0114 01:13:49.320107 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:13:50.720425 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:13:50.720556 kernel: audit: type=1130 audit(1768353230.708:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.37:22-10.200.16.10:38804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:50.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.37:22-10.200.16.10:38804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:50.709417 systemd[1]: Started sshd@8-10.200.4.37:22-10.200.16.10:38804.service - OpenSSH per-connection server daemon (10.200.16.10:38804). Jan 14 01:13:51.271000 audit[6105]: USER_ACCT pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.272995 sshd[6105]: Accepted publickey for core from 10.200.16.10 port 38804 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:13:51.275199 sshd-session[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:51.273000 audit[6105]: CRED_ACQ pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.283382 systemd-logind[2463]: New session 12 of user core. 
Jan 14 01:13:51.284623 kernel: audit: type=1101 audit(1768353231.271:788): pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.284671 kernel: audit: type=1103 audit(1768353231.273:789): pid=6105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.289177 kernel: audit: type=1006 audit(1768353231.273:790): pid=6105 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 01:13:51.289245 kernel: audit: type=1300 audit(1768353231.273:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1ae8cd80 a2=3 a3=0 items=0 ppid=1 pid=6105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:51.273000 audit[6105]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1ae8cd80 a2=3 a3=0 items=0 ppid=1 pid=6105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:51.289590 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 01:13:51.297233 kernel: audit: type=1327 audit(1768353231.273:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:51.273000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:51.293000 audit[6105]: USER_START pid=6105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.302139 kernel: audit: type=1105 audit(1768353231.293:791): pid=6105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.295000 audit[6109]: CRED_ACQ pid=6109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.309992 kernel: audit: type=1103 audit(1768353231.295:792): pid=6109 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.636490 sshd[6109]: Connection closed by 10.200.16.10 port 38804 Jan 14 01:13:51.637125 sshd-session[6105]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:51.637000 audit[6105]: USER_END pid=6105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.645004 kernel: audit: type=1106 audit(1768353231.637:793): pid=6105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.645313 systemd[1]: sshd@8-10.200.4.37:22-10.200.16.10:38804.service: Deactivated successfully. Jan 14 01:13:51.637000 audit[6105]: CRED_DISP pid=6105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.651301 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:13:51.653002 kernel: audit: type=1104 audit(1768353231.637:794): pid=6105 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:51.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.37:22-10.200.16.10:38804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:51.655126 systemd-logind[2463]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:13:51.656364 systemd-logind[2463]: Removed session 12. 
Jan 14 01:13:55.318907 containerd[2498]: time="2026-01-14T01:13:55.318873668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:13:55.605508 containerd[2498]: time="2026-01-14T01:13:55.605208071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:13:55.608991 containerd[2498]: time="2026-01-14T01:13:55.608057700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:13:55.608991 containerd[2498]: time="2026-01-14T01:13:55.608155767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:13:55.609134 kubelet[4014]: E0114 01:13:55.608327 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:13:55.609134 kubelet[4014]: E0114 01:13:55.608390 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:13:55.609134 kubelet[4014]: E0114 01:13:55.608563 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znmt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c84b9c95c-shkh8_calico-system(486952cc-8944-4287-a101-bc04fbfa2173): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:13:55.609929 kubelet[4014]: E0114 01:13:55.609892 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:13:56.761515 systemd[1]: Started sshd@9-10.200.4.37:22-10.200.16.10:38810.service - OpenSSH per-connection server daemon (10.200.16.10:38810). 
Jan 14 01:13:56.769433 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:13:56.769466 kernel: audit: type=1130 audit(1768353236.761:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.37:22-10.200.16.10:38810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:56.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.37:22-10.200.16.10:38810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:13:57.319000 audit[6124]: USER_ACCT pid=6124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.326678 sshd[6124]: Accepted publickey for core from 10.200.16.10 port 38810 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:13:57.327842 kernel: audit: type=1101 audit(1768353237.319:797): pid=6124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.329092 sshd-session[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:13:57.327000 audit[6124]: CRED_ACQ pid=6124 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.339461 kernel: audit: type=1103 audit(1768353237.327:798): pid=6124 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.343805 systemd-logind[2463]: New session 13 of user core. Jan 14 01:13:57.349009 kernel: audit: type=1006 audit(1768353237.327:799): pid=6124 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 01:13:57.327000 audit[6124]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3a19eab0 a2=3 a3=0 items=0 ppid=1 pid=6124 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:57.356651 kernel: audit: type=1300 audit(1768353237.327:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3a19eab0 a2=3 a3=0 items=0 ppid=1 pid=6124 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:13:57.356791 kernel: audit: type=1327 audit(1768353237.327:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:57.327000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:13:57.357058 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 01:13:57.370108 kernel: audit: type=1105 audit(1768353237.361:800): pid=6124 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.361000 audit[6124]: USER_START pid=6124 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.369000 audit[6128]: CRED_ACQ pid=6128 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.376148 kernel: audit: type=1103 audit(1768353237.369:801): pid=6128 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.685818 sshd[6128]: Connection closed by 10.200.16.10 port 38810 Jan 14 01:13:57.686997 sshd-session[6124]: pam_unix(sshd:session): session closed for user core Jan 14 01:13:57.687000 audit[6124]: USER_END pid=6124 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.691829 systemd[1]: sshd@9-10.200.4.37:22-10.200.16.10:38810.service: 
Deactivated successfully. Jan 14 01:13:57.694469 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:13:57.697868 systemd-logind[2463]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:13:57.698716 systemd-logind[2463]: Removed session 13. Jan 14 01:13:57.687000 audit[6124]: CRED_DISP pid=6124 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.707728 kernel: audit: type=1106 audit(1768353237.687:802): pid=6124 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.707808 kernel: audit: type=1104 audit(1768353237.687:803): pid=6124 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:13:57.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.37:22-10.200.16.10:38810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:13:58.320028 kubelet[4014]: E0114 01:13:58.319553 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:13:59.319114 kubelet[4014]: E0114 01:13:59.319059 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:14:00.322120 kubelet[4014]: E0114 01:14:00.322071 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:14:01.318159 kubelet[4014]: E0114 01:14:01.318079 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:14:01.318833 kubelet[4014]: E0114 01:14:01.318778 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:14:02.807566 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:14:02.807654 kernel: audit: type=1130 audit(1768353242.801:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.37:22-10.200.16.10:43716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:14:02.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.37:22-10.200.16.10:43716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:02.803346 systemd[1]: Started sshd@10-10.200.4.37:22-10.200.16.10:43716.service - OpenSSH per-connection server daemon (10.200.16.10:43716). Jan 14 01:14:03.358717 sshd[6151]: Accepted publickey for core from 10.200.16.10 port 43716 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:03.357000 audit[6151]: USER_ACCT pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.363131 sshd-session[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:14:03.361000 audit[6151]: CRED_ACQ pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.374032 kernel: audit: type=1101 audit(1768353243.357:806): pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.374104 kernel: audit: type=1103 audit(1768353243.361:807): pid=6151 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.379689 
kernel: audit: type=1006 audit(1768353243.361:808): pid=6151 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 01:14:03.384509 systemd-logind[2463]: New session 14 of user core. Jan 14 01:14:03.361000 audit[6151]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc49013f0 a2=3 a3=0 items=0 ppid=1 pid=6151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:03.394022 kernel: audit: type=1300 audit(1768353243.361:808): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc49013f0 a2=3 a3=0 items=0 ppid=1 pid=6151 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:03.361000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:03.397727 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 01:14:03.397993 kernel: audit: type=1327 audit(1768353243.361:808): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:03.402000 audit[6151]: USER_START pid=6151 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.415009 kernel: audit: type=1105 audit(1768353243.402:809): pid=6151 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.404000 audit[6155]: CRED_ACQ pid=6155 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.426989 kernel: audit: type=1103 audit(1768353243.404:810): pid=6155 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.742012 sshd[6155]: Connection closed by 10.200.16.10 port 43716 Jan 14 01:14:03.744309 sshd-session[6151]: pam_unix(sshd:session): session closed for user core Jan 14 01:14:03.745000 audit[6151]: USER_END pid=6151 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.752432 systemd[1]: sshd@10-10.200.4.37:22-10.200.16.10:43716.service: Deactivated successfully. Jan 14 01:14:03.755997 kernel: audit: type=1106 audit(1768353243.745:811): pid=6151 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.746000 audit[6151]: CRED_DISP pid=6151 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.758714 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:14:03.763575 systemd-logind[2463]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:14:03.764115 kernel: audit: type=1104 audit(1768353243.746:812): pid=6151 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:03.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.37:22-10.200.16.10:43716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:03.764762 systemd-logind[2463]: Removed session 14. 
Jan 14 01:14:08.325002 kubelet[4014]: E0114 01:14:08.324703 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:14:08.862393 systemd[1]: Started sshd@11-10.200.4.37:22-10.200.16.10:43728.service - OpenSSH per-connection server daemon (10.200.16.10:43728). Jan 14 01:14:08.869502 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:14:08.869556 kernel: audit: type=1130 audit(1768353248.861:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.37:22-10.200.16.10:43728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:08.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.37:22-10.200.16.10:43728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:14:09.409000 audit[6199]: USER_ACCT pid=6199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.414783 sshd-session[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:14:09.415524 sshd[6199]: Accepted publickey for core from 10.200.16.10 port 43728 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:09.412000 audit[6199]: CRED_ACQ pid=6199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.418801 kernel: audit: type=1101 audit(1768353249.409:815): pid=6199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.418873 kernel: audit: type=1103 audit(1768353249.412:816): pid=6199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.422941 kernel: audit: type=1006 audit(1768353249.412:817): pid=6199 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 01:14:09.412000 audit[6199]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc81fe3cb0 a2=3 a3=0 items=0 ppid=1 pid=6199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:09.427205 kernel: audit: type=1300 audit(1768353249.412:817): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc81fe3cb0 a2=3 a3=0 items=0 ppid=1 pid=6199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:09.412000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:09.431229 systemd-logind[2463]: New session 15 of user core. Jan 14 01:14:09.432025 kernel: audit: type=1327 audit(1768353249.412:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:09.436139 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 01:14:09.437000 audit[6199]: USER_START pid=6199 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.442000 audit[6203]: CRED_ACQ pid=6203 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.447342 kernel: audit: type=1105 audit(1768353249.437:818): pid=6199 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.447402 kernel: audit: type=1103 audit(1768353249.442:819): pid=6203 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.833821 sshd[6203]: Connection closed by 10.200.16.10 port 43728 Jan 14 01:14:09.835231 sshd-session[6199]: pam_unix(sshd:session): session closed for user core Jan 14 01:14:09.837000 audit[6199]: USER_END pid=6199 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.845997 kernel: audit: type=1106 audit(1768353249.837:820): pid=6199 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.843628 systemd-logind[2463]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:14:09.844379 systemd[1]: sshd@11-10.200.4.37:22-10.200.16.10:43728.service: Deactivated successfully. Jan 14 01:14:09.847828 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:14:09.837000 audit[6199]: CRED_DISP pid=6199 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.855157 systemd-logind[2463]: Removed session 15. 
Jan 14 01:14:09.857002 kernel: audit: type=1104 audit(1768353249.837:821): pid=6199 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:09.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.37:22-10.200.16.10:43728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:11.320771 containerd[2498]: time="2026-01-14T01:14:11.320724302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:14:11.584543 containerd[2498]: time="2026-01-14T01:14:11.584416913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:14:11.587843 containerd[2498]: time="2026-01-14T01:14:11.587790561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:14:11.587953 containerd[2498]: time="2026-01-14T01:14:11.587892399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:14:11.588211 kubelet[4014]: E0114 01:14:11.588160 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:14:11.589172 kubelet[4014]: E0114 01:14:11.588534 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:14:11.589411 kubelet[4014]: E0114 01:14:11.589359 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkzbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rz5pp_calico-system(e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:14:11.591017 kubelet[4014]: E0114 01:14:11.590983 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:14:13.319184 containerd[2498]: time="2026-01-14T01:14:13.319101609Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:14:13.585057 containerd[2498]: time="2026-01-14T01:14:13.584908188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:14:13.588363 containerd[2498]: time="2026-01-14T01:14:13.588324450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:14:13.588481 containerd[2498]: time="2026-01-14T01:14:13.588410436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:14:13.588608 kubelet[4014]: E0114 01:14:13.588575 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:14:13.588904 kubelet[4014]: E0114 01:14:13.588623 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:14:13.589471 kubelet[4014]: E0114 01:14:13.588925 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-955v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78d8c97c7f-cl2ls_calico-apiserver(8612edc9-7707-465b-bd59-44c1e0af599e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:14:13.589652 containerd[2498]: time="2026-01-14T01:14:13.589187204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:14:13.591066 kubelet[4014]: E0114 01:14:13.591002 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:14:13.846218 containerd[2498]: time="2026-01-14T01:14:13.846094552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:14:13.851301 containerd[2498]: time="2026-01-14T01:14:13.851268917Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:14:13.851385 containerd[2498]: time="2026-01-14T01:14:13.851346351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:14:13.851559 kubelet[4014]: E0114 01:14:13.851530 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:14:13.851641 kubelet[4014]: E0114 01:14:13.851572 4014 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:14:13.851999 kubelet[4014]: E0114 01:14:13.851722 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkpqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78d8c97c7f-fb6hx_calico-apiserver(7c277774-5617-4094-89b3-d4c788250cae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:14:13.853137 kubelet[4014]: E0114 01:14:13.853099 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:14:14.321016 containerd[2498]: time="2026-01-14T01:14:14.320955783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:14:14.583070 containerd[2498]: time="2026-01-14T01:14:14.582683390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:14:14.587701 containerd[2498]: time="2026-01-14T01:14:14.587654272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:14:14.587824 containerd[2498]: time="2026-01-14T01:14:14.587758604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:14:14.588996 kubelet[4014]: E0114 01:14:14.587903 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:14:14.588996 kubelet[4014]: E0114 01:14:14.587950 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:14:14.588996 kubelet[4014]: E0114 01:14:14.588096 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b08deebfb5504960b33ee107ab7f4f73,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f7xnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69869bddb6-9f5bh_calico-system(4aa450fa-397e-4bd9-b82d-45d9b129db7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:14:14.591917 containerd[2498]: time="2026-01-14T01:14:14.591886030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:14:14.868681 containerd[2498]: 
time="2026-01-14T01:14:14.868555022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:14:14.872552 containerd[2498]: time="2026-01-14T01:14:14.872423273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:14:14.872799 containerd[2498]: time="2026-01-14T01:14:14.872478065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:14:14.873174 kubelet[4014]: E0114 01:14:14.873112 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:14:14.873415 kubelet[4014]: E0114 01:14:14.873349 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:14:14.875208 kubelet[4014]: E0114 01:14:14.875153 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7xnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69869bddb6-9f5bh_calico-system(4aa450fa-397e-4bd9-b82d-45d9b129db7d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:14:14.877101 kubelet[4014]: E0114 01:14:14.877045 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:14:14.948808 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:14:14.948898 kernel: audit: type=1130 audit(1768353254.946:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.37:22-10.200.16.10:37376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:14.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.37:22-10.200.16.10:37376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:14.947265 systemd[1]: Started sshd@12-10.200.4.37:22-10.200.16.10:37376.service - OpenSSH per-connection server daemon (10.200.16.10:37376). 
Jan 14 01:14:15.318927 containerd[2498]: time="2026-01-14T01:14:15.318125348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:14:15.499000 audit[6230]: USER_ACCT pid=6230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.502000 audit[6230]: CRED_ACQ pid=6230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.504802 sshd-session[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:14:15.505959 sshd[6230]: Accepted publickey for core from 10.200.16.10 port 37376 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:15.506097 kernel: audit: type=1101 audit(1768353255.499:824): pid=6230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.506131 kernel: audit: type=1103 audit(1768353255.502:825): pid=6230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.502000 audit[6230]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5aaabff0 a2=3 a3=0 items=0 ppid=1 pid=6230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:14:15.525130 kernel: audit: type=1006 audit(1768353255.502:826): pid=6230 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:14:15.525199 kernel: audit: type=1300 audit(1768353255.502:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5aaabff0 a2=3 a3=0 items=0 ppid=1 pid=6230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:15.524829 systemd-logind[2463]: New session 16 of user core. Jan 14 01:14:15.529842 kernel: audit: type=1327 audit(1768353255.502:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:15.502000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:15.533441 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 01:14:15.535000 audit[6230]: USER_START pid=6230 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.543039 kernel: audit: type=1105 audit(1768353255.535:827): pid=6230 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.542000 audit[6234]: CRED_ACQ pid=6234 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 
terminal=ssh res=success' Jan 14 01:14:15.548021 kernel: audit: type=1103 audit(1768353255.542:828): pid=6234 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.586337 containerd[2498]: time="2026-01-14T01:14:15.586249020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:14:15.589249 containerd[2498]: time="2026-01-14T01:14:15.589211966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:14:15.589327 containerd[2498]: time="2026-01-14T01:14:15.589221522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:14:15.589562 kubelet[4014]: E0114 01:14:15.589520 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:14:15.589801 kubelet[4014]: E0114 01:14:15.589577 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:14:15.590029 kubelet[4014]: E0114 01:14:15.589992 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:14:15.591929 containerd[2498]: time="2026-01-14T01:14:15.591908837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:14:15.863395 containerd[2498]: time="2026-01-14T01:14:15.863180955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:14:15.866318 containerd[2498]: time="2026-01-14T01:14:15.866261419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:14:15.866518 containerd[2498]: time="2026-01-14T01:14:15.866427134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:14:15.866818 kubelet[4014]: E0114 01:14:15.866735 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:14:15.866818 kubelet[4014]: E0114 01:14:15.866800 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:14:15.867555 kubelet[4014]: E0114 01:14:15.867506 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:14:15.869033 kubelet[4014]: E0114 01:14:15.868986 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:14:15.889592 sshd[6234]: Connection closed by 10.200.16.10 port 37376 Jan 14 01:14:15.892105 sshd-session[6230]: pam_unix(sshd:session): session closed for user core Jan 14 01:14:15.893000 audit[6230]: USER_END pid=6230 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.893000 audit[6230]: CRED_DISP pid=6230 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.904324 kernel: audit: type=1106 audit(1768353255.893:829): pid=6230 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.904386 kernel: audit: type=1104 audit(1768353255.893:830): pid=6230 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:15.904822 systemd[1]: sshd@12-10.200.4.37:22-10.200.16.10:37376.service: Deactivated successfully. Jan 14 01:14:15.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.37:22-10.200.16.10:37376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:15.908914 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:14:15.913435 systemd-logind[2463]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:14:15.914783 systemd-logind[2463]: Removed session 16. Jan 14 01:14:21.009147 systemd[1]: Started sshd@13-10.200.4.37:22-10.200.16.10:60570.service - OpenSSH per-connection server daemon (10.200.16.10:60570). Jan 14 01:14:21.018125 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:14:21.018210 kernel: audit: type=1130 audit(1768353261.008:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.37:22-10.200.16.10:60570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:21.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.37:22-10.200.16.10:60570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:14:21.584000 audit[6248]: USER_ACCT pid=6248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.586132 sshd[6248]: Accepted publickey for core from 10.200.16.10 port 60570 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:21.588654 sshd-session[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:14:21.593045 kernel: audit: type=1101 audit(1768353261.584:833): pid=6248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.586000 audit[6248]: CRED_ACQ pid=6248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.596883 systemd-logind[2463]: New session 17 of user core. 
Jan 14 01:14:21.604828 kernel: audit: type=1103 audit(1768353261.586:834): pid=6248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.604905 kernel: audit: type=1006 audit(1768353261.586:835): pid=6248 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:14:21.586000 audit[6248]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc4b32830 a2=3 a3=0 items=0 ppid=1 pid=6248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:21.607188 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 01:14:21.612152 kernel: audit: type=1300 audit(1768353261.586:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc4b32830 a2=3 a3=0 items=0 ppid=1 pid=6248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:21.586000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:21.616696 kernel: audit: type=1327 audit(1768353261.586:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:21.614000 audit[6248]: USER_START pid=6248 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.624111 kernel: audit: type=1105 audit(1768353261.614:836): pid=6248 uid=0 auid=500 
ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.620000 audit[6252]: CRED_ACQ pid=6252 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.630764 kernel: audit: type=1103 audit(1768353261.620:837): pid=6252 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.951096 sshd[6252]: Connection closed by 10.200.16.10 port 60570 Jan 14 01:14:21.953423 sshd-session[6248]: pam_unix(sshd:session): session closed for user core Jan 14 01:14:21.953000 audit[6248]: USER_END pid=6248 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.960349 systemd[1]: sshd@13-10.200.4.37:22-10.200.16.10:60570.service: Deactivated successfully. Jan 14 01:14:21.963327 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 14 01:14:21.966024 kernel: audit: type=1106 audit(1768353261.953:838): pid=6248 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.953000 audit[6248]: CRED_DISP pid=6248 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:21.966447 systemd-logind[2463]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:14:21.967847 systemd-logind[2463]: Removed session 17. Jan 14 01:14:21.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.37:22-10.200.16.10:60570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:21.980996 kernel: audit: type=1104 audit(1768353261.953:839): pid=6248 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:22.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.37:22-10.200.16.10:60576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:22.067253 systemd[1]: Started sshd@14-10.200.4.37:22-10.200.16.10:60576.service - OpenSSH per-connection server daemon (10.200.16.10:60576). 
Jan 14 01:14:22.321001 kubelet[4014]: E0114 01:14:22.320868 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:14:22.627000 audit[6265]: USER_ACCT pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:22.630000 sshd[6265]: Accepted publickey for core from 10.200.16.10 port 60576 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:22.630000 audit[6265]: CRED_ACQ pid=6265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:22.630000 audit[6265]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf9319070 a2=3 a3=0 items=0 ppid=1 pid=6265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:22.630000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:22.632593 sshd-session[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:14:22.642196 systemd-logind[2463]: New session 18 of user core. 
Jan 14 01:14:22.648383 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 01:14:22.651000 audit[6265]: USER_START pid=6265 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:22.653000 audit[6269]: CRED_ACQ pid=6269 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:23.063025 sshd[6269]: Connection closed by 10.200.16.10 port 60576 Jan 14 01:14:23.064453 sshd-session[6265]: pam_unix(sshd:session): session closed for user core Jan 14 01:14:23.064000 audit[6265]: USER_END pid=6265 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:23.065000 audit[6265]: CRED_DISP pid=6265 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:23.068564 systemd[1]: sshd@14-10.200.4.37:22-10.200.16.10:60576.service: Deactivated successfully. Jan 14 01:14:23.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.37:22-10.200.16.10:60576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:14:23.069175 systemd-logind[2463]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:14:23.070962 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:14:23.072966 systemd-logind[2463]: Removed session 18. Jan 14 01:14:23.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.37:22-10.200.16.10:60592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:23.175539 systemd[1]: Started sshd@15-10.200.4.37:22-10.200.16.10:60592.service - OpenSSH per-connection server daemon (10.200.16.10:60592). Jan 14 01:14:23.720000 audit[6281]: USER_ACCT pid=6281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:23.721627 sshd[6281]: Accepted publickey for core from 10.200.16.10 port 60592 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:23.721000 audit[6281]: CRED_ACQ pid=6281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:23.721000 audit[6281]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9a43bf10 a2=3 a3=0 items=0 ppid=1 pid=6281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:23.721000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:23.723678 sshd-session[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 
01:14:23.733439 systemd-logind[2463]: New session 19 of user core. Jan 14 01:14:23.740179 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 01:14:23.742000 audit[6281]: USER_START pid=6281 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:23.745000 audit[6285]: CRED_ACQ pid=6285 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:24.130789 sshd[6285]: Connection closed by 10.200.16.10 port 60592 Jan 14 01:14:24.133195 sshd-session[6281]: pam_unix(sshd:session): session closed for user core Jan 14 01:14:24.133000 audit[6281]: USER_END pid=6281 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:24.134000 audit[6281]: CRED_DISP pid=6281 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:24.137890 systemd-logind[2463]: Session 19 logged out. Waiting for processes to exit. Jan 14 01:14:24.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.37:22-10.200.16.10:60592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:14:24.140957 systemd[1]: sshd@15-10.200.4.37:22-10.200.16.10:60592.service: Deactivated successfully. Jan 14 01:14:24.144292 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:14:24.147187 systemd-logind[2463]: Removed session 19. Jan 14 01:14:24.322217 kubelet[4014]: E0114 01:14:24.322180 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:14:25.317838 kubelet[4014]: E0114 01:14:25.317793 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:14:27.319780 kubelet[4014]: E0114 01:14:27.319726 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:14:28.322233 kubelet[4014]: E0114 01:14:28.322157 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:14:29.271915 systemd[1]: Started sshd@16-10.200.4.37:22-10.200.16.10:60608.service - OpenSSH per-connection server daemon (10.200.16.10:60608). Jan 14 01:14:29.278340 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:14:29.278374 kernel: audit: type=1130 audit(1768353269.271:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.37:22-10.200.16.10:60608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:14:29.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.37:22-10.200.16.10:60608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:29.319312 kubelet[4014]: E0114 01:14:29.319273 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:14:29.832000 audit[6300]: USER_ACCT pid=6300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:29.834813 sshd[6300]: Accepted publickey for core from 10.200.16.10 port 60608 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:29.838317 sshd-session[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:14:29.836000 audit[6300]: CRED_ACQ pid=6300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:29.841242 kernel: audit: type=1101 audit(1768353269.832:860): pid=6300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:29.841310 kernel: audit: type=1103 audit(1768353269.836:861): pid=6300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:29.836000 audit[6300]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd43736970 a2=3 a3=0 items=0 ppid=1 pid=6300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:29.849395 kernel: audit: type=1006 audit(1768353269.836:862): pid=6300 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 01:14:29.849447 kernel: audit: type=1300 audit(1768353269.836:862): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd43736970 a2=3 a3=0 items=0 ppid=1 pid=6300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:29.850506 systemd-logind[2463]: New session 20 of user core. Jan 14 01:14:29.836000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:29.852681 kernel: audit: type=1327 audit(1768353269.836:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:29.857154 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 01:14:29.859000 audit[6300]: USER_START pid=6300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:29.863000 audit[6306]: CRED_ACQ pid=6306 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:29.868761 kernel: audit: type=1105 audit(1768353269.859:863): pid=6300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:29.868810 kernel: audit: type=1103 audit(1768353269.863:864): pid=6306 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:30.194645 sshd[6306]: Connection closed by 10.200.16.10 port 60608 Jan 14 01:14:30.194917 sshd-session[6300]: pam_unix(sshd:session): session closed for user core Jan 14 01:14:30.195000 audit[6300]: USER_END pid=6300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:30.199818 systemd[1]: 
sshd@16-10.200.4.37:22-10.200.16.10:60608.service: Deactivated successfully. Jan 14 01:14:30.202306 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:14:30.205821 systemd-logind[2463]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:14:30.206995 kernel: audit: type=1106 audit(1768353270.195:865): pid=6300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:30.207075 systemd-logind[2463]: Removed session 20. Jan 14 01:14:30.213457 kernel: audit: type=1104 audit(1768353270.195:866): pid=6300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:30.195000 audit[6300]: CRED_DISP pid=6300 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:30.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.37:22-10.200.16.10:60608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:35.316578 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:14:35.316727 kernel: audit: type=1130 audit(1768353275.309:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.37:22-10.200.16.10:53356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:14:35.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.37:22-10.200.16.10:53356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:35.310264 systemd[1]: Started sshd@17-10.200.4.37:22-10.200.16.10:53356.service - OpenSSH per-connection server daemon (10.200.16.10:53356). Jan 14 01:14:35.322393 kubelet[4014]: E0114 01:14:35.321839 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:14:35.322393 kubelet[4014]: E0114 01:14:35.322069 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:14:35.870000 audit[6318]: USER_ACCT pid=6318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:35.872998 sshd[6318]: Accepted publickey for core from 
10.200.16.10 port 53356 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:35.876211 sshd-session[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:14:35.877010 kernel: audit: type=1101 audit(1768353275.870:869): pid=6318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:35.873000 audit[6318]: CRED_ACQ pid=6318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:35.886636 kernel: audit: type=1103 audit(1768353275.873:870): pid=6318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:35.886713 kernel: audit: type=1006 audit(1768353275.873:871): pid=6318 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 01:14:35.873000 audit[6318]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4e9a2f80 a2=3 a3=0 items=0 ppid=1 pid=6318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:35.894645 kernel: audit: type=1300 audit(1768353275.873:871): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4e9a2f80 a2=3 a3=0 items=0 ppid=1 pid=6318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:35.894708 kernel: audit: type=1327 audit(1768353275.873:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:35.873000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:35.895273 systemd-logind[2463]: New session 21 of user core. Jan 14 01:14:35.899243 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 01:14:35.901000 audit[6318]: USER_START pid=6318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:35.906000 audit[6347]: CRED_ACQ pid=6347 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:35.911067 kernel: audit: type=1105 audit(1768353275.901:872): pid=6318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:35.911123 kernel: audit: type=1103 audit(1768353275.906:873): pid=6347 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:36.232144 sshd[6347]: Connection closed by 10.200.16.10 port 53356 Jan 14 01:14:36.233130 sshd-session[6318]: pam_unix(sshd:session): session closed for user core Jan 14 
01:14:36.233000 audit[6318]: USER_END pid=6318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:36.240437 systemd[1]: sshd@17-10.200.4.37:22-10.200.16.10:53356.service: Deactivated successfully. Jan 14 01:14:36.247369 kernel: audit: type=1106 audit(1768353276.233:874): pid=6318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:36.247448 kernel: audit: type=1104 audit(1768353276.233:875): pid=6318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:36.233000 audit[6318]: CRED_DISP pid=6318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:36.245239 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 01:14:36.247021 systemd-logind[2463]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:14:36.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.37:22-10.200.16.10:53356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:36.247925 systemd-logind[2463]: Removed session 21. 
Jan 14 01:14:36.318836 kubelet[4014]: E0114 01:14:36.318797 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:14:39.319156 kubelet[4014]: E0114 01:14:39.319087 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:14:41.320704 kubelet[4014]: E0114 01:14:41.320581 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:14:41.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.37:22-10.200.16.10:46992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:41.343541 systemd[1]: Started sshd@18-10.200.4.37:22-10.200.16.10:46992.service - OpenSSH per-connection server daemon (10.200.16.10:46992). Jan 14 01:14:41.352374 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:14:41.352479 kernel: audit: type=1130 audit(1768353281.342:877): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.37:22-10.200.16.10:46992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:14:41.890578 sshd[6359]: Accepted publickey for core from 10.200.16.10 port 46992 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:41.889000 audit[6359]: USER_ACCT pid=6359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:41.893135 sshd-session[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:14:41.890000 audit[6359]: CRED_ACQ pid=6359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:41.905089 kernel: audit: type=1101 audit(1768353281.889:878): pid=6359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:41.905177 kernel: audit: type=1103 audit(1768353281.890:879): pid=6359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:41.911455 kernel: audit: type=1006 audit(1768353281.890:880): pid=6359 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:14:41.920960 kernel: audit: type=1300 audit(1768353281.890:880): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccfbca790 a2=3 a3=0 items=0 ppid=1 pid=6359 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:41.921044 kernel: audit: type=1327 audit(1768353281.890:880): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:41.890000 audit[6359]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccfbca790 a2=3 a3=0 items=0 ppid=1 pid=6359 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:41.890000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:14:41.923027 systemd-logind[2463]: New session 22 of user core. Jan 14 01:14:41.926262 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 01:14:41.928000 audit[6359]: USER_START pid=6359 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:41.937040 kernel: audit: type=1105 audit(1768353281.928:881): pid=6359 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:41.935000 audit[6363]: CRED_ACQ pid=6363 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:41.944005 kernel: audit: type=1103 audit(1768353281.935:882): pid=6363 uid=0 auid=500 ses=22 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:42.269000 audit[6359]: USER_END pid=6359 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:42.269388 sshd-session[6359]: pam_unix(sshd:session): session closed for user core Jan 14 01:14:42.277874 sshd[6363]: Connection closed by 10.200.16.10 port 46992 Jan 14 01:14:42.278009 kernel: audit: type=1106 audit(1768353282.269:883): pid=6359 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:42.277508 systemd-logind[2463]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:14:42.278180 systemd[1]: sshd@18-10.200.4.37:22-10.200.16.10:46992.service: Deactivated successfully. Jan 14 01:14:42.270000 audit[6359]: CRED_DISP pid=6359 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:42.283807 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:14:42.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.37:22-10.200.16.10:46992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:14:42.286009 kernel: audit: type=1104 audit(1768353282.270:884): pid=6359 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:42.286999 systemd-logind[2463]: Removed session 22. Jan 14 01:14:42.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.37:22-10.200.16.10:47008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:14:42.389068 systemd[1]: Started sshd@19-10.200.4.37:22-10.200.16.10:47008.service - OpenSSH per-connection server daemon (10.200.16.10:47008). Jan 14 01:14:42.939000 audit[6375]: USER_ACCT pid=6375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:42.940706 sshd[6375]: Accepted publickey for core from 10.200.16.10 port 47008 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:14:42.940000 audit[6375]: CRED_ACQ pid=6375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:42.940000 audit[6375]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc90131d0 a2=3 a3=0 items=0 ppid=1 pid=6375 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:14:42.940000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 
01:14:42.942398 sshd-session[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:14:42.946924 systemd-logind[2463]: New session 23 of user core. Jan 14 01:14:42.953160 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 01:14:42.955000 audit[6375]: USER_START pid=6375 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:42.958000 audit[6379]: CRED_ACQ pid=6379 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:43.489442 sshd[6379]: Connection closed by 10.200.16.10 port 47008 Jan 14 01:14:43.489724 sshd-session[6375]: pam_unix(sshd:session): session closed for user core Jan 14 01:14:43.492000 audit[6375]: USER_END pid=6375 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:43.492000 audit[6375]: CRED_DISP pid=6375 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:43.496150 systemd[1]: sshd@19-10.200.4.37:22-10.200.16.10:47008.service: Deactivated successfully. 
Jan 14 01:14:43.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.37:22-10.200.16.10:47008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:43.499819 systemd[1]: session-23.scope: Deactivated successfully.
Jan 14 01:14:43.502035 systemd-logind[2463]: Session 23 logged out. Waiting for processes to exit.
Jan 14 01:14:43.503819 systemd-logind[2463]: Removed session 23.
Jan 14 01:14:43.606508 systemd[1]: Started sshd@20-10.200.4.37:22-10.200.16.10:47022.service - OpenSSH per-connection server daemon (10.200.16.10:47022).
Jan 14 01:14:43.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.37:22-10.200.16.10:47022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:44.161000 audit[6389]: USER_ACCT pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:44.162864 sshd[6389]: Accepted publickey for core from 10.200.16.10 port 47022 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo
Jan 14 01:14:44.162000 audit[6389]: CRED_ACQ pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:44.162000 audit[6389]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2613a710 a2=3 a3=0 items=0 ppid=1 pid=6389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:44.162000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:14:44.164384 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:14:44.169188 systemd-logind[2463]: New session 24 of user core.
Jan 14 01:14:44.172162 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 14 01:14:44.173000 audit[6389]: USER_START pid=6389 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:44.175000 audit[6393]: CRED_ACQ pid=6393 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:44.320658 kubelet[4014]: E0114 01:14:44.320611 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae"
Jan 14 01:14:45.025000 audit[6403]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=6403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:14:45.025000 audit[6403]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe76292970 a2=0 a3=7ffe7629295c items=0 ppid=4120 pid=6403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:45.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:14:45.030000 audit[6403]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:14:45.030000 audit[6403]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe76292970 a2=0 a3=0 items=0 ppid=4120 pid=6403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:45.030000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:14:45.043000 audit[6405]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=6405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:14:45.043000 audit[6405]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe2b8a1390 a2=0 a3=7ffe2b8a137c items=0 ppid=4120 pid=6405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:45.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:14:45.048000 audit[6405]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=6405 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jan 14 01:14:45.048000 audit[6405]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe2b8a1390 a2=0 a3=0 items=0 ppid=4120 pid=6405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:45.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jan 14 01:14:45.158862 sshd[6393]: Connection closed by 10.200.16.10 port 47022
Jan 14 01:14:45.161185 sshd-session[6389]: pam_unix(sshd:session): session closed for user core
Jan 14 01:14:45.162000 audit[6389]: USER_END pid=6389 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:45.162000 audit[6389]: CRED_DISP pid=6389 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:45.166370 systemd[1]: sshd@20-10.200.4.37:22-10.200.16.10:47022.service: Deactivated successfully.
Jan 14 01:14:45.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.37:22-10.200.16.10:47022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:45.169741 systemd[1]: session-24.scope: Deactivated successfully.
Jan 14 01:14:45.173206 systemd-logind[2463]: Session 24 logged out. Waiting for processes to exit.
Jan 14 01:14:45.174613 systemd-logind[2463]: Removed session 24.
Jan 14 01:14:45.272881 systemd[1]: Started sshd@21-10.200.4.37:22-10.200.16.10:47032.service - OpenSSH per-connection server daemon (10.200.16.10:47032).
Jan 14 01:14:45.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.37:22-10.200.16.10:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:45.827000 audit[6410]: USER_ACCT pid=6410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:45.830063 sshd[6410]: Accepted publickey for core from 10.200.16.10 port 47032 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo
Jan 14 01:14:45.828000 audit[6410]: CRED_ACQ pid=6410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:45.829000 audit[6410]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb7ae4710 a2=3 a3=0 items=0 ppid=1 pid=6410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:45.829000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:14:45.831754 sshd-session[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:14:45.836702 systemd-logind[2463]: New session 25 of user core.
Jan 14 01:14:45.841366 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 14 01:14:45.841000 audit[6410]: USER_START pid=6410 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:45.843000 audit[6414]: CRED_ACQ pid=6414 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:46.297677 sshd[6414]: Connection closed by 10.200.16.10 port 47032
Jan 14 01:14:46.299144 sshd-session[6410]: pam_unix(sshd:session): session closed for user core
Jan 14 01:14:46.299000 audit[6410]: USER_END pid=6410 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:46.299000 audit[6410]: CRED_DISP pid=6410 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:46.302795 systemd[1]: sshd@21-10.200.4.37:22-10.200.16.10:47032.service: Deactivated successfully.
Jan 14 01:14:46.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.37:22-10.200.16.10:47032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:46.305689 systemd[1]: session-25.scope: Deactivated successfully.
Jan 14 01:14:46.307357 systemd-logind[2463]: Session 25 logged out. Waiting for processes to exit.
Jan 14 01:14:46.309442 systemd-logind[2463]: Removed session 25.
Jan 14 01:14:46.321172 kubelet[4014]: E0114 01:14:46.320705 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173"
Jan 14 01:14:46.425704 kernel: kauditd_printk_skb: 46 callbacks suppressed
Jan 14 01:14:46.425795 kernel: audit: type=1130 audit(1768353286.415:917): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.37:22-10.200.16.10:47044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:46.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.37:22-10.200.16.10:47044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:46.416129 systemd[1]: Started sshd@22-10.200.4.37:22-10.200.16.10:47044.service - OpenSSH per-connection server daemon (10.200.16.10:47044).
Jan 14 01:14:46.977000 audit[6424]: USER_ACCT pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:46.985832 sshd[6424]: Accepted publickey for core from 10.200.16.10 port 47044 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo
Jan 14 01:14:46.987020 kernel: audit: type=1101 audit(1768353286.977:918): pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:46.988102 sshd-session[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:14:46.986000 audit[6424]: CRED_ACQ pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:46.999385 kernel: audit: type=1103 audit(1768353286.986:919): pid=6424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:46.999460 kernel: audit: type=1006 audit(1768353286.986:920): pid=6424 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jan 14 01:14:46.999127 systemd-logind[2463]: New session 26 of user core.
Jan 14 01:14:46.986000 audit[6424]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff95521330 a2=3 a3=0 items=0 ppid=1 pid=6424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:47.010452 kernel: audit: type=1300 audit(1768353286.986:920): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff95521330 a2=3 a3=0 items=0 ppid=1 pid=6424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:47.010517 kernel: audit: type=1327 audit(1768353286.986:920): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:14:46.986000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:14:47.012356 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 14 01:14:47.016000 audit[6424]: USER_START pid=6424 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:47.032792 kernel: audit: type=1105 audit(1768353287.016:921): pid=6424 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:47.032862 kernel: audit: type=1103 audit(1768353287.016:922): pid=6428 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:47.016000 audit[6428]: CRED_ACQ pid=6428 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:47.319582 kubelet[4014]: E0114 01:14:47.319454 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e"
Jan 14 01:14:47.342037 sshd[6428]: Connection closed by 10.200.16.10 port 47044
Jan 14 01:14:47.343890 sshd-session[6424]: pam_unix(sshd:session): session closed for user core
Jan 14 01:14:47.344000 audit[6424]: USER_END pid=6424 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:47.348168 systemd[1]: sshd@22-10.200.4.37:22-10.200.16.10:47044.service: Deactivated successfully.
Jan 14 01:14:47.350691 systemd[1]: session-26.scope: Deactivated successfully.
Jan 14 01:14:47.344000 audit[6424]: CRED_DISP pid=6424 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:47.355258 systemd-logind[2463]: Session 26 logged out. Waiting for processes to exit.
Jan 14 01:14:47.357697 systemd-logind[2463]: Removed session 26.
Jan 14 01:14:47.359989 kernel: audit: type=1106 audit(1768353287.344:923): pid=6424 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:47.360047 kernel: audit: type=1104 audit(1768353287.344:924): pid=6424 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:47.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.37:22-10.200.16.10:47044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:49.318593 kubelet[4014]: E0114 01:14:49.318548 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8"
Jan 14 01:14:51.319710 kubelet[4014]: E0114 01:14:51.319654 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d"
Jan 14 01:14:52.452391 systemd[1]: Started sshd@23-10.200.4.37:22-10.200.16.10:33674.service - OpenSSH per-connection server daemon (10.200.16.10:33674).
Jan 14 01:14:52.459818 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:14:52.459853 kernel: audit: type=1130 audit(1768353292.451:926): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.37:22-10.200.16.10:33674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:52.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.37:22-10.200.16.10:33674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:52.999000 audit[6440]: USER_ACCT pid=6440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.007726 kernel: audit: type=1101 audit(1768353292.999:927): pid=6440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.007810 sshd[6440]: Accepted publickey for core from 10.200.16.10 port 33674 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo
Jan 14 01:14:53.009387 sshd-session[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:14:53.017062 kernel: audit: type=1103 audit(1768353293.007:928): pid=6440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.007000 audit[6440]: CRED_ACQ pid=6440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.022235 systemd-logind[2463]: New session 27 of user core.
Jan 14 01:14:53.031827 kernel: audit: type=1006 audit(1768353293.007:929): pid=6440 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jan 14 01:14:53.031889 kernel: audit: type=1300 audit(1768353293.007:929): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdec6b41c0 a2=3 a3=0 items=0 ppid=1 pid=6440 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:53.007000 audit[6440]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdec6b41c0 a2=3 a3=0 items=0 ppid=1 pid=6440 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:53.032201 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 14 01:14:53.007000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:14:53.045539 kernel: audit: type=1327 audit(1768353293.007:929): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:14:53.045601 kernel: audit: type=1105 audit(1768353293.034:930): pid=6440 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.034000 audit[6440]: USER_START pid=6440 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.045000 audit[6444]: CRED_ACQ pid=6444 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.051996 kernel: audit: type=1103 audit(1768353293.045:931): pid=6444 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.382231 sshd[6444]: Connection closed by 10.200.16.10 port 33674
Jan 14 01:14:53.383153 sshd-session[6440]: pam_unix(sshd:session): session closed for user core
Jan 14 01:14:53.383000 audit[6440]: USER_END pid=6440 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.393191 kernel: audit: type=1106 audit(1768353293.383:932): pid=6440 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.393264 kernel: audit: type=1104 audit(1768353293.383:933): pid=6440 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.383000 audit[6440]: CRED_DISP pid=6440 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:53.392906 systemd[1]: sshd@23-10.200.4.37:22-10.200.16.10:33674.service: Deactivated successfully.
Jan 14 01:14:53.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.37:22-10.200.16.10:33674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:53.400235 systemd[1]: session-27.scope: Deactivated successfully.
Jan 14 01:14:53.403025 systemd-logind[2463]: Session 27 logged out. Waiting for processes to exit.
Jan 14 01:14:53.404203 systemd-logind[2463]: Removed session 27.
Jan 14 01:14:54.324305 kubelet[4014]: E0114 01:14:54.324256 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56"
Jan 14 01:14:55.318581 kubelet[4014]: E0114 01:14:55.318538 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae"
Jan 14 01:14:58.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.37:22-10.200.16.10:33680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:58.496167 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:14:58.496204 kernel: audit: type=1130 audit(1768353298.493:935): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.37:22-10.200.16.10:33680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:14:58.494363 systemd[1]: Started sshd@24-10.200.4.37:22-10.200.16.10:33680.service - OpenSSH per-connection server daemon (10.200.16.10:33680).
Jan 14 01:14:59.050000 audit[6459]: USER_ACCT pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:59.051876 sshd[6459]: Accepted publickey for core from 10.200.16.10 port 33680 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo
Jan 14 01:14:59.054404 sshd-session[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:14:59.059003 kernel: audit: type=1101 audit(1768353299.050:936): pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:59.050000 audit[6459]: CRED_ACQ pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:59.064247 systemd-logind[2463]: New session 28 of user core.
Jan 14 01:14:59.071045 kernel: audit: type=1103 audit(1768353299.050:937): pid=6459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:59.071117 kernel: audit: type=1006 audit(1768353299.050:938): pid=6459 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Jan 14 01:14:59.050000 audit[6459]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd95da940 a2=3 a3=0 items=0 ppid=1 pid=6459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:59.079527 kernel: audit: type=1300 audit(1768353299.050:938): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd95da940 a2=3 a3=0 items=0 ppid=1 pid=6459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:14:59.082002 kernel: audit: type=1327 audit(1768353299.050:938): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:14:59.050000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:14:59.082383 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 14 01:14:59.087000 audit[6459]: USER_START pid=6459 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:59.096291 kernel: audit: type=1105 audit(1768353299.087:939): pid=6459 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:59.095000 audit[6463]: CRED_ACQ pid=6463 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:59.102997 kernel: audit: type=1103 audit(1768353299.095:940): pid=6463 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:59.320027 kubelet[4014]: E0114 01:14:59.318824 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173"
Jan 14 01:14:59.321241 kubelet[4014]: E0114 01:14:59.320630 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e"
Jan 14 01:14:59.459005 sshd[6463]: Connection closed by 10.200.16.10 port 33680
Jan 14 01:14:59.459527 sshd-session[6459]: pam_unix(sshd:session): session closed for user core
Jan 14 01:14:59.460000 audit[6459]: USER_END pid=6459 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:14:59.465040 systemd-logind[2463]: Session 28 logged out. Waiting for processes to exit.
Jan 14 01:14:59.467206 systemd[1]: sshd@24-10.200.4.37:22-10.200.16.10:33680.service: Deactivated successfully.
Jan 14 01:14:59.470253 systemd[1]: session-28.scope: Deactivated successfully.
Jan 14 01:14:59.473615 systemd-logind[2463]: Removed session 28.
Jan 14 01:14:59.481987 kernel: audit: type=1106 audit(1768353299.460:941): pid=6459 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:59.460000 audit[6459]: CRED_DISP pid=6459 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:59.496994 kernel: audit: type=1104 audit(1768353299.460:942): pid=6459 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:14:59.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.4.37:22-10.200.16.10:33680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:15:03.320914 kubelet[4014]: E0114 01:15:03.320826 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:15:04.322166 kubelet[4014]: E0114 01:15:04.322125 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:15:04.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.37:22-10.200.16.10:39866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:04.576289 systemd[1]: Started sshd@25-10.200.4.37:22-10.200.16.10:39866.service - OpenSSH per-connection server daemon (10.200.16.10:39866). 
Jan 14 01:15:04.579547 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:15:04.579699 kernel: audit: type=1130 audit(1768353304.575:944): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.37:22-10.200.16.10:39866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:05.128000 audit[6477]: USER_ACCT pid=6477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.132066 sshd-session[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:15:05.133499 sshd[6477]: Accepted publickey for core from 10.200.16.10 port 39866 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:15:05.128000 audit[6477]: CRED_ACQ pid=6477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.143986 kernel: audit: type=1101 audit(1768353305.128:945): pid=6477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.144058 kernel: audit: type=1103 audit(1768353305.128:946): pid=6477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.143126 systemd-logind[2463]: New session 29 of user core. 
Jan 14 01:15:05.128000 audit[6477]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe90b25510 a2=3 a3=0 items=0 ppid=1 pid=6477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:05.148309 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 14 01:15:05.156230 kernel: audit: type=1006 audit(1768353305.128:947): pid=6477 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 14 01:15:05.156283 kernel: audit: type=1300 audit(1768353305.128:947): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe90b25510 a2=3 a3=0 items=0 ppid=1 pid=6477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:05.128000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:05.160991 kernel: audit: type=1327 audit(1768353305.128:947): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:05.152000 audit[6477]: USER_START pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.171994 kernel: audit: type=1105 audit(1768353305.152:948): pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 
01:15:05.156000 audit[6481]: CRED_ACQ pid=6481 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.180004 kernel: audit: type=1103 audit(1768353305.156:949): pid=6481 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.485406 sshd[6481]: Connection closed by 10.200.16.10 port 39866 Jan 14 01:15:05.486363 sshd-session[6477]: pam_unix(sshd:session): session closed for user core Jan 14 01:15:05.486000 audit[6477]: USER_END pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.494721 systemd[1]: sshd@25-10.200.4.37:22-10.200.16.10:39866.service: Deactivated successfully. Jan 14 01:15:05.494999 kernel: audit: type=1106 audit(1768353305.486:950): pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.487000 audit[6477]: CRED_DISP pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.498610 systemd[1]: session-29.scope: Deactivated successfully. 
Jan 14 01:15:05.501020 kernel: audit: type=1104 audit(1768353305.487:951): pid=6477 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:05.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.4.37:22-10.200.16.10:39866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:05.501423 systemd-logind[2463]: Session 29 logged out. Waiting for processes to exit. Jan 14 01:15:05.503195 systemd-logind[2463]: Removed session 29. Jan 14 01:15:07.319508 kubelet[4014]: E0114 01:15:07.319178 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:15:08.319656 kubelet[4014]: E0114 01:15:08.319580 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:15:08.689000 audit[6517]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=6517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:15:08.689000 audit[6517]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeaff22cb0 a2=0 a3=7ffeaff22c9c items=0 ppid=4120 pid=6517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:08.689000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:15:08.695000 audit[6517]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=6517 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:15:08.695000 audit[6517]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffeaff22cb0 a2=0 a3=7ffeaff22c9c items=0 ppid=4120 pid=6517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:08.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:15:10.600278 systemd[1]: Started sshd@26-10.200.4.37:22-10.200.16.10:42590.service - OpenSSH per-connection server daemon (10.200.16.10:42590). 
Jan 14 01:15:10.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.37:22-10.200.16.10:42590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:10.601787 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 01:15:10.601863 kernel: audit: type=1130 audit(1768353310.599:955): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.37:22-10.200.16.10:42590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:11.165000 audit[6519]: USER_ACCT pid=6519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.170896 sshd[6519]: Accepted publickey for core from 10.200.16.10 port 42590 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:15:11.172000 audit[6519]: CRED_ACQ pid=6519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.174143 sshd-session[6519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:15:11.179536 kernel: audit: type=1101 audit(1768353311.165:956): pid=6519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.179605 kernel: audit: type=1103 audit(1768353311.172:957): pid=6519 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.185190 kernel: audit: type=1006 audit(1768353311.172:958): pid=6519 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 14 01:15:11.184833 systemd-logind[2463]: New session 30 of user core. Jan 14 01:15:11.172000 audit[6519]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbe0c3240 a2=3 a3=0 items=0 ppid=1 pid=6519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:11.172000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:11.194452 kernel: audit: type=1300 audit(1768353311.172:958): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbe0c3240 a2=3 a3=0 items=0 ppid=1 pid=6519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:11.194502 kernel: audit: type=1327 audit(1768353311.172:958): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:11.195228 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 14 01:15:11.196000 audit[6519]: USER_START pid=6519 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.203000 audit[6524]: CRED_ACQ pid=6524 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.207538 kernel: audit: type=1105 audit(1768353311.196:959): pid=6519 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.207605 kernel: audit: type=1103 audit(1768353311.203:960): pid=6524 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.527224 sshd[6524]: Connection closed by 10.200.16.10 port 42590 Jan 14 01:15:11.527420 sshd-session[6519]: pam_unix(sshd:session): session closed for user core Jan 14 01:15:11.528000 audit[6519]: USER_END pid=6519 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.535844 systemd[1]: 
sshd@26-10.200.4.37:22-10.200.16.10:42590.service: Deactivated successfully. Jan 14 01:15:11.537154 kernel: audit: type=1106 audit(1768353311.528:961): pid=6519 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.529000 audit[6519]: CRED_DISP pid=6519 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.540158 systemd[1]: session-30.scope: Deactivated successfully. Jan 14 01:15:11.541294 systemd-logind[2463]: Session 30 logged out. Waiting for processes to exit. Jan 14 01:15:11.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.4.37:22-10.200.16.10:42590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:11.543026 kernel: audit: type=1104 audit(1768353311.529:962): pid=6519 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:11.543153 systemd-logind[2463]: Removed session 30. 
Jan 14 01:15:12.320941 kubelet[4014]: E0114 01:15:12.320898 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:15:13.317939 kubelet[4014]: E0114 01:15:13.317895 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:15:14.319801 kubelet[4014]: E0114 01:15:14.319667 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:15:16.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.37:22-10.200.16.10:42598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:16.645214 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:15:16.645263 kernel: audit: type=1130 audit(1768353316.642:964): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.37:22-10.200.16.10:42598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:16.643618 systemd[1]: Started sshd@27-10.200.4.37:22-10.200.16.10:42598.service - OpenSSH per-connection server daemon (10.200.16.10:42598). Jan 14 01:15:17.203547 sshd[6538]: Accepted publickey for core from 10.200.16.10 port 42598 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:15:17.202000 audit[6538]: USER_ACCT pid=6538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.205988 sshd-session[6538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:15:17.211988 kernel: audit: type=1101 audit(1768353317.202:965): pid=6538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.203000 audit[6538]: CRED_ACQ pid=6538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.226993 kernel: audit: type=1103 audit(1768353317.203:966): pid=6538 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.237056 systemd-logind[2463]: New session 31 of user core. Jan 14 01:15:17.240007 kernel: audit: type=1006 audit(1768353317.203:967): pid=6538 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 14 01:15:17.240069 kernel: audit: type=1300 audit(1768353317.203:967): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdadb0a5b0 a2=3 a3=0 items=0 ppid=1 pid=6538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:17.203000 audit[6538]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdadb0a5b0 a2=3 a3=0 items=0 ppid=1 pid=6538 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:17.239161 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 14 01:15:17.203000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:17.257439 kernel: audit: type=1327 audit(1768353317.203:967): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:17.247000 audit[6538]: USER_START pid=6538 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.269017 kernel: audit: type=1105 audit(1768353317.247:968): pid=6538 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.255000 audit[6548]: CRED_ACQ pid=6548 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.277007 kernel: audit: type=1103 audit(1768353317.255:969): pid=6548 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.595782 sshd[6548]: Connection closed by 10.200.16.10 port 42598 Jan 14 01:15:17.596431 sshd-session[6538]: pam_unix(sshd:session): session closed for user core Jan 14 01:15:17.596000 audit[6538]: USER_END pid=6538 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.600493 systemd-logind[2463]: Session 31 logged out. Waiting for processes to exit. Jan 14 01:15:17.602569 systemd[1]: sshd@27-10.200.4.37:22-10.200.16.10:42598.service: Deactivated successfully. Jan 14 01:15:17.605444 systemd[1]: session-31.scope: Deactivated successfully. Jan 14 01:15:17.610364 kernel: audit: type=1106 audit(1768353317.596:970): pid=6538 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.610430 kernel: audit: type=1104 audit(1768353317.596:971): pid=6538 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.596000 audit[6538]: CRED_DISP pid=6538 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:17.607897 systemd-logind[2463]: Removed session 31. Jan 14 01:15:17.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.4.37:22-10.200.16.10:42598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:15:19.319440 kubelet[4014]: E0114 01:15:19.319400 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:15:20.320205 kubelet[4014]: E0114 01:15:20.320145 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:15:22.717478 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:15:22.717597 kernel: audit: type=1130 audit(1768353322.715:973): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.4.37:22-10.200.16.10:48178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:22.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.4.37:22-10.200.16.10:48178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:22.716002 systemd[1]: Started sshd@28-10.200.4.37:22-10.200.16.10:48178.service - OpenSSH per-connection server daemon (10.200.16.10:48178). 
Jan 14 01:15:23.269380 sshd[6569]: Accepted publickey for core from 10.200.16.10 port 48178 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:15:23.268000 audit[6569]: USER_ACCT pid=6569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.273090 sshd-session[6569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:15:23.279993 kernel: audit: type=1101 audit(1768353323.268:974): pid=6569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.271000 audit[6569]: CRED_ACQ pid=6569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.289011 kernel: audit: type=1103 audit(1768353323.271:975): pid=6569 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.291662 systemd-logind[2463]: New session 32 of user core. 
Jan 14 01:15:23.298001 kernel: audit: type=1006 audit(1768353323.271:976): pid=6569 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 14 01:15:23.271000 audit[6569]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5e5e21c0 a2=3 a3=0 items=0 ppid=1 pid=6569 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:23.304872 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 14 01:15:23.305171 kernel: audit: type=1300 audit(1768353323.271:976): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5e5e21c0 a2=3 a3=0 items=0 ppid=1 pid=6569 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:23.271000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:23.311997 kernel: audit: type=1327 audit(1768353323.271:976): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:23.310000 audit[6569]: USER_START pid=6569 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.321218 kernel: audit: type=1105 audit(1768353323.310:977): pid=6569 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 
01:15:23.321282 containerd[2498]: time="2026-01-14T01:15:23.319393233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:15:23.318000 audit[6573]: CRED_ACQ pid=6573 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.324078 kubelet[4014]: E0114 01:15:23.323124 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56" Jan 14 01:15:23.327048 kernel: audit: type=1103 audit(1768353323.318:978): pid=6573 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.615178 containerd[2498]: time="2026-01-14T01:15:23.615127023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:15:23.618451 containerd[2498]: time="2026-01-14T01:15:23.618398980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:15:23.618571 containerd[2498]: time="2026-01-14T01:15:23.618503294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:15:23.620179 kubelet[4014]: E0114 01:15:23.620139 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:15:23.620285 kubelet[4014]: E0114 01:15:23.620195 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:15:23.620406 kubelet[4014]: E0114 01:15:23.620358 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znmt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c84b9c95c-shkh8_calico-system(486952cc-8944-4287-a101-bc04fbfa2173): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:15:23.621890 kubelet[4014]: E0114 01:15:23.621851 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173" Jan 14 01:15:23.649834 sshd[6573]: Connection closed by 10.200.16.10 port 48178 Jan 14 01:15:23.649342 sshd-session[6569]: pam_unix(sshd:session): session closed for user core Jan 14 01:15:23.650000 audit[6569]: USER_END pid=6569 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.661005 kernel: audit: type=1106 audit(1768353323.650:979): pid=6569 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.660704 systemd[1]: sshd@28-10.200.4.37:22-10.200.16.10:48178.service: Deactivated successfully. Jan 14 01:15:23.650000 audit[6569]: CRED_DISP pid=6569 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.667446 systemd[1]: session-32.scope: Deactivated successfully. Jan 14 01:15:23.669018 kernel: audit: type=1104 audit(1768353323.650:980): pid=6569 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:23.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.200.4.37:22-10.200.16.10:48178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:23.670375 systemd-logind[2463]: Session 32 logged out. Waiting for processes to exit. Jan 14 01:15:23.673335 systemd-logind[2463]: Removed session 32. 
Jan 14 01:15:25.319192 kubelet[4014]: E0114 01:15:25.319137 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-cl2ls" podUID="8612edc9-7707-465b-bd59-44c1e0af599e" Jan 14 01:15:28.764254 systemd[1]: Started sshd@29-10.200.4.37:22-10.200.16.10:48182.service - OpenSSH per-connection server daemon (10.200.16.10:48182). Jan 14 01:15:28.771934 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:15:28.772042 kernel: audit: type=1130 audit(1768353328.762:982): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.200.4.37:22-10.200.16.10:48182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:28.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.200.4.37:22-10.200.16.10:48182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:15:29.308000 audit[6588]: USER_ACCT pid=6588 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.310914 sshd[6588]: Accepted publickey for core from 10.200.16.10 port 48182 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:15:29.312245 sshd-session[6588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:15:29.308000 audit[6588]: CRED_ACQ pid=6588 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.320262 kubelet[4014]: E0114 01:15:29.320040 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69869bddb6-9f5bh" podUID="4aa450fa-397e-4bd9-b82d-45d9b129db7d" Jan 14 01:15:29.321415 kernel: audit: type=1101 audit(1768353329.308:983): pid=6588 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.321462 kernel: audit: type=1103 audit(1768353329.308:984): pid=6588 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.328008 kernel: audit: type=1006 audit(1768353329.308:985): pid=6588 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 14 01:15:29.325743 systemd-logind[2463]: New session 33 of user core. Jan 14 01:15:29.308000 audit[6588]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb7b194f0 a2=3 a3=0 items=0 ppid=1 pid=6588 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:29.308000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:29.335713 kernel: audit: type=1300 audit(1768353329.308:985): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb7b194f0 a2=3 a3=0 items=0 ppid=1 pid=6588 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:15:29.335763 kernel: audit: type=1327 audit(1768353329.308:985): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:15:29.336175 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 14 01:15:29.339000 audit[6588]: USER_START pid=6588 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.339000 audit[6594]: CRED_ACQ pid=6594 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.351620 kernel: audit: type=1105 audit(1768353329.339:986): pid=6588 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.351667 kernel: audit: type=1103 audit(1768353329.339:987): pid=6594 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.685264 sshd[6594]: Connection closed by 10.200.16.10 port 48182 Jan 14 01:15:29.687132 sshd-session[6588]: pam_unix(sshd:session): session closed for user core Jan 14 01:15:29.687000 audit[6588]: USER_END pid=6588 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.700004 kernel: audit: type=1106 audit(1768353329.687:988): pid=6588 
uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.703245 systemd[1]: sshd@29-10.200.4.37:22-10.200.16.10:48182.service: Deactivated successfully. Jan 14 01:15:29.687000 audit[6588]: CRED_DISP pid=6588 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.709299 systemd[1]: session-33.scope: Deactivated successfully. Jan 14 01:15:29.713005 kernel: audit: type=1104 audit(1768353329.687:989): pid=6588 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:29.713412 systemd-logind[2463]: Session 33 logged out. Waiting for processes to exit. Jan 14 01:15:29.717232 systemd-logind[2463]: Removed session 33. Jan 14 01:15:29.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.200.4.37:22-10.200.16.10:48182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:15:33.319872 containerd[2498]: time="2026-01-14T01:15:33.319825944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:15:33.600007 containerd[2498]: time="2026-01-14T01:15:33.599780353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:15:33.605325 containerd[2498]: time="2026-01-14T01:15:33.605276543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:15:33.605455 containerd[2498]: time="2026-01-14T01:15:33.605345835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:15:33.605999 kubelet[4014]: E0114 01:15:33.605659 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:15:33.605999 kubelet[4014]: E0114 01:15:33.605800 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:15:33.606836 kubelet[4014]: E0114 01:15:33.606398 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkzbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-rz5pp_calico-system(e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:15:33.607943 kubelet[4014]: E0114 01:15:33.607904 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-rz5pp" podUID="e0f0f92f-0f7f-41a7-be1b-6c10ab1af0c8" Jan 14 01:15:34.319747 containerd[2498]: time="2026-01-14T01:15:34.319680297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:15:34.580469 containerd[2498]: time="2026-01-14T01:15:34.580337874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:15:34.584673 containerd[2498]: 
time="2026-01-14T01:15:34.584624602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:15:34.584792 containerd[2498]: time="2026-01-14T01:15:34.584716777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:15:34.585351 kubelet[4014]: E0114 01:15:34.585309 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:15:34.585411 kubelet[4014]: E0114 01:15:34.585372 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:15:34.586400 kubelet[4014]: E0114 01:15:34.585848 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkpqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78d8c97c7f-fb6hx_calico-apiserver(7c277774-5617-4094-89b3-d4c788250cae): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:15:34.587057 kubelet[4014]: E0114 01:15:34.587021 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78d8c97c7f-fb6hx" podUID="7c277774-5617-4094-89b3-d4c788250cae" Jan 14 01:15:34.800475 systemd[1]: Started sshd@30-10.200.4.37:22-10.200.16.10:47360.service - OpenSSH per-connection server daemon (10.200.16.10:47360). Jan 14 01:15:34.811236 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:15:34.811317 kernel: audit: type=1130 audit(1768353334.799:991): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.200.4.37:22-10.200.16.10:47360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:15:34.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.200.4.37:22-10.200.16.10:47360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:15:35.388000 audit[6605]: USER_ACCT pid=6605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:35.392108 sshd-session[6605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:15:35.393231 sshd[6605]: Accepted publickey for core from 10.200.16.10 port 47360 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:15:35.390000 audit[6605]: CRED_ACQ pid=6605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:35.400733 kernel: audit: type=1101 audit(1768353335.388:992): pid=6605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:35.400909 kernel: audit: type=1103 audit(1768353335.390:993): pid=6605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:15:35.406611 kernel: audit: type=1006 audit(1768353335.390:994): pid=6605 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 14 01:15:35.406955 systemd-logind[2463]: New session 34 of user core. 
Jan 14 01:15:35.408753 kernel: audit: type=1300 audit(1768353335.390:994): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec8cead50 a2=3 a3=0 items=0 ppid=1 pid=6605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:15:35.390000 audit[6605]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec8cead50 a2=3 a3=0 items=0 ppid=1 pid=6605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:15:35.409160 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 14 01:15:35.390000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:15:35.418998 kernel: audit: type=1327 audit(1768353335.390:994): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:15:35.416000 audit[6605]: USER_START pid=6605 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:15:35.423000 audit[6609]: CRED_ACQ pid=6609 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:15:35.432924 kernel: audit: type=1105 audit(1768353335.416:995): pid=6605 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:15:35.432996 kernel: audit: type=1103 audit(1768353335.423:996): pid=6609 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:15:35.753004 sshd[6609]: Connection closed by 10.200.16.10 port 47360
Jan 14 01:15:35.753155 sshd-session[6605]: pam_unix(sshd:session): session closed for user core
Jan 14 01:15:35.756000 audit[6605]: USER_END pid=6605 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:15:35.768114 kernel: audit: type=1106 audit(1768353335.756:997): pid=6605 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:15:35.764831 systemd[1]: sshd@30-10.200.4.37:22-10.200.16.10:47360.service: Deactivated successfully.
Jan 14 01:15:35.767336 systemd[1]: session-34.scope: Deactivated successfully.
Jan 14 01:15:35.756000 audit[6605]: CRED_DISP pid=6605 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:15:35.770284 systemd-logind[2463]: Session 34 logged out. Waiting for processes to exit.
Jan 14 01:15:35.775074 kernel: audit: type=1104 audit(1768353335.756:998): pid=6605 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 14 01:15:35.776004 systemd-logind[2463]: Removed session 34.
Jan 14 01:15:35.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.200.4.37:22-10.200.16.10:47360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:15:37.318036 containerd[2498]: time="2026-01-14T01:15:37.317874156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 14 01:15:37.589285 containerd[2498]: time="2026-01-14T01:15:37.588114607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:15:37.591641 containerd[2498]: time="2026-01-14T01:15:37.591515222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 14 01:15:37.591641 containerd[2498]: time="2026-01-14T01:15:37.591610556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:15:37.591982 kubelet[4014]: E0114 01:15:37.591930 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 14 01:15:37.592996 kubelet[4014]: E0114 01:15:37.592317 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 14 01:15:37.592996 kubelet[4014]: E0114 01:15:37.592476 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:15:37.596830 containerd[2498]: time="2026-01-14T01:15:37.596617189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 14 01:15:37.864155 containerd[2498]: time="2026-01-14T01:15:37.864001918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:15:37.867307 containerd[2498]: time="2026-01-14T01:15:37.867171628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 14 01:15:37.867307 containerd[2498]: time="2026-01-14T01:15:37.867249237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:15:37.867643 kubelet[4014]: E0114 01:15:37.867554 4014 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 14 01:15:37.867643 kubelet[4014]: E0114 01:15:37.867601 4014 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 14 01:15:37.868145 kubelet[4014]: E0114 01:15:37.868099 4014 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdjpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-96vlw_calico-system(a24a17a9-73d6-4ce8-b8ef-5be32d60ba56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:15:37.869602 kubelet[4014]: E0114 01:15:37.869538 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-96vlw" podUID="a24a17a9-73d6-4ce8-b8ef-5be32d60ba56"
Jan 14 01:15:38.320327 kubelet[4014]: E0114 01:15:38.319632 4014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" podUID="486952cc-8944-4287-a101-bc04fbfa2173"
Jan 14 01:15:38.321180 containerd[2498]: time="2026-01-14T01:15:38.321145763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 14 01:15:38.359164 kubelet[4014]: E0114 01:15:38.359046 4014 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: EOF" event="&Event{ObjectMeta:{calico-kube-controllers-7c84b9c95c-shkh8.188a73d48e796856 calico-system 1748 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-kube-controllers-7c84b9c95c-shkh8,UID:486952cc-8944-4287-a101-bc04fbfa2173,APIVersion:v1,ResourceVersion:835,FieldPath:spec.containers{calico-kube-controllers},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4578.0.0-p-4dd79cf71d,},FirstTimestamp:2026-01-14 01:12:40 +0000 UTC,LastTimestamp:2026-01-14 01:15:38.319561224 +0000 UTC m=+224.108185883,Count:12,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578.0.0-p-4dd79cf71d,}"
Jan 14 01:15:38.360538 kubelet[4014]: I0114 01:15:38.360487 4014 status_manager.go:919] "Failed to update status for pod" pod="calico-system/calico-kube-controllers-7c84b9c95c-shkh8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"486952cc-8944-4287-a101-bc04fbfa2173\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"image\\\":\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"calico-kube-controllers\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"rpc error: code = NotFound desc = failed to pull and unpack image \\\\\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\\\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\\\",\\\"reason\\\":\\\"ErrImagePull\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/certs\\\",\\\"name\\\":\\\"tigera-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/cert.pem\\\",\\\"name\\\":\\\"tigera-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-znmt9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"calico-system\"/\"calico-kube-controllers-7c84b9c95c-shkh8\": rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.4.37:54474->10.200.4.21:2379: read: connection reset by peer"