Jan 28 01:23:02.951943 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:22:24 -00 2026 Jan 28 01:23:02.951967 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 01:23:02.951978 kernel: BIOS-provided physical RAM map: Jan 28 01:23:02.951985 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 28 01:23:02.951991 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 28 01:23:02.951997 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jan 28 01:23:02.952005 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jan 28 01:23:02.952012 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jan 28 01:23:02.952018 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jan 28 01:23:02.952026 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 28 01:23:02.952033 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 28 01:23:02.952039 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 28 01:23:02.952045 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 28 01:23:02.952051 kernel: printk: legacy bootconsole [earlyser0] enabled Jan 28 01:23:02.952059 kernel: NX (Execute Disable) protection: active Jan 28 01:23:02.952068 kernel: APIC: Static calls initialized Jan 28 01:23:02.952074 kernel: efi: EFI v2.7 by Microsoft Jan 28 01:23:02.952082 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eac1298 RNG=0x3ffd2018 Jan 28 01:23:02.952088 kernel: random: crng init done Jan 28 01:23:02.952095 kernel: secureboot: Secure boot disabled Jan 28 01:23:02.952102 kernel: SMBIOS 3.1.0 present. 
Jan 28 01:23:02.952109 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Jan 28 01:23:02.952116 kernel: DMI: Memory slots populated: 2/2 Jan 28 01:23:02.952122 kernel: Hypervisor detected: Microsoft Hyper-V Jan 28 01:23:02.952129 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jan 28 01:23:02.952138 kernel: Hyper-V: Nested features: 0x3e0101 Jan 28 01:23:02.952144 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 28 01:23:02.952151 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 28 01:23:02.952158 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 28 01:23:02.952164 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 28 01:23:02.952171 kernel: tsc: Detected 2300.001 MHz processor Jan 28 01:23:02.952178 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 01:23:02.952185 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 01:23:02.952193 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jan 28 01:23:02.952202 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 28 01:23:02.952210 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 01:23:02.952226 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jan 28 01:23:02.952234 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jan 28 01:23:02.952242 kernel: Using GB pages for direct mapping Jan 28 01:23:02.952250 kernel: ACPI: Early table checksum verification disabled Jan 28 01:23:02.952262 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 28 01:23:02.952270 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:23:02.952278 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:23:02.952286 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 28 01:23:02.952293 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 28 01:23:02.952302 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:23:02.952311 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:23:02.952318 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:23:02.952326 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 28 01:23:02.952333 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 28 01:23:02.952341 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 28 01:23:02.952349 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 28 01:23:02.952359 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 28 01:23:02.952367 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 28 01:23:02.952375 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 28 01:23:02.952383 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 28 01:23:02.952390 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 28 01:23:02.952398 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Jan 28 01:23:02.952406 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jan 28 01:23:02.952415 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 28 01:23:02.952423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 28 01:23:02.952431 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jan 28 01:23:02.952439 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jan 28 01:23:02.952447 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jan 28 01:23:02.952455 kernel: Zone ranges: Jan 28 01:23:02.952463 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 01:23:02.952472 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 28 01:23:02.952480 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 28 01:23:02.952488 kernel: Device empty Jan 28 01:23:02.952496 kernel: Movable zone start for each node Jan 28 01:23:02.952503 kernel: Early memory node ranges Jan 28 01:23:02.952511 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 28 01:23:02.952519 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jan 28 01:23:02.952529 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jan 28 01:23:02.952536 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 28 01:23:02.952544 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 28 01:23:02.952552 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 28 01:23:02.952560 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 01:23:02.952568 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 28 01:23:02.952576 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 28 01:23:02.952585 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 28 01:23:02.952594 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 28 01:23:02.952603 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 28 01:23:02.952611 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 28 01:23:02.952619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 01:23:02.952626 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 01:23:02.952634 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 28 01:23:02.952643 kernel: TSC deadline timer available Jan 28 01:23:02.952652 kernel: CPU topo: Max. logical packages: 1 Jan 28 01:23:02.952660 kernel: CPU topo: Max. logical dies: 1 Jan 28 01:23:02.952668 kernel: CPU topo: Max. dies per package: 1 Jan 28 01:23:02.952676 kernel: CPU topo: Max. threads per core: 2 Jan 28 01:23:02.952684 kernel: CPU topo: Num. cores per package: 1 Jan 28 01:23:02.952692 kernel: CPU topo: Num. 
threads per package: 2 Jan 28 01:23:02.952700 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 28 01:23:02.952709 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 28 01:23:02.952717 kernel: Booting paravirtualized kernel on Hyper-V Jan 28 01:23:02.952726 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 01:23:02.952732 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 28 01:23:02.952740 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 28 01:23:02.952748 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 28 01:23:02.952755 kernel: pcpu-alloc: [0] 0 1 Jan 28 01:23:02.952764 kernel: Hyper-V: PV spinlocks enabled Jan 28 01:23:02.952772 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 01:23:02.952781 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 01:23:02.952790 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 28 01:23:02.952798 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 01:23:02.952807 kernel: Fallback order for Node 0: 0 Jan 28 01:23:02.952816 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jan 28 01:23:02.952824 kernel: Policy zone: Normal Jan 28 01:23:02.952832 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 01:23:02.952840 kernel: software IO TLB: area num 2. Jan 28 01:23:02.952848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 28 01:23:02.952874 kernel: ftrace: allocating 40128 entries in 157 pages Jan 28 01:23:02.952883 kernel: ftrace: allocated 157 pages with 5 groups Jan 28 01:23:02.952891 kernel: Dynamic Preempt: voluntary Jan 28 01:23:02.952901 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 01:23:02.952910 kernel: rcu: RCU event tracing is enabled. Jan 28 01:23:02.952925 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 28 01:23:02.952935 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 01:23:02.952943 kernel: Rude variant of Tasks RCU enabled. Jan 28 01:23:02.952952 kernel: Tracing variant of Tasks RCU enabled. Jan 28 01:23:02.952960 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 01:23:02.952968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 28 01:23:02.952975 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 01:23:02.952985 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 01:23:02.952993 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 01:23:02.953001 kernel: Using NULL legacy PIC Jan 28 01:23:02.953009 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 28 01:23:02.953019 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 28 01:23:02.953027 kernel: Console: colour dummy device 80x25 Jan 28 01:23:02.953035 kernel: printk: legacy console [tty1] enabled Jan 28 01:23:02.953043 kernel: printk: legacy console [ttyS0] enabled Jan 28 01:23:02.953052 kernel: printk: legacy bootconsole [earlyser0] disabled Jan 28 01:23:02.953060 kernel: ACPI: Core revision 20240827 Jan 28 01:23:02.953069 kernel: Failed to register legacy timer interrupt Jan 28 01:23:02.953079 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 01:23:02.953088 kernel: x2apic enabled Jan 28 01:23:02.953097 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 01:23:02.953105 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 28 01:23:02.953114 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 28 01:23:02.953122 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jan 28 01:23:02.953132 kernel: Hyper-V: Using IPI hypercalls Jan 28 01:23:02.953142 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 28 01:23:02.953150 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 28 01:23:02.953159 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 28 01:23:02.953168 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 28 01:23:02.953176 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 28 01:23:02.953184 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 28 01:23:02.953193 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, max_idle_ns: 440795237604 ns Jan 28 01:23:02.953203 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300001) Jan 28 01:23:02.953211 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 28 01:23:02.953219 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 28 01:23:02.953228 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 28 01:23:02.953236 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 01:23:02.953243 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 01:23:02.953250 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 01:23:02.953259 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 28 01:23:02.953268 kernel: RETBleed: Vulnerable Jan 28 01:23:02.953276 kernel: Speculative Store Bypass: Vulnerable Jan 28 01:23:02.953283 kernel: active return thunk: its_return_thunk Jan 28 01:23:02.953291 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 28 01:23:02.953298 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 01:23:02.953305 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 01:23:02.953313 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 01:23:02.953321 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 28 01:23:02.953329 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 28 01:23:02.953337 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 28 01:23:02.953347 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jan 28 01:23:02.953355 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jan 28 01:23:02.953362 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jan 28 01:23:02.953369 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 01:23:02.953376 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 28 01:23:02.953384 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 28 01:23:02.953391 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 28 01:23:02.953398 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jan 28 01:23:02.953405 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jan 28 01:23:02.953412 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jan 28 01:23:02.953418 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jan 28 01:23:02.953427 kernel: Freeing SMP alternatives memory: 32K Jan 28 01:23:02.953435 kernel: pid_max: default: 32768 minimum: 301 Jan 28 01:23:02.953441 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 01:23:02.953448 kernel: landlock: Up and running. Jan 28 01:23:02.953455 kernel: SELinux: Initializing. Jan 28 01:23:02.953462 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 28 01:23:02.953470 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 28 01:23:02.953478 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jan 28 01:23:02.953636 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jan 28 01:23:02.953644 kernel: signal: max sigframe size: 11952 Jan 28 01:23:02.953704 kernel: rcu: Hierarchical SRCU implementation. Jan 28 01:23:02.953714 kernel: rcu: Max phase no-delay instances is 400. Jan 28 01:23:02.953723 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 28 01:23:02.953777 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 01:23:02.953785 kernel: smp: Bringing up secondary CPUs ... Jan 28 01:23:02.953840 kernel: smpboot: x86: Booting SMP configuration: Jan 28 01:23:02.953849 kernel: .... 
node #0, CPUs: #1 Jan 28 01:23:02.953926 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 01:23:02.953986 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jan 28 01:23:02.953995 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 283604K reserved, 0K cma-reserved) Jan 28 01:23:02.954049 kernel: devtmpfs: initialized Jan 28 01:23:02.954058 kernel: x86/mm: Memory block size: 128MB Jan 28 01:23:02.954111 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 28 01:23:02.954120 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 01:23:02.954128 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 28 01:23:02.954184 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 01:23:02.954193 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 01:23:02.954255 kernel: audit: initializing netlink subsys (disabled) Jan 28 01:23:02.954273 kernel: audit: type=2000 audit(1769563377.088:1): state=initialized audit_enabled=0 res=1 Jan 28 01:23:02.954283 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 01:23:02.954294 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 01:23:02.954302 kernel: cpuidle: using governor menu Jan 28 01:23:02.954314 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 01:23:02.954323 kernel: dca service started, version 1.12.1 Jan 28 01:23:02.954332 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jan 28 01:23:02.954341 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 28 01:23:02.954351 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 28 01:23:02.954361 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 01:23:02.954371 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 01:23:02.954383 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 01:23:02.954393 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 01:23:02.954403 kernel: ACPI: Added _OSI(Module Device) Jan 28 01:23:02.954411 kernel: ACPI: Added _OSI(Processor Device) Jan 28 01:23:02.954421 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 01:23:02.954430 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 01:23:02.954439 kernel: ACPI: Interpreter enabled Jan 28 01:23:02.954451 kernel: ACPI: PM: (supports S0 S5) Jan 28 01:23:02.954460 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 01:23:02.954468 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 01:23:02.954477 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 28 01:23:02.954485 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 28 01:23:02.954494 kernel: iommu: Default domain type: Translated Jan 28 01:23:02.954503 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 01:23:02.954517 kernel: efivars: Registered efivars operations Jan 28 01:23:02.954526 kernel: PCI: Using ACPI for IRQ routing Jan 28 01:23:02.954534 kernel: PCI: System does not support PCI Jan 28 01:23:02.954542 kernel: vgaarb: loaded Jan 28 01:23:02.954551 kernel: clocksource: Switched to clocksource tsc-early Jan 28 01:23:02.954559 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 01:23:02.954568 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 01:23:02.954581 kernel: pnp: PnP ACPI init Jan 28 01:23:02.954590 kernel: pnp: PnP ACPI: found 3 devices Jan 28 01:23:02.954600 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 01:23:02.954609 kernel: NET: Registered PF_INET protocol family Jan 28 01:23:02.954618 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 28 01:23:02.954628 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 28 01:23:02.954637 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 01:23:02.954650 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 01:23:02.954660 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 28 01:23:02.954669 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 28 01:23:02.954678 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 28 01:23:02.954686 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 28 01:23:02.954694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 01:23:02.954703 kernel: NET: Registered PF_XDP protocol family Jan 28 01:23:02.954713 kernel: PCI: CLS 0 bytes, default 64 Jan 28 01:23:02.954721 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 28 01:23:02.954729 kernel: software IO TLB: mapped [mem 0x000000003a99d000-0x000000003e99d000] (64MB) Jan 28 01:23:02.954737 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jan 28 01:23:02.954745 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jan 28 01:23:02.956268 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735f0517, 
max_idle_ns: 440795237604 ns Jan 28 01:23:02.956285 kernel: clocksource: Switched to clocksource tsc Jan 28 01:23:02.956326 kernel: Initialise system trusted keyrings Jan 28 01:23:02.956332 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 28 01:23:02.956338 kernel: Key type asymmetric registered Jan 28 01:23:02.956344 kernel: Asymmetric key parser 'x509' registered Jan 28 01:23:02.956349 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 28 01:23:02.956355 kernel: io scheduler mq-deadline registered Jan 28 01:23:02.956360 kernel: io scheduler kyber registered Jan 28 01:23:02.956366 kernel: io scheduler bfq registered Jan 28 01:23:02.956371 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 01:23:02.956377 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 01:23:02.956382 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 01:23:02.956388 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 28 01:23:02.956393 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 01:23:02.956398 kernel: i8042: PNP: No PS/2 controller found. Jan 28 01:23:02.956518 kernel: rtc_cmos 00:02: registered as rtc0 Jan 28 01:23:02.956583 kernel: rtc_cmos 00:02: setting system clock to 2026-01-28T01:22:59 UTC (1769563379) Jan 28 01:23:02.956644 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 28 01:23:02.956651 kernel: intel_pstate: Intel P-state driver initializing Jan 28 01:23:02.956656 kernel: efifb: probing for efifb Jan 28 01:23:02.956661 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 28 01:23:02.956668 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 28 01:23:02.956674 kernel: efifb: scrolling: redraw Jan 28 01:23:02.956679 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 28 01:23:02.956684 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 01:23:02.956689 kernel: fb0: EFI VGA frame buffer device Jan 28 01:23:02.956694 kernel: pstore: Using crash dump compression: deflate Jan 28 01:23:02.956700 kernel: pstore: Registered efi_pstore as persistent store backend Jan 28 01:23:02.956705 kernel: NET: Registered PF_INET6 protocol family Jan 28 01:23:02.956711 kernel: Segment Routing with IPv6 Jan 28 01:23:02.956717 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 01:23:02.956722 kernel: NET: Registered PF_PACKET protocol family Jan 28 01:23:02.956727 kernel: Key type dns_resolver registered Jan 28 01:23:02.956732 kernel: IPI shorthand broadcast: enabled Jan 28 01:23:02.956737 kernel: sched_clock: Marking stable (1850227581, 98181448)->(2253347264, -304938235) Jan 28 01:23:02.956742 kernel: registered taskstats version 1 Jan 28 01:23:02.956749 kernel: Loading compiled-in X.509 certificates Jan 28 01:23:02.956754 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 0eb3c2aae9988d4ab7f0e142c4f5c61453c9ddb3' Jan 28 01:23:02.956759 kernel: Demotion targets for Node 0: null Jan 28 01:23:02.956764 kernel: Key type .fscrypt registered Jan 28 01:23:02.956770 kernel: Key type fscrypt-provisioning registered Jan 28 01:23:02.956775 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 28 01:23:02.956780 kernel: ima: Allocated hash algorithm: sha1 Jan 28 01:23:02.956786 kernel: ima: No architecture policies found Jan 28 01:23:02.956791 kernel: clk: Disabling unused clocks Jan 28 01:23:02.956797 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 28 01:23:02.956802 kernel: Write protecting the kernel read-only data: 47104k Jan 28 01:23:02.956807 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 28 01:23:02.956812 kernel: Run /init as init process Jan 28 01:23:02.956817 kernel: with arguments: Jan 28 01:23:02.956824 kernel: /init Jan 28 01:23:02.956829 kernel: with environment: Jan 28 01:23:02.956834 kernel: HOME=/ Jan 28 01:23:02.956843 kernel: TERM=linux Jan 28 01:23:02.956848 kernel: hv_vmbus: Vmbus version:5.3 Jan 28 01:23:02.956853 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 28 01:23:02.956871 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 28 01:23:02.956880 kernel: PTP clock support registered Jan 28 01:23:02.956901 kernel: hv_utils: Registering HyperV Utility Driver Jan 28 01:23:02.956907 kernel: SCSI subsystem initialized Jan 28 01:23:02.956912 kernel: hv_vmbus: registering driver hv_utils Jan 28 01:23:02.956917 kernel: hv_utils: Shutdown IC version 3.2 Jan 28 01:23:02.956922 kernel: hv_utils: Heartbeat IC version 3.0 Jan 28 01:23:02.956927 kernel: hv_utils: TimeSync IC version 4.0 Jan 28 01:23:02.956933 kernel: hv_vmbus: registering driver hv_pci Jan 28 01:23:02.957027 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 28 01:23:02.957097 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 28 01:23:02.957183 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 28 01:23:02.957257 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 28 01:23:02.957346 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 28 01:23:02.957422 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 28 01:23:02.957492 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 28 01:23:02.957565 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 28 01:23:02.957572 kernel: hv_vmbus: registering driver hv_storvsc Jan 28 01:23:02.957651 kernel: scsi host0: storvsc_host_t Jan 28 01:23:02.957735 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 28 01:23:02.957742 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 01:23:02.957747 kernel: hv_vmbus: registering driver hid_hyperv Jan 28 01:23:02.957752 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 28 01:23:02.957827 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 28 01:23:02.957834 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 28 01:23:02.957840 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 28 01:23:02.957936 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 28 01:23:02.958043 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 28 01:23:02.958124 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 28 01:23:02.958136 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 28 01:23:02.958245 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 28 01:23:02.958256 kernel: cdrom: 
Uniform CD-ROM driver Revision: 3.20 Jan 28 01:23:02.961062 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 28 01:23:02.961080 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 01:23:02.961137 kernel: device-mapper: uevent: version 1.0.3 Jan 28 01:23:02.961146 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 01:23:02.961154 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 28 01:23:02.961274 kernel: raid6: avx512x4 gen() 48618 MB/s Jan 28 01:23:02.961286 kernel: raid6: avx512x2 gen() 47229 MB/s Jan 28 01:23:02.961342 kernel: raid6: avx512x1 gen() 30004 MB/s Jan 28 01:23:02.961351 kernel: raid6: avx2x4 gen() 38809 MB/s Jan 28 01:23:02.961406 kernel: raid6: avx2x2 gen() 37195 MB/s Jan 28 01:23:02.961414 kernel: raid6: avx2x1 gen() 31492 MB/s Jan 28 01:23:02.961423 kernel: raid6: using algorithm avx512x4 gen() 48618 MB/s Jan 28 01:23:02.961480 kernel: raid6: .... xor() 7802 MB/s, rmw enabled Jan 28 01:23:02.961488 kernel: raid6: using avx512x2 recovery algorithm Jan 28 01:23:02.961496 kernel: xor: automatically using best checksumming function avx Jan 28 01:23:02.961504 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 01:23:02.961562 kernel: BTRFS: device fsid 0f5fa021-4357-40bb-b32a-e1579c5824ad devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (974) Jan 28 01:23:02.961572 kernel: BTRFS info (device dm-0): first mount of filesystem 0f5fa021-4357-40bb-b32a-e1579c5824ad Jan 28 01:23:02.961580 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:23:02.961592 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 28 01:23:02.961650 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 01:23:02.961659 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 01:23:02.961667 kernel: loop: module loaded Jan 28 01:23:02.961721 kernel: loop0: detected capacity change from 0 to 100552 Jan 28 01:23:02.961731 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:23:02.961740 systemd[1]: Successfully made /usr/ read-only. Jan 28 01:23:02.961804 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 01:23:02.961815 systemd[1]: Detected virtualization microsoft. Jan 28 01:23:02.961823 systemd[1]: Detected architecture x86-64. Jan 28 01:23:02.961959 systemd[1]: Running in initrd. Jan 28 01:23:02.961971 systemd[1]: No hostname configured, using default hostname. Jan 28 01:23:02.961982 systemd[1]: Hostname set to . Jan 28 01:23:02.961997 systemd[1]: Initializing machine ID from random generator. Jan 28 01:23:02.962008 systemd[1]: Queued start job for default target initrd.target. Jan 28 01:23:02.962018 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:23:02.962027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:23:02.962041 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 28 01:23:02.962054 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 01:23:02.962069 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:23:02.962081 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 01:23:02.962091 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 01:23:02.962102 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:23:02.962115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:23:02.962126 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 01:23:02.962136 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:23:02.962147 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:23:02.962158 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:23:02.962168 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:23:02.962183 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:23:02.962196 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:23:02.962207 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:23:02.962216 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 01:23:02.962227 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 01:23:02.962236 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:23:02.962247 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:23:02.962260 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:23:02.962271 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:23:02.962283 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 01:23:02.962292 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 01:23:02.962302 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:23:02.962313 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 01:23:02.962325 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 01:23:02.962340 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 01:23:02.962350 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:23:02.962360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:23:02.962369 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:02.962384 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 01:23:02.962395 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:23:02.962404 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 01:23:02.962416 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 01:23:02.962428 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jan 28 01:23:02.962439 kernel: Bridge firewalling registered Jan 28 01:23:02.962451 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:23:02.962462 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:23:02.962491 systemd-journald[1111]: Collecting audit messages is enabled. Jan 28 01:23:02.962520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:02.962533 kernel: audit: type=1130 audit(1769563382.957:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:02.962544 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 01:23:02.962556 systemd-journald[1111]: Journal started Jan 28 01:23:02.962578 systemd-journald[1111]: Runtime Journal (/run/log/journal/1f5cfda5070d4bf29a4e9bae53ee52c8) is 8M, max 158.5M, 150.5M free. Jan 28 01:23:02.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:02.912366 systemd-modules-load[1112]: Inserted module 'br_netfilter' Jan 28 01:23:02.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:02.969866 kernel: audit: type=1130 audit(1769563382.966:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:02.969885 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:23:02.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:02.977936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:23:02.986960 kernel: audit: type=1130 audit(1769563382.974:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:02.986980 kernel: audit: type=1130 audit(1769563382.980:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:02.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:02.986969 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 01:23:02.989000 audit: BPF prog-id=6 op=LOAD Jan 28 01:23:02.992276 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:23:02.997955 kernel: audit: type=1334 audit(1769563382.989:6): prog-id=6 op=LOAD Jan 28 01:23:02.995948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 28 01:23:03.004996 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:23:03.009247 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:23:03.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.018878 kernel: audit: type=1130 audit(1769563383.008:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.030672 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:23:03.031180 systemd-tmpfiles[1135]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 01:23:03.041467 kernel: audit: type=1130 audit(1769563383.033:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.040390 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:23:03.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.050871 kernel: audit: type=1130 audit(1769563383.046:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.097547 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 01:23:03.106211 systemd-resolved[1131]: Positive Trust Anchors: Jan 28 01:23:03.106224 systemd-resolved[1131]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:23:03.106227 systemd-resolved[1131]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 01:23:03.106254 systemd-resolved[1131]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:23:03.151249 dracut-cmdline[1149]: dracut-109 Jan 28 01:23:03.151249 dracut-cmdline[1149]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=71544b7bf64a92b2aba342c16b083723a12bedf106d3ddb24ccb63046196f1b3 Jan 28 01:23:03.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.155222 systemd-resolved[1131]: Defaulting to hostname 'linux'. Jan 28 01:23:03.171055 kernel: audit: type=1130 audit(1769563383.162:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.155892 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:23:03.163869 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:23:03.263873 kernel: Loading iSCSI transport class v2.0-870. Jan 28 01:23:03.323878 kernel: iscsi: registered transport (tcp) Jan 28 01:23:03.373176 kernel: iscsi: registered transport (qla4xxx) Jan 28 01:23:03.373224 kernel: QLogic iSCSI HBA Driver Jan 28 01:23:03.418867 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:23:03.435932 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:23:03.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.436819 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:23:03.444315 kernel: audit: type=1130 audit(1769563383.435:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.472916 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 01:23:03.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.478024 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 28 01:23:03.480892 kernel: audit: type=1130 audit(1769563383.474:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.481389 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 01:23:03.502504 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:23:03.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.509510 kernel: audit: type=1130 audit(1769563383.502:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.509006 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:23:03.514196 kernel: audit: type=1334 audit(1769563383.502:14): prog-id=7 op=LOAD Jan 28 01:23:03.514217 kernel: audit: type=1334 audit(1769563383.502:15): prog-id=8 op=LOAD Jan 28 01:23:03.502000 audit: BPF prog-id=7 op=LOAD Jan 28 01:23:03.502000 audit: BPF prog-id=8 op=LOAD Jan 28 01:23:03.536759 systemd-udevd[1374]: Using default interface naming scheme 'v257'. Jan 28 01:23:03.547788 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:23:03.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.555137 kernel: audit: type=1130 audit(1769563383.550:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.554885 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 01:23:03.569273 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:23:03.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.577874 kernel: audit: type=1130 audit(1769563383.572:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.588000 audit: BPF prog-id=9 op=LOAD Jan 28 01:23:03.591235 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:23:03.592942 kernel: audit: type=1334 audit(1769563383.588:18): prog-id=9 op=LOAD Jan 28 01:23:03.602748 dracut-pre-trigger[1459]: rd.md=0: removing MD RAID activation Jan 28 01:23:03.625446 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:23:03.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.634058 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 28 01:23:03.638848 kernel: audit: type=1130 audit(1769563383.629:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.640564 systemd-networkd[1481]: lo: Link UP Jan 28 01:23:03.640755 systemd-networkd[1481]: lo: Gained carrier Jan 28 01:23:03.641358 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:23:03.641437 systemd[1]: Reached target network.target - Network. Jan 28 01:23:03.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.646906 kernel: audit: type=1130 audit(1769563383.640:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.693850 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:23:03.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.698178 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 01:23:03.759493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:23:03.759636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:03.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.763926 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:03.766830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:03.776873 kernel: hv_vmbus: registering driver hv_netvsc Jan 28 01:23:03.792873 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52345cef (unnamed net_device) (uninitialized): VF slot 1 added Jan 28 01:23:03.835354 systemd-networkd[1481]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:23:03.835360 systemd-networkd[1481]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:23:03.843948 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 01:23:03.836010 systemd-networkd[1481]: eth0: Link UP Jan 28 01:23:03.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:03.836113 systemd-networkd[1481]: eth0: Gained carrier Jan 28 01:23:03.836121 systemd-networkd[1481]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:23:03.854938 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#101 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 01:23:03.843951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 28 01:23:03.853915 systemd-networkd[1481]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 28 01:23:03.871875 kernel: AES CTR mode by8 optimization enabled Jan 28 01:23:04.045876 kernel: nvme nvme0: using unchecked data buffer Jan 28 01:23:04.136560 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 28 01:23:04.140439 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 01:23:04.266236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 28 01:23:04.277749 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jan 28 01:23:04.294031 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jan 28 01:23:04.408094 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 01:23:04.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:04.409867 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:23:04.413975 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:23:04.416333 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:23:04.423897 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 01:23:04.513734 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:23:04.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:04.810491 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jan 28 01:23:04.810712 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jan 28 01:23:04.813458 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jan 28 01:23:04.815285 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jan 28 01:23:04.819944 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jan 28 01:23:04.824891 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jan 28 01:23:04.829968 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jan 28 01:23:04.832279 kernel: pci 7870:00:00.0: enabling Extended Tags Jan 28 01:23:04.852217 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jan 28 01:23:04.852391 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jan 28 01:23:04.856910 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jan 28 01:23:04.881384 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jan 28 01:23:04.890870 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jan 28 01:23:04.894176 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52345cef eth0: VF registering: eth1 Jan 28 01:23:04.894293 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jan 28 01:23:04.897746 systemd-networkd[1481]: eth1: Interface name change detected, renamed to enP30832s1. 
Jan 28 01:23:04.900965 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jan 28 01:23:05.000872 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 28 01:23:05.004345 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 28 01:23:05.004552 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52345cef eth0: Data path switched to VF: enP30832s1 Jan 28 01:23:05.005462 systemd-networkd[1481]: enP30832s1: Link UP Jan 28 01:23:05.006456 systemd-networkd[1481]: enP30832s1: Gained carrier Jan 28 01:23:05.254006 systemd-networkd[1481]: eth0: Gained IPv6LL Jan 28 01:23:05.515922 disk-uuid[1670]: Warning: The kernel is still using the old partition table. Jan 28 01:23:05.515922 disk-uuid[1670]: The new table will be used at the next reboot or after you Jan 28 01:23:05.515922 disk-uuid[1670]: run partprobe(8) or kpartx(8) Jan 28 01:23:05.515922 disk-uuid[1670]: The operation has completed successfully. Jan 28 01:23:05.527557 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 01:23:05.527651 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 01:23:05.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:05.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:05.532765 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 01:23:05.577286 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1717) Jan 28 01:23:05.577367 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:23:05.578998 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:23:05.598896 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 28 01:23:05.598980 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 28 01:23:05.599873 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 28 01:23:05.605932 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:23:05.606035 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 01:23:05.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:05.609938 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 01:23:06.558328 ignition[1736]: Ignition 2.24.0 Jan 28 01:23:06.558365 ignition[1736]: Stage: fetch-offline Jan 28 01:23:06.558578 ignition[1736]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:06.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:06.560329 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:23:06.558588 ignition[1736]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:06.565475 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 28 01:23:06.558669 ignition[1736]: parsed url from cmdline: "" Jan 28 01:23:06.558672 ignition[1736]: no config URL provided Jan 28 01:23:06.558741 ignition[1736]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 01:23:06.558748 ignition[1736]: no config at "/usr/lib/ignition/user.ign" Jan 28 01:23:06.558753 ignition[1736]: failed to fetch config: resource requires networking Jan 28 01:23:06.559036 ignition[1736]: Ignition finished successfully Jan 28 01:23:06.592693 ignition[1744]: Ignition 2.24.0 Jan 28 01:23:06.592703 ignition[1744]: Stage: fetch Jan 28 01:23:06.592925 ignition[1744]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:06.592932 ignition[1744]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:06.593000 ignition[1744]: parsed url from cmdline: "" Jan 28 01:23:06.593003 ignition[1744]: no config URL provided Jan 28 01:23:06.593007 ignition[1744]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 01:23:06.593012 ignition[1744]: no config at "/usr/lib/ignition/user.ign" Jan 28 01:23:06.593031 ignition[1744]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 28 01:23:06.661818 ignition[1744]: GET result: OK Jan 28 01:23:06.661896 ignition[1744]: config has been read from IMDS userdata Jan 28 01:23:06.661926 ignition[1744]: parsing config with SHA512: 23061829da0d26a3dbbf4aadabb763e267a3e6090cfd1497fed032593409f755746b205696f09affc9a78de0f3e83fd338a8590c1851f6769eb98ee4af6cb8b8 Jan 28 01:23:06.667679 unknown[1744]: fetched base config from "system" Jan 28 01:23:06.667698 unknown[1744]: fetched base config from "system" Jan 28 01:23:06.668031 ignition[1744]: fetch: fetch complete Jan 28 01:23:06.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:06.667704 unknown[1744]: fetched user config from "azure" Jan 28 01:23:06.668036 ignition[1744]: fetch: fetch passed Jan 28 01:23:06.671283 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 01:23:06.668073 ignition[1744]: Ignition finished successfully Jan 28 01:23:06.675977 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 01:23:06.698933 ignition[1750]: Ignition 2.24.0 Jan 28 01:23:06.698944 ignition[1750]: Stage: kargs Jan 28 01:23:06.699129 ignition[1750]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:06.701934 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 01:23:06.699136 ignition[1750]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:06.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:06.708578 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 01:23:06.699832 ignition[1750]: kargs: kargs passed Jan 28 01:23:06.699892 ignition[1750]: Ignition finished successfully Jan 28 01:23:06.726018 ignition[1756]: Ignition 2.24.0 Jan 28 01:23:06.726026 ignition[1756]: Stage: disks Jan 28 01:23:06.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:06.727750 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 28 01:23:06.726200 ignition[1756]: no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:06.730759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 01:23:06.726206 ignition[1756]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:06.732595 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 01:23:06.726824 ignition[1756]: disks: disks passed Jan 28 01:23:06.738167 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:23:06.726851 ignition[1756]: Ignition finished successfully Jan 28 01:23:06.739566 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:23:06.746208 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:23:06.751378 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 01:23:06.819372 systemd-fsck[1764]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 28 01:23:06.824051 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 01:23:06.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:06.827937 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 01:23:07.126872 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 60a46795-cc10-4076-a709-d039d1c23a6b r/w with ordered data mode. Quota mode: none. Jan 28 01:23:07.127609 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 01:23:07.129498 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 01:23:07.162303 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:23:07.166936 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 01:23:07.175974 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 28 01:23:07.179032 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 01:23:07.179239 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:23:07.186740 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 01:23:07.189969 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 01:23:07.197880 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1773) Jan 28 01:23:07.197911 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:23:07.200373 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:23:07.204424 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 28 01:23:07.204464 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 28 01:23:07.205548 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 28 01:23:07.206776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 01:23:07.748348 coreos-metadata[1775]: Jan 28 01:23:07.748 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 01:23:07.751820 coreos-metadata[1775]: Jan 28 01:23:07.751 INFO Fetch successful Jan 28 01:23:07.753930 coreos-metadata[1775]: Jan 28 01:23:07.752 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 28 01:23:07.760838 coreos-metadata[1775]: Jan 28 01:23:07.760 INFO Fetch successful Jan 28 01:23:07.789828 coreos-metadata[1775]: Jan 28 01:23:07.789 INFO wrote hostname ci-4593.0.0-n-2270f1152e to /sysroot/etc/hostname Jan 28 01:23:07.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:07.790470 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 01:23:08.936598 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 01:23:08.945786 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 28 01:23:08.946932 kernel: audit: type=1130 audit(1769563388.937:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:08.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:08.940947 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 01:23:08.950974 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 01:23:08.973232 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 01:23:08.977530 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:23:08.993012 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:23:08.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:09.001872 kernel: audit: type=1130 audit(1769563388.995:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:09.004833 ignition[1878]: INFO : Ignition 2.24.0 Jan 28 01:23:09.004833 ignition[1878]: INFO : Stage: mount Jan 28 01:23:09.008991 ignition[1878]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:09.008991 ignition[1878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:09.008991 ignition[1878]: INFO : mount: mount passed Jan 28 01:23:09.008991 ignition[1878]: INFO : Ignition finished successfully Jan 28 01:23:09.018170 kernel: audit: type=1130 audit(1769563389.007:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:09.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:09.006823 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 28 01:23:09.012144 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:23:09.026648 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:23:09.049877 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1888) Jan 28 01:23:09.049907 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 886243c7-f2f0-4861-ae6f-419cdf70e432 Jan 28 01:23:09.051572 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:23:09.056148 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 28 01:23:09.056184 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 28 01:23:09.056243 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 28 01:23:09.058363 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:23:09.080153 ignition[1905]: INFO : Ignition 2.24.0 Jan 28 01:23:09.080153 ignition[1905]: INFO : Stage: files Jan 28 01:23:09.083903 ignition[1905]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:09.083903 ignition[1905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:09.083903 ignition[1905]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:23:09.100312 ignition[1905]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:23:09.100312 ignition[1905]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:23:09.186973 ignition[1905]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:23:09.189926 ignition[1905]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:23:09.189926 ignition[1905]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:23:09.187220 unknown[1905]: wrote ssh authorized keys file for user: core Jan 28 01:23:09.202550 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 01:23:09.205055 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 28 01:23:09.239797 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 01:23:09.268657 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 01:23:09.272927 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:23:09.272927 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:23:09.272927 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:23:09.272927 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:23:09.272927 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:23:09.272927 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:23:09.272927 ignition[1905]: INFO : files: createFilesystemsFiles: 
createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:23:09.272927 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:23:09.291888 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:23:09.291888 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:23:09.291888 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 01:23:09.291888 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 01:23:09.291888 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 01:23:09.291888 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 28 01:23:09.803451 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 01:23:10.402486 ignition[1905]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 01:23:10.402486 ignition[1905]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 01:23:10.457221 ignition[1905]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:23:10.465412 ignition[1905]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:23:10.465412 ignition[1905]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 01:23:10.465412 ignition[1905]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:23:10.478166 kernel: audit: type=1130 audit(1769563390.471:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.478243 ignition[1905]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:23:10.478243 ignition[1905]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:23:10.478243 ignition[1905]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:23:10.478243 ignition[1905]: INFO : files: files passed Jan 28 01:23:10.478243 ignition[1905]: INFO : Ignition finished successfully Jan 28 01:23:10.467766 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 01:23:10.475120 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 28 01:23:10.484741 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:23:10.500228 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 01:23:10.507851 kernel: audit: type=1130 audit(1769563390.501:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.507902 kernel: audit: type=1131 audit(1769563390.501:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.500328 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 01:23:10.511837 initrd-setup-root-after-ignition[1937]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:23:10.511837 initrd-setup-root-after-ignition[1937]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:23:10.515444 initrd-setup-root-after-ignition[1941]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:23:10.518509 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:23:10.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.519711 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:23:10.526062 kernel: audit: type=1130 audit(1769563390.518:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.526522 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:23:10.564191 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:23:10.564268 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 01:23:10.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.568057 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:23:10.576046 kernel: audit: type=1130 audit(1769563390.567:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.576067 kernel: audit: type=1131 audit(1769563390.567:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:23:10.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.575690 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:23:10.578295 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:23:10.580168 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:23:10.602271 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:23:10.608987 kernel: audit: type=1130 audit(1769563390.602:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.607893 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:23:10.621159 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 01:23:10.621379 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:23:10.623917 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:23:10.628047 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:23:10.631013 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:23:10.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.631133 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:23:10.635124 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:23:10.638008 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:23:10.640994 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:23:10.644986 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:23:10.647982 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:23:10.650292 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 01:23:10.653981 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:23:10.657003 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:23:10.661018 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:23:10.668001 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:23:10.670358 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:23:10.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.673971 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:23:10.674100 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:23:10.679951 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 28 01:23:10.682963 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:23:10.686952 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:23:10.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.687368 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:23:10.689117 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:23:10.689208 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:23:10.699969 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:23:10.700120 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:23:10.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.704019 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:23:10.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.704117 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:23:10.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.706252 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 28 01:23:10.706353 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 01:23:10.713101 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 01:23:10.717171 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 01:23:10.717338 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:23:10.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.721039 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:23:10.721239 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:23:10.721511 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:23:10.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.722136 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 28 01:23:10.722214 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:23:10.722385 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:23:10.722456 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:23:10.728315 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:23:10.740112 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:23:10.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.753693 ignition[1961]: INFO : Ignition 2.24.0 Jan 28 01:23:10.755087 ignition[1961]: INFO : Stage: umount Jan 28 01:23:10.755087 ignition[1961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:23:10.755087 ignition[1961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 28 01:23:10.761914 ignition[1961]: INFO : umount: umount passed Jan 28 01:23:10.761914 ignition[1961]: INFO : Ignition finished successfully Jan 28 01:23:10.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.756938 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:23:10.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.757024 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:23:10.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.763845 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:23:10.763890 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:23:10.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.767986 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:23:10.768025 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:23:10.771132 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 01:23:10.771194 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 01:23:10.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.775011 systemd[1]: Stopped target network.target - Network. Jan 28 01:23:10.776100 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:23:10.777110 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:23:10.785954 systemd[1]: Stopped target paths.target - Path Units. 
Jan 28 01:23:10.788899 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:23:10.789248 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:23:10.791405 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:23:10.795905 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:23:10.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.799228 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:23:10.799262 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:23:10.805946 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:23:10.805976 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:23:10.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.806111 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 28 01:23:10.825000 audit: BPF prog-id=6 op=UNLOAD Jan 28 01:23:10.806128 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 28 01:23:10.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.806309 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:23:10.832000 audit: BPF prog-id=9 op=UNLOAD Jan 28 01:23:10.806347 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:23:10.810924 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:23:10.810954 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:23:10.812960 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:23:10.815973 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 01:23:10.819840 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:23:10.822341 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:23:10.822416 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:23:10.828305 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:23:10.828383 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:23:10.833311 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 01:23:10.846781 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:23:10.846818 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:23:10.849783 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:23:10.856939 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:23:10.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:23:10.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.857001 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:23:10.859143 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:23:10.859197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:23:10.859400 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:23:10.859430 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:23:10.859750 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:23:10.882458 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:23:10.891876 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52345cef eth0: Data path switched from VF: enP30832s1 Jan 28 01:23:10.892439 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 28 01:23:10.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.885089 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:23:10.889535 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:23:10.889587 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:23:10.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.895010 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:23:10.895415 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:23:10.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.899022 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:23:10.899073 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:23:10.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.932978 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:23:10.933174 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:23:10.938746 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:23:10.938791 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:23:10.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:23:10.949636 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:23:10.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.956907 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 01:23:10.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.956973 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:23:10.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:10.963962 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:23:10.964011 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:23:10.971752 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:23:10.971804 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:10.974581 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:23:10.974644 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:23:10.982094 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:23:10.982152 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:23:11.222472 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:23:11.222554 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:23:11.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:11.224799 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:23:11.227908 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:23:11.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:11.227958 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:23:11.231981 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:23:11.270475 systemd[1]: Switching root. Jan 28 01:23:11.334557 systemd-journald[1111]: Journal stopped Jan 28 01:23:18.968609 systemd-journald[1111]: Received SIGTERM from PID 1 (systemd). 
Jan 28 01:23:18.968640 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:23:18.968655 kernel: SELinux: policy capability open_perms=1 Jan 28 01:23:18.968664 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:23:18.968672 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:23:18.968680 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:23:18.968689 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:23:18.968697 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:23:18.968708 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:23:18.968716 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 01:23:18.968743 systemd[1]: Successfully loaded SELinux policy in 196.831ms. Jan 28 01:23:18.968755 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.336ms. Jan 28 01:23:18.968765 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 01:23:18.968776 systemd[1]: Detected virtualization microsoft. Jan 28 01:23:18.968785 systemd[1]: Detected architecture x86-64. Jan 28 01:23:18.968794 systemd[1]: Detected first boot. Jan 28 01:23:18.968804 systemd[1]: Hostname set to . Jan 28 01:23:18.968816 systemd[1]: Initializing machine ID from random generator. Jan 28 01:23:18.968826 zram_generator::config[2004]: No configuration found. Jan 28 01:23:18.968836 kernel: Guest personality initialized and is inactive Jan 28 01:23:18.968845 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 28 01:23:18.968853 kernel: Initialized host personality Jan 28 01:23:18.968878 kernel: NET: Registered PF_VSOCK protocol family Jan 28 01:23:18.968887 systemd[1]: Populated /etc with preset unit settings. Jan 28 01:23:18.968897 kernel: kauditd_printk_skb: 44 callbacks suppressed Jan 28 01:23:18.968907 kernel: audit: type=1334 audit(1769563398.286:89): prog-id=12 op=LOAD Jan 28 01:23:18.968915 kernel: audit: type=1334 audit(1769563398.286:90): prog-id=3 op=UNLOAD Jan 28 01:23:18.968924 kernel: audit: type=1334 audit(1769563398.286:91): prog-id=13 op=LOAD Jan 28 01:23:18.968932 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 01:23:18.968941 kernel: audit: type=1334 audit(1769563398.286:92): prog-id=14 op=LOAD Jan 28 01:23:18.968953 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 01:23:18.968963 kernel: audit: type=1334 audit(1769563398.286:93): prog-id=4 op=UNLOAD Jan 28 01:23:18.968973 kernel: audit: type=1334 audit(1769563398.286:94): prog-id=5 op=UNLOAD Jan 28 01:23:18.968982 kernel: audit: type=1131 audit(1769563398.287:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:18.968991 kernel: audit: type=1334 audit(1769563398.295:96): prog-id=12 op=UNLOAD Jan 28 01:23:18.969001 kernel: audit: type=1130 audit(1769563398.302:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:23:18.969013 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 01:23:18.969023 kernel: audit: type=1131 audit(1769563398.302:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:18.969036 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:23:18.969046 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 01:23:18.969060 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:23:18.969070 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:23:18.969082 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:23:18.969093 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:23:18.969104 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:23:18.969113 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:23:18.969124 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:23:18.969135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:23:18.969148 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:23:18.969158 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:23:18.969169 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 01:23:18.969179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:23:18.969189 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 01:23:18.969198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:23:18.969210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:23:18.969221 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 01:23:18.969231 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 01:23:18.969241 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 01:23:18.969251 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:23:18.969261 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:23:18.969272 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:23:18.969283 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 28 01:23:18.969294 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:23:18.969303 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:23:18.969313 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:23:18.969324 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:23:18.969336 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 01:23:18.969345 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. 
Jan 28 01:23:18.969356 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 28 01:23:18.969366 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:23:18.969377 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 28 01:23:18.969388 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 28 01:23:18.969399 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:23:18.969409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:23:18.969419 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 01:23:18.969429 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 01:23:18.969440 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 01:23:18.969450 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 01:23:18.969462 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:23:18.969471 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 01:23:18.969482 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:23:18.969491 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:23:18.969503 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 01:23:18.969512 systemd[1]: Reached target machines.target - Containers. Jan 28 01:23:18.969522 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:23:18.969534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:23:18.969544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:23:18.969553 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:23:18.969562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:23:18.969571 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:23:18.969580 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:23:18.969590 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 01:23:18.969599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:23:18.969609 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:23:18.969618 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 01:23:18.969627 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 01:23:18.969636 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 01:23:18.969644 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 01:23:18.969655 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:23:18.969663 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jan 28 01:23:18.969692 systemd-journald[2083]: Collecting audit messages is enabled. Jan 28 01:23:18.969718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:23:18.969729 systemd-journald[2083]: Journal started Jan 28 01:23:18.969750 systemd-journald[2083]: Runtime Journal (/run/log/journal/17a24d28e2eb4f58a40af6a61da5c6ee) is 8M, max 158.5M, 150.5M free. Jan 28 01:23:18.413000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 28 01:23:18.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:18.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:18.748000 audit: BPF prog-id=14 op=UNLOAD Jan 28 01:23:18.748000 audit: BPF prog-id=13 op=UNLOAD Jan 28 01:23:18.749000 audit: BPF prog-id=15 op=LOAD Jan 28 01:23:18.749000 audit: BPF prog-id=16 op=LOAD Jan 28 01:23:18.749000 audit: BPF prog-id=17 op=LOAD Jan 28 01:23:18.965000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 01:23:18.965000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc64c41000 a2=4000 a3=0 items=0 ppid=1 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:18.965000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 28 01:23:18.279894 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:23:18.287745 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 28 01:23:18.288096 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 01:23:18.978881 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:23:18.985921 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:23:18.996873 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 01:23:19.004085 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:23:19.012917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:23:19.012971 kernel: fuse: init (API version 7.41) Jan 28 01:23:19.012996 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:23:19.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.017939 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:23:19.021365 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 01:23:19.022535 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 28 01:23:19.023652 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:23:19.024830 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:23:19.026018 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 01:23:19.028107 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:23:19.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.031073 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:23:19.031205 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:23:19.033054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:23:19.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.033205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:23:19.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.035446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:23:19.035621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:23:19.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.039193 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:23:19.039306 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:23:19.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.040741 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:23:19.040883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 28 01:23:19.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.044287 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:23:19.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.045758 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:23:19.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.049573 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:23:19.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.055361 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:23:19.057392 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 28 01:23:19.060567 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 01:23:19.064950 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:23:19.066895 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:23:19.066923 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:23:19.069339 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 01:23:19.142498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:23:19.142603 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:23:19.238979 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 01:23:19.247034 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 01:23:19.249976 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:23:19.251113 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:23:19.253970 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:23:19.256836 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:23:19.272083 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
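Editor's note: several of the skipped units above are pure condition checks, e.g. remount-root.service with `ConditionPathIsReadWrite=!/`, meaning the unit only runs when the root filesystem is not already writable. The sketch below is a rough Python stand-in for that single check (systemd's real condition logic is more involved); here the log shows the unit was skipped, i.e. / was already read-write when the check ran.

```python
import os

# Rough equivalent of ConditionPathIsReadWrite=!/ : "run only if / is mounted
# read-only". A skipped unit, as in the log above, corresponds to False here.
st = os.statvfs("/")
root_is_readonly = bool(st.f_flag & os.ST_RDONLY)
print("ConditionPathIsReadWrite=!/ satisfied:", root_is_readonly)
```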
Jan 28 01:23:19.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.278595 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 01:23:19.281127 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:23:19.284028 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:23:19.297137 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:23:19.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.394566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:23:19.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.400824 systemd-journald[2083]: Time spent on flushing to /var/log/journal/17a24d28e2eb4f58a40af6a61da5c6ee is 10.288ms for 1122 entries. Jan 28 01:23:19.400824 systemd-journald[2083]: System Journal (/var/log/journal/17a24d28e2eb4f58a40af6a61da5c6ee) is 8M, max 2.2G, 2.2G free. Jan 28 01:23:21.673709 systemd-journald[2083]: Received client request to flush runtime journal. Jan 28 01:23:21.673783 kernel: ACPI: bus type drm_connector registered Jan 28 01:23:21.673811 kernel: loop1: detected capacity change from 0 to 50784 Jan 28 01:23:21.673832 kernel: loop2: detected capacity change from 0 to 25512 Jan 28 01:23:19.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:19.739350 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:23:19.739491 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:23:19.822116 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 01:23:19.824098 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:23:19.826890 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 01:23:21.674937 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 01:23:21.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:22.432974 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
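Editor's note: the journald messages above give enough numbers for a quick sanity check: 10.288 ms to flush 1122 entries to /var/log/journal, and a runtime journal using 8M of a 158.5M cap. Working those figures out:

```python
# Figures copied from the systemd-journald messages above.
flush_ms, entries = 10.288, 1122
runtime_used_mib, runtime_max_mib = 8.0, 158.5

print(f"{flush_ms / entries * 1000:.1f} us per flushed entry")            # ~9.2 us
print(f"{runtime_max_mib - runtime_used_mib:.1f} MiB runtime headroom")   # 150.5, as logged
```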
Jan 28 01:23:22.434184 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 01:23:22.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:22.924798 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:23:22.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:22.927529 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:23:23.047145 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:23:23.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:23.095876 kernel: loop3: detected capacity change from 0 to 111560 Jan 28 01:23:23.475298 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:23:23.486226 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 28 01:23:23.487216 kernel: audit: type=1130 audit(1769563403.477:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:23.487248 kernel: audit: type=1334 audit(1769563403.478:134): prog-id=18 op=LOAD Jan 28 01:23:23.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:23.478000 audit: BPF prog-id=18 op=LOAD Jan 28 01:23:23.483002 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 28 01:23:23.478000 audit: BPF prog-id=19 op=LOAD Jan 28 01:23:23.489513 kernel: audit: type=1334 audit(1769563403.478:135): prog-id=19 op=LOAD Jan 28 01:23:23.489605 kernel: audit: type=1334 audit(1769563403.478:136): prog-id=20 op=LOAD Jan 28 01:23:23.478000 audit: BPF prog-id=20 op=LOAD Jan 28 01:23:23.490000 audit: BPF prog-id=21 op=LOAD Jan 28 01:23:23.492895 kernel: audit: type=1334 audit(1769563403.490:137): prog-id=21 op=LOAD Jan 28 01:23:23.493997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:23:23.497680 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:23:23.500000 audit: BPF prog-id=22 op=LOAD Jan 28 01:23:23.503052 kernel: audit: type=1334 audit(1769563403.500:138): prog-id=22 op=LOAD Jan 28 01:23:23.503000 audit: BPF prog-id=23 op=LOAD Jan 28 01:23:23.503000 audit: BPF prog-id=24 op=LOAD Jan 28 01:23:23.507010 kernel: audit: type=1334 audit(1769563403.503:139): prog-id=23 op=LOAD Jan 28 01:23:23.507038 kernel: audit: type=1334 audit(1769563403.503:140): prog-id=24 op=LOAD Jan 28 01:23:23.508979 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... 
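Editor's note: from this point on, many audit events appear twice, once as a named record (SERVICE_START, BPF, CONFIG_CHANGE, ...) and once as a raw `kernel: audit: type=NNNN` line; the type=1130 line above carries the same systemd-sysusers payload as the neighbouring SERVICE_START record. The numeric types seen in this log pair up with the named records as sketched below (pairings taken from matching entries in the log itself).

```python
# Audit record types observed in this boot log, paired with the record names
# that appear alongside them in the surrounding entries.
AUDIT_TYPES = {
    1130: "SERVICE_START",
    1300: "SYSCALL",
    1305: "CONFIG_CHANGE",
    1327: "PROCTITLE",
    1334: "BPF",
}

def describe(type_id: int) -> str:
    return AUDIT_TYPES.get(type_id, f"unknown ({type_id})")

print(describe(1130))   # SERVICE_START
print(describe(1334))   # BPF
```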
Jan 28 01:23:23.514952 kernel: audit: type=1334 audit(1769563403.511:141): prog-id=25 op=LOAD Jan 28 01:23:23.515003 kernel: audit: type=1334 audit(1769563403.511:142): prog-id=26 op=LOAD Jan 28 01:23:23.511000 audit: BPF prog-id=25 op=LOAD Jan 28 01:23:23.511000 audit: BPF prog-id=26 op=LOAD Jan 28 01:23:23.511000 audit: BPF prog-id=27 op=LOAD Jan 28 01:23:23.515995 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:23:23.741219 systemd-nsresourced[2170]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 01:23:23.743333 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 28 01:23:23.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:23.804847 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 01:23:23.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:23.882729 systemd-tmpfiles[2169]: ACLs are not supported, ignoring. Jan 28 01:23:23.882745 systemd-tmpfiles[2169]: ACLs are not supported, ignoring. Jan 28 01:23:23.885159 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:23:23.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:23.887000 audit: BPF prog-id=8 op=UNLOAD Jan 28 01:23:23.887000 audit: BPF prog-id=7 op=UNLOAD Jan 28 01:23:23.888000 audit: BPF prog-id=28 op=LOAD Jan 28 01:23:23.888000 audit: BPF prog-id=29 op=LOAD Jan 28 01:23:23.889464 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:23:23.899120 systemd-oomd[2167]: No swap; memory pressure usage will be degraded Jan 28 01:23:23.899901 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 01:23:23.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:23.913765 systemd-udevd[2188]: Using default interface naming scheme 'v257'. Jan 28 01:23:23.950952 systemd-resolved[2168]: Positive Trust Anchors: Jan 28 01:23:23.950966 systemd-resolved[2168]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:23:23.950969 systemd-resolved[2168]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 01:23:23.950998 systemd-resolved[2168]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:23:24.686260 systemd-resolved[2168]: Using system hostname 'ci-4593.0.0-n-2270f1152e'. Jan 28 01:23:24.687324 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:23:24.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:24.690016 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:23:24.836894 kernel: loop4: detected capacity change from 0 to 219144 Jan 28 01:23:24.868879 kernel: loop5: detected capacity change from 0 to 50784 Jan 28 01:23:24.873590 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:23:24.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:24.876000 audit: BPF prog-id=30 op=LOAD Jan 28 01:23:24.877973 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:23:24.952993 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 01:23:25.097876 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:23:25.143939 kernel: hv_vmbus: registering driver hyperv_fb Jan 28 01:23:25.147098 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 28 01:23:25.147147 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 28 01:23:25.147890 kernel: Console: switching to colour dummy device 80x25 Jan 28 01:23:25.150892 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 01:23:25.222884 kernel: hv_vmbus: registering driver hv_balloon Jan 28 01:23:25.223829 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 28 01:23:25.294913 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#192 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 01:23:25.405949 systemd-networkd[2200]: lo: Link UP Jan 28 01:23:25.405957 systemd-networkd[2200]: lo: Gained carrier Jan 28 01:23:25.408509 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 01:23:25.408739 systemd-networkd[2200]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:23:25.408748 systemd-networkd[2200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
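Editor's note: the negative trust anchors listed by systemd-resolved above are largely the reverse-lookup zones for private and special-use address space, i.e. zones where DNSSEC validation is deliberately skipped. A small sketch mapping a few of the listed zones back to the address ranges they cover:

```python
import ipaddress

# Reverse zones from the systemd-resolved list above, mapped to the prefixes
# they correspond to (RFC 1918 IPv4 space and IPv6 unique-local space).
zones = {
    "10.in-addr.arpa": "10.0.0.0/8",
    "16.172.in-addr.arpa .. 31.172.in-addr.arpa": "172.16.0.0/12",
    "168.192.in-addr.arpa": "192.168.0.0/16",
    "d.f.ip6.arpa": "fd00::/8",
}

for zone, prefix in zones.items():
    net = ipaddress.ip_network(prefix)
    print(f"{zone:45} -> {net}")
```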
Jan 28 01:23:25.410979 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 28 01:23:25.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:25.413038 systemd[1]: Reached target network.target - Network. Jan 28 01:23:25.415915 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 28 01:23:25.424946 kernel: hv_netvsc f8615163-0000-1000-2000-7c1e52345cef eth0: Data path switched to VF: enP30832s1 Jan 28 01:23:25.417988 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 01:23:25.420285 systemd-networkd[2200]: enP30832s1: Link UP Jan 28 01:23:25.420366 systemd-networkd[2200]: eth0: Link UP Jan 28 01:23:25.420368 systemd-networkd[2200]: eth0: Gained carrier Jan 28 01:23:25.420380 systemd-networkd[2200]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:23:25.422031 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:23:25.423157 systemd-networkd[2200]: enP30832s1: Gained carrier Jan 28 01:23:25.428901 systemd-networkd[2200]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 28 01:23:25.481058 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:25.491965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:23:25.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:25.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:25.492889 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:25.496178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:25.510531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:23:25.510725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:25.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:25.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:25.515667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 01:23:25.702204 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 01:23:25.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:23:25.981888 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 28 01:23:25.986881 kernel: loop6: detected capacity change from 0 to 25512 Jan 28 01:23:26.338271 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 28 01:23:26.340603 kernel: loop7: detected capacity change from 0 to 111560 Jan 28 01:23:26.341962 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:23:26.549252 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:23:26.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:26.735021 kernel: loop1: detected capacity change from 0 to 219144 Jan 28 01:23:26.750728 (sd-merge)[2192]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 28 01:23:26.753424 (sd-merge)[2192]: Merged extensions into '/usr'. Jan 28 01:23:26.757581 systemd[1]: Reload requested from client PID 2121 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 01:23:26.757601 systemd[1]: Reloading... Jan 28 01:23:26.805878 zram_generator::config[2309]: No configuration found. Jan 28 01:23:27.141605 systemd[1]: Reloading finished in 383 ms. Jan 28 01:23:27.172809 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 01:23:27.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.179669 systemd[1]: Starting ensure-sysext.service... Jan 28 01:23:27.185000 audit: BPF prog-id=31 op=LOAD Jan 28 01:23:27.185000 audit: BPF prog-id=22 op=UNLOAD Jan 28 01:23:27.185000 audit: BPF prog-id=32 op=LOAD Jan 28 01:23:27.185000 audit: BPF prog-id=33 op=LOAD Jan 28 01:23:27.185000 audit: BPF prog-id=23 op=UNLOAD Jan 28 01:23:27.185000 audit: BPF prog-id=24 op=UNLOAD Jan 28 01:23:27.186000 audit: BPF prog-id=34 op=LOAD Jan 28 01:23:27.186000 audit: BPF prog-id=25 op=UNLOAD Jan 28 01:23:27.186000 audit: BPF prog-id=35 op=LOAD Jan 28 01:23:27.186000 audit: BPF prog-id=36 op=LOAD Jan 28 01:23:27.186000 audit: BPF prog-id=26 op=UNLOAD Jan 28 01:23:27.186000 audit: BPF prog-id=27 op=UNLOAD Jan 28 01:23:27.187000 audit: BPF prog-id=37 op=LOAD Jan 28 01:23:27.187000 audit: BPF prog-id=15 op=UNLOAD Jan 28 01:23:27.187000 audit: BPF prog-id=38 op=LOAD Jan 28 01:23:27.187000 audit: BPF prog-id=39 op=LOAD Jan 28 01:23:27.187000 audit: BPF prog-id=16 op=UNLOAD Jan 28 01:23:27.187000 audit: BPF prog-id=17 op=UNLOAD Jan 28 01:23:27.188000 audit: BPF prog-id=40 op=LOAD Jan 28 01:23:27.188000 audit: BPF prog-id=30 op=UNLOAD Jan 28 01:23:27.188000 audit: BPF prog-id=41 op=LOAD Jan 28 01:23:27.188000 audit: BPF prog-id=42 op=LOAD Jan 28 01:23:27.188000 audit: BPF prog-id=28 op=UNLOAD Jan 28 01:23:27.188000 audit: BPF prog-id=29 op=UNLOAD Jan 28 01:23:27.183936 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
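Editor's note: the (sd-merge) lines above show systemd-sysext overlaying four extension images (containerd, docker, kubernetes, oem-azure) onto /usr, followed by a daemon reload. The log does not show where those images live; purely as an illustration, the sketch below scans the commonly used sysext directories for *.raw images (which directory Flatcar actually uses here is an assumption, not something stated in the log).

```python
from pathlib import Path

# Commonly used systemd-sysext image directories; illustrative only, since the
# log above names the merged images but not their location.
SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for directory in SEARCH_PATHS:
    base = Path(directory)
    if base.is_dir():
        for image in sorted(base.glob("*.raw")):
            print(image)
```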
Jan 28 01:23:27.192000 audit: BPF prog-id=43 op=LOAD Jan 28 01:23:27.192000 audit: BPF prog-id=18 op=UNLOAD Jan 28 01:23:27.192000 audit: BPF prog-id=44 op=LOAD Jan 28 01:23:27.192000 audit: BPF prog-id=45 op=LOAD Jan 28 01:23:27.192000 audit: BPF prog-id=19 op=UNLOAD Jan 28 01:23:27.192000 audit: BPF prog-id=20 op=UNLOAD Jan 28 01:23:27.193000 audit: BPF prog-id=46 op=LOAD Jan 28 01:23:27.193000 audit: BPF prog-id=21 op=UNLOAD Jan 28 01:23:27.201989 systemd[1]: Reload requested from client PID 2368 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:23:27.202256 systemd[1]: Reloading... Jan 28 01:23:27.209457 systemd-tmpfiles[2369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 01:23:27.209489 systemd-tmpfiles[2369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 01:23:27.209675 systemd-tmpfiles[2369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:23:27.210675 systemd-tmpfiles[2369]: ACLs are not supported, ignoring. Jan 28 01:23:27.210732 systemd-tmpfiles[2369]: ACLs are not supported, ignoring. Jan 28 01:23:27.265883 zram_generator::config[2405]: No configuration found. Jan 28 01:23:27.269993 systemd-networkd[2200]: eth0: Gained IPv6LL Jan 28 01:23:27.440108 systemd[1]: Reloading finished in 237 ms. Jan 28 01:23:27.457586 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:23:27.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.458000 audit: BPF prog-id=47 op=LOAD Jan 28 01:23:27.458000 audit: BPF prog-id=46 op=UNLOAD Jan 28 01:23:27.459000 audit: BPF prog-id=48 op=LOAD Jan 28 01:23:27.459000 audit: BPF prog-id=31 op=UNLOAD Jan 28 01:23:27.459000 audit: BPF prog-id=49 op=LOAD Jan 28 01:23:27.459000 audit: BPF prog-id=50 op=LOAD Jan 28 01:23:27.459000 audit: BPF prog-id=32 op=UNLOAD Jan 28 01:23:27.459000 audit: BPF prog-id=33 op=UNLOAD Jan 28 01:23:27.460000 audit: BPF prog-id=51 op=LOAD Jan 28 01:23:27.460000 audit: BPF prog-id=40 op=UNLOAD Jan 28 01:23:27.461420 systemd-tmpfiles[2369]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 28 01:23:27.461425 systemd-tmpfiles[2369]: Skipping /boot Jan 28 01:23:27.461000 audit: BPF prog-id=52 op=LOAD Jan 28 01:23:27.461000 audit: BPF prog-id=53 op=LOAD Jan 28 01:23:27.461000 audit: BPF prog-id=41 op=UNLOAD Jan 28 01:23:27.461000 audit: BPF prog-id=42 op=UNLOAD Jan 28 01:23:27.462000 audit: BPF prog-id=54 op=LOAD Jan 28 01:23:27.462000 audit: BPF prog-id=43 op=UNLOAD Jan 28 01:23:27.462000 audit: BPF prog-id=55 op=LOAD Jan 28 01:23:27.462000 audit: BPF prog-id=56 op=LOAD Jan 28 01:23:27.462000 audit: BPF prog-id=44 op=UNLOAD Jan 28 01:23:27.462000 audit: BPF prog-id=45 op=UNLOAD Jan 28 01:23:27.463000 audit: BPF prog-id=57 op=LOAD Jan 28 01:23:27.463000 audit: BPF prog-id=37 op=UNLOAD Jan 28 01:23:27.463000 audit: BPF prog-id=58 op=LOAD Jan 28 01:23:27.463000 audit: BPF prog-id=59 op=LOAD Jan 28 01:23:27.463000 audit: BPF prog-id=38 op=UNLOAD Jan 28 01:23:27.463000 audit: BPF prog-id=39 op=UNLOAD Jan 28 01:23:27.464000 audit: BPF prog-id=60 op=LOAD Jan 28 01:23:27.464000 audit: BPF prog-id=34 op=UNLOAD Jan 28 01:23:27.464000 audit: BPF prog-id=61 op=LOAD Jan 28 01:23:27.464000 audit: BPF prog-id=62 op=LOAD Jan 28 01:23:27.464000 audit: BPF prog-id=35 op=UNLOAD Jan 28 01:23:27.464000 audit: BPF prog-id=36 op=UNLOAD Jan 28 01:23:27.471190 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:23:27.471539 systemd-tmpfiles[2369]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:23:27.471552 systemd-tmpfiles[2369]: Skipping /boot Jan 28 01:23:27.483046 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:23:27.483203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:23:27.484401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:23:27.488100 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:23:27.492825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:23:27.493176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:23:27.493349 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:23:27.493449 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:23:27.493549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:23:27.494756 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:23:27.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.500421 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:23:27.500597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 28 01:23:27.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.501170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:23:27.501321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:23:27.503825 systemd[1]: Finished ensure-sysext.service. Jan 28 01:23:27.506390 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:23:27.507277 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:23:27.747259 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:23:27.747478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:23:27.749386 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:23:27.749547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:23:27.749618 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 01:23:27.753118 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:23:27.753194 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:23:27.756835 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:23:27.756940 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:23:27.756980 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:23:27.766428 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:23:27.766532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 28 01:23:27.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.768186 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:23:27.768386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:23:27.768666 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:23:27.768828 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:23:27.769839 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:23:27.781000 audit[2478]: SYSTEM_BOOT pid=2478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.787924 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:23:27.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:27.846058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:23:27.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:28.620877 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:23:28.623879 kernel: kauditd_printk_skb: 98 callbacks suppressed Jan 28 01:23:28.623925 kernel: audit: type=1130 audit(1769563408.621:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:23:28.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:23:29.355000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 01:23:29.358011 augenrules[2502]: No rules Jan 28 01:23:29.360853 kernel: audit: type=1305 audit(1769563409.355:242): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 01:23:29.360930 kernel: audit: type=1300 audit(1769563409.355:242): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3ad11de0 a2=420 a3=0 items=0 ppid=2472 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:29.355000 audit[2502]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3ad11de0 a2=420 a3=0 items=0 ppid=2472 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:23:29.355000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:23:29.361314 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:23:29.362188 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:23:29.362871 kernel: audit: type=1327 audit(1769563409.355:242): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:23:30.693905 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:23:30.695626 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:23:34.643384 ldconfig[2475]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:23:34.659665 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:23:34.661944 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:23:34.681094 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:23:34.684081 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:23:34.687014 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:23:34.688436 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:23:34.697912 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 01:23:34.699416 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:23:34.701940 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:23:34.704914 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 01:23:34.707939 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 28 01:23:34.710893 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:23:34.712179 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
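Editor's note: the PROCTITLE record above stores the command line hex-encoded, with NUL bytes between arguments. Decoding it recovers the exact auditctl invocation run by audit-rules.service, which is consistent with the "No rules" message from augenrules.

```python
# Hex string copied from the PROCTITLE audit record above; arguments are
# separated by NUL bytes in the encoded command line.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```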
Jan 28 01:23:34.712206 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:23:34.714899 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:23:34.717585 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:23:34.719736 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 01:23:34.723271 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 01:23:34.726032 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 01:23:34.728925 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 01:23:34.731185 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:23:34.732555 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 01:23:34.734267 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:23:34.736098 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:23:34.737571 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:23:34.739936 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:23:34.739960 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:23:34.754929 systemd[1]: Starting chronyd.service - NTP client/server... Jan 28 01:23:34.756785 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:23:34.763013 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 01:23:34.769980 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:23:34.773028 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:23:34.780075 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:23:34.784702 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:23:34.786260 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:23:34.788032 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 01:23:34.790978 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 28 01:23:34.793009 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 28 01:23:34.794684 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 28 01:23:34.798902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:23:34.803014 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:23:34.805954 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:23:34.812503 KVP[2525]: KVP starting; pid is:2525 Jan 28 01:23:34.813049 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:23:34.815073 jq[2520]: false Jan 28 01:23:34.818888 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 28 01:23:34.820201 KVP[2525]: KVP LIC Version: 3.1 Jan 28 01:23:34.820947 kernel: hv_utils: KVP IC version 4.0 Jan 28 01:23:34.824105 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:23:34.828758 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:23:34.831970 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:23:34.836088 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:23:34.837060 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:23:34.840778 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:23:34.849160 extend-filesystems[2523]: Found /dev/nvme0n1p6 Jan 28 01:23:34.853111 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 01:23:34.855157 google_oslogin_nss_cache[2524]: oslogin_cache_refresh[2524]: Refreshing passwd entry cache Jan 28 01:23:34.855338 oslogin_cache_refresh[2524]: Refreshing passwd entry cache Jan 28 01:23:34.857369 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:23:34.857913 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:23:34.858847 chronyd[2514]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 28 01:23:34.862794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:23:34.863073 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:23:34.870750 extend-filesystems[2523]: Found /dev/nvme0n1p9 Jan 28 01:23:34.873950 jq[2539]: true Jan 28 01:23:34.880136 chronyd[2514]: Timezone right/UTC failed leap second check, ignoring Jan 28 01:23:34.880263 chronyd[2514]: Loaded seccomp filter (level 2) Jan 28 01:23:34.883604 systemd[1]: Started chronyd.service - NTP client/server. Jan 28 01:23:34.887808 extend-filesystems[2523]: Checking size of /dev/nvme0n1p9 Jan 28 01:23:34.885429 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:23:34.892006 jq[2553]: true Jan 28 01:23:34.885625 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:23:34.905380 google_oslogin_nss_cache[2524]: oslogin_cache_refresh[2524]: Failure getting users, quitting Jan 28 01:23:34.905380 google_oslogin_nss_cache[2524]: oslogin_cache_refresh[2524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:23:34.905380 google_oslogin_nss_cache[2524]: oslogin_cache_refresh[2524]: Refreshing group entry cache Jan 28 01:23:34.904690 oslogin_cache_refresh[2524]: Failure getting users, quitting Jan 28 01:23:34.904704 oslogin_cache_refresh[2524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:23:34.904740 oslogin_cache_refresh[2524]: Refreshing group entry cache Jan 28 01:23:34.913854 update_engine[2538]: I20260128 01:23:34.913776 2538 main.cc:92] Flatcar Update Engine starting Jan 28 01:23:34.921782 google_oslogin_nss_cache[2524]: oslogin_cache_refresh[2524]: Failure getting groups, quitting Jan 28 01:23:34.921782 google_oslogin_nss_cache[2524]: oslogin_cache_refresh[2524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 28 01:23:34.921600 oslogin_cache_refresh[2524]: Failure getting groups, quitting Jan 28 01:23:34.921609 oslogin_cache_refresh[2524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:23:34.926538 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 01:23:34.927475 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 01:23:34.934039 extend-filesystems[2523]: Resized partition /dev/nvme0n1p9 Jan 28 01:23:34.937593 extend-filesystems[2589]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 01:23:34.948918 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 28 01:23:34.954968 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 28 01:23:34.955530 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:23:35.004945 tar[2544]: linux-amd64/LICENSE Jan 28 01:23:35.006840 tar[2544]: linux-amd64/helm Jan 28 01:23:35.022688 extend-filesystems[2589]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 28 01:23:35.022688 extend-filesystems[2589]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 28 01:23:35.022688 extend-filesystems[2589]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 28 01:23:35.040961 extend-filesystems[2523]: Resized filesystem in /dev/nvme0n1p9 Jan 28 01:23:35.026134 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:23:35.026329 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:23:35.036447 systemd-logind[2535]: New seat seat0. Jan 28 01:23:35.043441 systemd-logind[2535]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 28 01:23:35.043625 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:23:35.059842 bash[2600]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:23:35.060517 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:23:35.074010 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 28 01:23:35.142096 dbus-daemon[2517]: [system] SELinux support is enabled Jan 28 01:23:35.142421 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:23:35.149337 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:23:35.152174 dbus-daemon[2517]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 01:23:35.149363 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:23:35.151846 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:23:35.151890 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:23:35.154269 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:23:35.172918 update_engine[2538]: I20260128 01:23:35.172870 2538 update_check_scheduler.cc:74] Next update check in 3m13s Jan 28 01:23:35.197774 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
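Editor's note: the extend-filesystems/resize2fs lines above grow the root filesystem on /dev/nvme0n1p9 online, from 6359552 to 6376955 4k blocks. In byte terms that works out to:

```python
# Block counts and block size taken from the EXT4/resize2fs messages above.
BLOCK = 4096
old_blocks, new_blocks = 6_359_552, 6_376_955

new_bytes = new_blocks * BLOCK
grown_bytes = (new_blocks - old_blocks) * BLOCK

print(f"new size : {new_bytes / 2**30:.2f} GiB")    # ~24.33 GiB
print(f"grown by : {grown_bytes / 2**20:.1f} MiB")  # ~68.0 MiB
```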
Jan 28 01:23:35.231476 sshd_keygen[2574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:23:35.253030 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:23:35.261429 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:23:35.262012 coreos-metadata[2516]: Jan 28 01:23:35.261 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 28 01:23:35.266044 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 28 01:23:35.267406 coreos-metadata[2516]: Jan 28 01:23:35.267 INFO Fetch successful Jan 28 01:23:35.270013 coreos-metadata[2516]: Jan 28 01:23:35.267 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 28 01:23:35.281531 coreos-metadata[2516]: Jan 28 01:23:35.280 INFO Fetch successful Jan 28 01:23:35.281531 coreos-metadata[2516]: Jan 28 01:23:35.280 INFO Fetching http://168.63.129.16/machine/1b70522e-f057-48f6-99fd-be2dc60b127d/b325660c%2D8847%2D4020%2D87a1%2D4f0bc74de37d.%5Fci%2D4593.0.0%2Dn%2D2270f1152e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 28 01:23:35.283741 coreos-metadata[2516]: Jan 28 01:23:35.283 INFO Fetch successful Jan 28 01:23:35.283741 coreos-metadata[2516]: Jan 28 01:23:35.283 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 28 01:23:35.293128 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:23:35.293893 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:23:35.304986 coreos-metadata[2516]: Jan 28 01:23:35.304 INFO Fetch successful Jan 28 01:23:35.311934 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:23:35.322157 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 28 01:23:35.341830 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 01:23:35.344640 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:23:35.347872 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:23:35.353289 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:23:35.356083 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:23:35.357979 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 01:23:35.493064 locksmithd[2631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:23:35.562231 tar[2544]: linux-amd64/README.md Jan 28 01:23:35.576930 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
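Editor's note: the coreos-metadata fetches above hit two Azure endpoints, the wire server at 168.63.129.16 and the instance metadata service at 169.254.169.254. The last request can be reproduced by hand; the one detail the log does not show is that IMDS only answers requests carrying a `Metadata: true` header (an Azure IMDS requirement, not something visible here). A minimal sketch, runnable only from inside an Azure VM:

```python
import urllib.request

# URL copied verbatim from the last coreos-metadata fetch above. The
# "Metadata: true" header is required by Azure IMDS; without it the request
# is rejected.
URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

request = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(request, timeout=5) as response:
    print(response.read().decode())   # the VM size for this instance
```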
Jan 28 01:23:36.114301 containerd[2559]: time="2026-01-28T01:23:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 01:23:36.115578 containerd[2559]: time="2026-01-28T01:23:36.115455116Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 01:23:36.128355 containerd[2559]: time="2026-01-28T01:23:36.128322523Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.95µs" Jan 28 01:23:36.128442 containerd[2559]: time="2026-01-28T01:23:36.128429105Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 01:23:36.128504 containerd[2559]: time="2026-01-28T01:23:36.128494761Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 01:23:36.128557 containerd[2559]: time="2026-01-28T01:23:36.128547756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 01:23:36.128710 containerd[2559]: time="2026-01-28T01:23:36.128697015Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 01:23:36.128752 containerd[2559]: time="2026-01-28T01:23:36.128743339Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:23:36.128828 containerd[2559]: time="2026-01-28T01:23:36.128817183Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:23:36.128879 containerd[2559]: time="2026-01-28T01:23:36.128869815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:23:36.129122 containerd[2559]: time="2026-01-28T01:23:36.129107899Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:23:36.129161 containerd[2559]: time="2026-01-28T01:23:36.129153438Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:23:36.129198 containerd[2559]: time="2026-01-28T01:23:36.129188941Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:23:36.129229 containerd[2559]: time="2026-01-28T01:23:36.129222529Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:23:36.129384 containerd[2559]: time="2026-01-28T01:23:36.129373862Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 01:23:36.129423 containerd[2559]: time="2026-01-28T01:23:36.129415660Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 01:23:36.129524 containerd[2559]: time="2026-01-28T01:23:36.129513757Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 
28 01:23:36.129727 containerd[2559]: time="2026-01-28T01:23:36.129710747Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:23:36.129793 containerd[2559]: time="2026-01-28T01:23:36.129781016Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:23:36.129831 containerd[2559]: time="2026-01-28T01:23:36.129822829Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 01:23:36.129928 containerd[2559]: time="2026-01-28T01:23:36.129917412Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 01:23:36.130282 containerd[2559]: time="2026-01-28T01:23:36.130268644Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 01:23:36.130418 containerd[2559]: time="2026-01-28T01:23:36.130400541Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:23:36.145164 containerd[2559]: time="2026-01-28T01:23:36.145125952Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 01:23:36.145772 containerd[2559]: time="2026-01-28T01:23:36.145263313Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:23:36.146112 containerd[2559]: time="2026-01-28T01:23:36.146079416Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 01:23:36.146176 containerd[2559]: time="2026-01-28T01:23:36.146162432Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146239219Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146256586Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146269028Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146279493Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146291721Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146392022Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146409515Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146431229Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146444242Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 01:23:36.146493 containerd[2559]: time="2026-01-28T01:23:36.146458549Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 01:23:36.146723 containerd[2559]: time="2026-01-28T01:23:36.146567861Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 01:23:36.146723 containerd[2559]: time="2026-01-28T01:23:36.146586275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 01:23:36.146723 containerd[2559]: time="2026-01-28T01:23:36.146600615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 01:23:36.148225 containerd[2559]: time="2026-01-28T01:23:36.146611535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148234955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148251853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148281017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148299049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148311850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148322687Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148334594Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148373987Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148445910Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148460013Z" level=info msg="Start snapshots syncer" Jan 28 01:23:36.148604 containerd[2559]: time="2026-01-28T01:23:36.148481234Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 01:23:36.148840 containerd[2559]: time="2026-01-28T01:23:36.148774942Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 01:23:36.148840 containerd[2559]: time="2026-01-28T01:23:36.148825797Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 01:23:36.148987 containerd[2559]: time="2026-01-28T01:23:36.148884383Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 01:23:36.148987 containerd[2559]: time="2026-01-28T01:23:36.148976580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 01:23:36.149031 containerd[2559]: time="2026-01-28T01:23:36.148995340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 01:23:36.149031 containerd[2559]: time="2026-01-28T01:23:36.149005780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 01:23:36.149031 containerd[2559]: time="2026-01-28T01:23:36.149016370Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 01:23:36.149094 containerd[2559]: time="2026-01-28T01:23:36.149034766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 01:23:36.149094 containerd[2559]: time="2026-01-28T01:23:36.149047009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 01:23:36.149094 containerd[2559]: time="2026-01-28T01:23:36.149057629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 01:23:36.149094 containerd[2559]: time="2026-01-28T01:23:36.149068735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 
01:23:36.149094 containerd[2559]: time="2026-01-28T01:23:36.149082975Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 01:23:36.149192 containerd[2559]: time="2026-01-28T01:23:36.149106001Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:23:36.149192 containerd[2559]: time="2026-01-28T01:23:36.149121317Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:23:36.149192 containerd[2559]: time="2026-01-28T01:23:36.149131921Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:23:36.149192 containerd[2559]: time="2026-01-28T01:23:36.149141612Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:23:36.149192 containerd[2559]: time="2026-01-28T01:23:36.149155855Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 01:23:36.149192 containerd[2559]: time="2026-01-28T01:23:36.149165832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 01:23:36.149192 containerd[2559]: time="2026-01-28T01:23:36.149185387Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 01:23:36.149322 containerd[2559]: time="2026-01-28T01:23:36.149197867Z" level=info msg="runtime interface created" Jan 28 01:23:36.149322 containerd[2559]: time="2026-01-28T01:23:36.149203917Z" level=info msg="created NRI interface" Jan 28 01:23:36.149322 containerd[2559]: time="2026-01-28T01:23:36.149212007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 01:23:36.149322 containerd[2559]: time="2026-01-28T01:23:36.149223357Z" level=info msg="Connect containerd service" Jan 28 01:23:36.149322 containerd[2559]: time="2026-01-28T01:23:36.149242272Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:23:36.151010 containerd[2559]: time="2026-01-28T01:23:36.150871571Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:23:36.291995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
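The CRI plugin's "failed to load cni during init" error above is expected on a first boot: nothing has populated /etc/cni/net.d yet, and a network config is normally installed later by the cluster's network add-on. Purely as an illustration of the kind of file the loader is looking for, a minimal bridge-based conflist might look like the sketch below; the file name, network name, and subnet are arbitrary examples, not values taken from this system.

  mkdir -p /etc/cni/net.d
  cat >/etc/cni/net.d/10-example.conflist <<'EOF'
  {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "ranges": [[{ "subnet": "10.88.0.0/16" }]],
          "routes": [{ "dst": "0.0.0.0/0" }]
        }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF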
Jan 28 01:23:36.314157 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:23:36.470990 waagent[2655]: 2026-01-28T01:23:36.469536Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 28 01:23:36.471440 waagent[2655]: 2026-01-28T01:23:36.471395Z INFO Daemon Daemon OS: flatcar 4593.0.0 Jan 28 01:23:36.472906 waagent[2655]: 2026-01-28T01:23:36.472848Z INFO Daemon Daemon Python: 3.12.11 Jan 28 01:23:36.476169 waagent[2655]: 2026-01-28T01:23:36.476128Z INFO Daemon Daemon Run daemon Jan 28 01:23:36.477481 waagent[2655]: 2026-01-28T01:23:36.477331Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4593.0.0' Jan 28 01:23:36.481143 waagent[2655]: 2026-01-28T01:23:36.480936Z INFO Daemon Daemon Using waagent for provisioning Jan 28 01:23:36.482505 waagent[2655]: 2026-01-28T01:23:36.482459Z INFO Daemon Daemon Activate resource disk Jan 28 01:23:36.484953 waagent[2655]: 2026-01-28T01:23:36.484915Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 28 01:23:36.488488 waagent[2655]: 2026-01-28T01:23:36.488349Z INFO Daemon Daemon Found device: None Jan 28 01:23:36.489843 waagent[2655]: 2026-01-28T01:23:36.489796Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 28 01:23:36.493813 waagent[2655]: 2026-01-28T01:23:36.491791Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 28 01:23:36.496417 waagent[2655]: 2026-01-28T01:23:36.496281Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 01:23:36.497503 waagent[2655]: 2026-01-28T01:23:36.497471Z INFO Daemon Daemon Running default provisioning handler Jan 28 01:23:36.505066 waagent[2655]: 2026-01-28T01:23:36.504094Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 28 01:23:36.508412 waagent[2655]: 2026-01-28T01:23:36.508380Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 28 01:23:36.512162 waagent[2655]: 2026-01-28T01:23:36.512119Z INFO Daemon Daemon cloud-init is enabled: False Jan 28 01:23:36.514959 waagent[2655]: 2026-01-28T01:23:36.514918Z INFO Daemon Daemon Copying ovf-env.xml Jan 28 01:23:36.566595 containerd[2559]: time="2026-01-28T01:23:36.566563609Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:23:36.566710 containerd[2559]: time="2026-01-28T01:23:36.566699730Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 28 01:23:36.566954 containerd[2559]: time="2026-01-28T01:23:36.566931626Z" level=info msg="Start subscribing containerd event" Jan 28 01:23:36.567253 containerd[2559]: time="2026-01-28T01:23:36.567226760Z" level=info msg="Start recovering state" Jan 28 01:23:36.567379 containerd[2559]: time="2026-01-28T01:23:36.567360906Z" level=info msg="Start event monitor" Jan 28 01:23:36.567379 containerd[2559]: time="2026-01-28T01:23:36.567375417Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:23:36.567444 containerd[2559]: time="2026-01-28T01:23:36.567383201Z" level=info msg="Start streaming server" Jan 28 01:23:36.567444 containerd[2559]: time="2026-01-28T01:23:36.567391560Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 01:23:36.567444 containerd[2559]: time="2026-01-28T01:23:36.567399462Z" level=info msg="runtime interface starting up..." Jan 28 01:23:36.567444 containerd[2559]: time="2026-01-28T01:23:36.567405467Z" level=info msg="starting plugins..." Jan 28 01:23:36.567444 containerd[2559]: time="2026-01-28T01:23:36.567416472Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 01:23:36.567649 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:23:36.569146 containerd[2559]: time="2026-01-28T01:23:36.567766799Z" level=info msg="containerd successfully booted in 0.454755s" Jan 28 01:23:36.569491 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:23:36.571512 systemd[1]: Startup finished in 4.190s (kernel) + 10.474s (initrd) + 24.165s (userspace) = 38.830s. Jan 28 01:23:36.610041 waagent[2655]: 2026-01-28T01:23:36.609999Z INFO Daemon Daemon Successfully mounted dvd Jan 28 01:23:36.638123 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 28 01:23:36.641873 waagent[2655]: 2026-01-28T01:23:36.640244Z INFO Daemon Daemon Detect protocol endpoint Jan 28 01:23:36.641873 waagent[2655]: 2026-01-28T01:23:36.640389Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 01:23:36.641873 waagent[2655]: 2026-01-28T01:23:36.640643Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 28 01:23:36.641873 waagent[2655]: 2026-01-28T01:23:36.640917Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 28 01:23:36.641873 waagent[2655]: 2026-01-28T01:23:36.641336Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 28 01:23:36.641873 waagent[2655]: 2026-01-28T01:23:36.641712Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 28 01:23:36.659221 waagent[2655]: 2026-01-28T01:23:36.659192Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 28 01:23:36.661411 waagent[2655]: 2026-01-28T01:23:36.660954Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 28 01:23:36.661411 waagent[2655]: 2026-01-28T01:23:36.661199Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 28 01:23:36.740021 waagent[2655]: 2026-01-28T01:23:36.739938Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 28 01:23:36.742223 waagent[2655]: 2026-01-28T01:23:36.741824Z INFO Daemon Daemon Forcing an update of the goal state. 
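With containerd now serving on /run/containerd/containerd.sock and containerd.service started, the runtime can be checked directly over that socket. A quick verification sketch, assuming the ctr and crictl client tools are available on the host:

  # Query the daemon over its gRPC socket
  ctr --address /run/containerd/containerd.sock version
  # Ask the CRI plugin for its status (this will also report the missing CNI config noted earlier)
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock info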
Jan 28 01:23:36.746244 waagent[2655]: 2026-01-28T01:23:36.746209Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 01:23:36.759364 waagent[2655]: 2026-01-28T01:23:36.759328Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 28 01:23:36.762891 waagent[2655]: 2026-01-28T01:23:36.759829Z INFO Daemon Jan 28 01:23:36.762891 waagent[2655]: 2026-01-28T01:23:36.760364Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e92f9ba8-cf82-499e-baf4-28d9d31f3267 eTag: 10949537174617192237 source: Fabric] Jan 28 01:23:36.762891 waagent[2655]: 2026-01-28T01:23:36.760824Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 28 01:23:36.762891 waagent[2655]: 2026-01-28T01:23:36.761142Z INFO Daemon Jan 28 01:23:36.762891 waagent[2655]: 2026-01-28T01:23:36.761500Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 28 01:23:36.767897 waagent[2655]: 2026-01-28T01:23:36.767034Z INFO Daemon Daemon Downloading artifacts profile blob Jan 28 01:23:36.837110 kubelet[2689]: E0128 01:23:36.837084 2689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:23:36.838728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:23:36.838850 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:23:36.839800 systemd[1]: kubelet.service: Consumed 790ms CPU time, 257.3M memory peak. Jan 28 01:23:36.840084 login[2661]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:36.845660 waagent[2655]: 2026-01-28T01:23:36.844964Z INFO Daemon Downloaded certificate {'thumbprint': '9A5F3FDC7906C2EB29284FCAC50EC4D481E57E2A', 'hasPrivateKey': True} Jan 28 01:23:36.847960 waagent[2655]: 2026-01-28T01:23:36.847916Z INFO Daemon Fetch goal state completed Jan 28 01:23:36.849488 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:23:36.851436 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:23:36.853538 systemd-logind[2535]: New session 1 of user core. Jan 28 01:23:36.863673 waagent[2655]: 2026-01-28T01:23:36.863643Z INFO Daemon Daemon Starting provisioning Jan 28 01:23:36.863968 waagent[2655]: 2026-01-28T01:23:36.863788Z INFO Daemon Daemon Handle ovf-env.xml. Jan 28 01:23:36.863968 waagent[2655]: 2026-01-28T01:23:36.863989Z INFO Daemon Daemon Set hostname [ci-4593.0.0-n-2270f1152e] Jan 28 01:23:36.877522 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:23:36.878792 systemd[1]: Starting user@500.service - User Manager for UID 500... 
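The kubelet exit above is caused by the missing /var/lib/kubelet/config.yaml; on a kubeadm-managed node that file is written during "kubeadm init" or "kubeadm join", so the failure and the restart loop that follows are normal until the node is joined. For illustration only, a minimal KubeletConfiguration of the kind that ends up at that path might look like the sketch below; the field values are assumptions, not data from this machine, and containerRuntimeEndpoint is only accepted by recent kubelet releases.

  cat >/var/lib/kubelet/config.yaml <<'EOF'
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # Matches the SystemdCgroup=true runc option shown in the containerd CRI config above
  cgroupDriver: systemd
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  staticPodPath: /etc/kubernetes/manifests
  EOF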
Jan 28 01:23:36.882283 waagent[2655]: 2026-01-28T01:23:36.882244Z INFO Daemon Daemon Publish hostname [ci-4593.0.0-n-2270f1152e] Jan 28 01:23:36.883911 waagent[2655]: 2026-01-28T01:23:36.882500Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 28 01:23:36.883911 waagent[2655]: 2026-01-28T01:23:36.882790Z INFO Daemon Daemon Primary interface is [eth0] Jan 28 01:23:36.890660 systemd-networkd[2200]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 01:23:36.890667 systemd-networkd[2200]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:23:36.890715 systemd-networkd[2200]: eth0: DHCP lease lost Jan 28 01:23:36.894613 (systemd)[2723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:36.896326 systemd-logind[2535]: New session 2 of user core. Jan 28 01:23:36.901971 waagent[2655]: 2026-01-28T01:23:36.901930Z INFO Daemon Daemon Create user account if not exists Jan 28 01:23:36.902251 waagent[2655]: 2026-01-28T01:23:36.902115Z INFO Daemon Daemon User core already exists, skip useradd Jan 28 01:23:36.902251 waagent[2655]: 2026-01-28T01:23:36.902259Z INFO Daemon Daemon Configure sudoer Jan 28 01:23:36.905891 systemd-networkd[2200]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 28 01:23:36.907117 waagent[2655]: 2026-01-28T01:23:36.907027Z INFO Daemon Daemon Configure sshd Jan 28 01:23:36.910903 waagent[2655]: 2026-01-28T01:23:36.910872Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 28 01:23:36.913849 waagent[2655]: 2026-01-28T01:23:36.913811Z INFO Daemon Daemon Deploy ssh public key. Jan 28 01:23:36.931947 login[2662]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:36.937926 systemd-logind[2535]: New session 3 of user core. Jan 28 01:23:37.064354 systemd[2723]: Queued start job for default target default.target. Jan 28 01:23:37.070440 systemd[2723]: Created slice app.slice - User Application Slice. Jan 28 01:23:37.070467 systemd[2723]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 01:23:37.070479 systemd[2723]: Reached target paths.target - Paths. Jan 28 01:23:37.070514 systemd[2723]: Reached target timers.target - Timers. Jan 28 01:23:37.071304 systemd[2723]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:23:37.073997 systemd[2723]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 01:23:37.086141 systemd[2723]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:23:37.086390 systemd[2723]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 01:23:37.087015 systemd[2723]: Reached target sockets.target - Sockets. Jan 28 01:23:37.087121 systemd[2723]: Reached target basic.target - Basic System. Jan 28 01:23:37.087164 systemd[2723]: Reached target default.target - Main User Target. Jan 28 01:23:37.087184 systemd[2723]: Startup finished in 187ms. Jan 28 01:23:37.087289 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 01:23:37.094071 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:23:37.094957 systemd[1]: Started session-3.scope - Session 3 of User core. 
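The "Configure sshd" step above reports that the agent added a snippet disabling password-based SSH authentication and enabling client keep-alive probing. The log does not show the file contents, so the following is only a sketch of what such a snippet typically contains; the path and values are assumptions.

  # Path assumed; the agent edits the system sshd configuration
  cat >>/etc/ssh/sshd_config <<'EOF'
  PasswordAuthentication no
  ChallengeResponseAuthentication no
  ClientAliveInterval 180
  EOF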
Jan 28 01:23:37.991762 waagent[2655]: 2026-01-28T01:23:37.991704Z INFO Daemon Daemon Provisioning complete Jan 28 01:23:38.001450 waagent[2655]: 2026-01-28T01:23:38.001417Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 28 01:23:38.001743 waagent[2655]: 2026-01-28T01:23:38.001613Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 28 01:23:38.004078 waagent[2655]: 2026-01-28T01:23:38.001786Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 28 01:23:38.103987 waagent[2763]: 2026-01-28T01:23:38.103927Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 28 01:23:38.104199 waagent[2763]: 2026-01-28T01:23:38.104019Z INFO ExtHandler ExtHandler OS: flatcar 4593.0.0 Jan 28 01:23:38.104199 waagent[2763]: 2026-01-28T01:23:38.104067Z INFO ExtHandler ExtHandler Python: 3.12.11 Jan 28 01:23:38.104199 waagent[2763]: 2026-01-28T01:23:38.104105Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 28 01:23:38.150640 waagent[2763]: 2026-01-28T01:23:38.150592Z INFO ExtHandler ExtHandler Distro: flatcar-4593.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.12.11; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 28 01:23:38.150785 waagent[2763]: 2026-01-28T01:23:38.150758Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:23:38.150843 waagent[2763]: 2026-01-28T01:23:38.150818Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:23:38.157653 waagent[2763]: 2026-01-28T01:23:38.157591Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 01:23:38.170966 waagent[2763]: 2026-01-28T01:23:38.170931Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 28 01:23:38.171328 waagent[2763]: 2026-01-28T01:23:38.171293Z INFO ExtHandler Jan 28 01:23:38.171371 waagent[2763]: 2026-01-28T01:23:38.171356Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 653c5c02-2ab1-49f5-a8fd-051302ad5a07 eTag: 10949537174617192237 source: Fabric] Jan 28 01:23:38.171593 waagent[2763]: 2026-01-28T01:23:38.171563Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 28 01:23:38.171981 waagent[2763]: 2026-01-28T01:23:38.171946Z INFO ExtHandler Jan 28 01:23:38.172027 waagent[2763]: 2026-01-28T01:23:38.172001Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 28 01:23:38.175136 waagent[2763]: 2026-01-28T01:23:38.175105Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 28 01:23:38.236295 waagent[2763]: 2026-01-28T01:23:38.236247Z INFO ExtHandler Downloaded certificate {'thumbprint': '9A5F3FDC7906C2EB29284FCAC50EC4D481E57E2A', 'hasPrivateKey': True} Jan 28 01:23:38.236627 waagent[2763]: 2026-01-28T01:23:38.236596Z INFO ExtHandler Fetch goal state completed Jan 28 01:23:38.254335 waagent[2763]: 2026-01-28T01:23:38.254263Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 28 01:23:38.258319 waagent[2763]: 2026-01-28T01:23:38.258277Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2763 Jan 28 01:23:38.258429 waagent[2763]: 2026-01-28T01:23:38.258407Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 28 01:23:38.258643 waagent[2763]: 2026-01-28T01:23:38.258622Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 28 01:23:38.259705 waagent[2763]: 2026-01-28T01:23:38.259671Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4593.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 28 01:23:38.260039 waagent[2763]: 2026-01-28T01:23:38.260009Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4593.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 28 01:23:38.260154 waagent[2763]: 2026-01-28T01:23:38.260129Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 28 01:23:38.260577 waagent[2763]: 2026-01-28T01:23:38.260551Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 28 01:23:38.279429 waagent[2763]: 2026-01-28T01:23:38.279405Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 28 01:23:38.279550 waagent[2763]: 2026-01-28T01:23:38.279529Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 28 01:23:38.284898 waagent[2763]: 2026-01-28T01:23:38.284723Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 28 01:23:38.289738 systemd[1]: Reload requested from client PID 2778 ('systemctl') (unit waagent.service)... Jan 28 01:23:38.289752 systemd[1]: Reloading... Jan 28 01:23:38.362881 zram_generator::config[2820]: No configuration found. Jan 28 01:23:38.527188 systemd[1]: Reloading finished in 237 ms. Jan 28 01:23:38.541586 waagent[2763]: 2026-01-28T01:23:38.541041Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 28 01:23:38.541586 waagent[2763]: 2026-01-28T01:23:38.541137Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 28 01:23:38.664028 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#97 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jan 28 01:23:39.126693 waagent[2763]: 2026-01-28T01:23:39.126628Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
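The two "not processing the operation" messages above correspond to agent auto-update being switched off in the agent configuration. A sketch of how to confirm that, assuming these are the /etc/waagent.conf keys the log messages refer to; the commented lines show the output the log implies rather than output captured from this host.

  grep -E '^AutoUpdate\.' /etc/waagent.conf
  # AutoUpdate.Enabled=n
  # AutoUpdate.UpdateToLatestVersion=n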
Jan 28 01:23:39.127034 waagent[2763]: 2026-01-28T01:23:39.126984Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 28 01:23:39.127674 waagent[2763]: 2026-01-28T01:23:39.127644Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 28 01:23:39.127921 waagent[2763]: 2026-01-28T01:23:39.127891Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 28 01:23:39.128122 waagent[2763]: 2026-01-28T01:23:39.128055Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:23:39.128205 waagent[2763]: 2026-01-28T01:23:39.128184Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 01:23:39.128273 waagent[2763]: 2026-01-28T01:23:39.128243Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:23:39.128336 waagent[2763]: 2026-01-28T01:23:39.128316Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 01:23:39.128534 waagent[2763]: 2026-01-28T01:23:39.128513Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 28 01:23:39.128715 waagent[2763]: 2026-01-28T01:23:39.128679Z INFO EnvHandler ExtHandler Configure routes Jan 28 01:23:39.128765 waagent[2763]: 2026-01-28T01:23:39.128742Z INFO EnvHandler ExtHandler Gateway:None Jan 28 01:23:39.128906 waagent[2763]: 2026-01-28T01:23:39.128853Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 28 01:23:39.129050 waagent[2763]: 2026-01-28T01:23:39.129018Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 28 01:23:39.129302 waagent[2763]: 2026-01-28T01:23:39.129277Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 28 01:23:39.129376 waagent[2763]: 2026-01-28T01:23:39.129359Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 28 01:23:39.129501 waagent[2763]: 2026-01-28T01:23:39.129467Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 28 01:23:39.129501 waagent[2763]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 28 01:23:39.129501 waagent[2763]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 28 01:23:39.129501 waagent[2763]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 28 01:23:39.129501 waagent[2763]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 28 01:23:39.129501 waagent[2763]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 01:23:39.129501 waagent[2763]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 01:23:39.129693 waagent[2763]: 2026-01-28T01:23:39.129670Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
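The routing table dumped from /proc/net/route above uses little-endian hexadecimal addresses. A small bash helper makes the entries readable; the decoded values below follow directly from the hex in the log.

  # Convert a little-endian /proc/net/route hex address to a dotted quad
  decode() { local h=$1; echo "$((16#${h:6:2})).$((16#${h:4:2})).$((16#${h:2:2})).$((16#${h:0:2}))"; }
  decode 0108C80A   # 10.200.8.1      (default gateway)
  decode 0008C80A   # 10.200.8.0      (on-link subnet; mask 00FFFFFF = 255.255.255.0)
  decode 10813FA8   # 168.63.129.16   (Azure wire server host route)
  decode FEA9FEA9   # 169.254.169.254 (instance metadata service host route)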
Jan 28 01:23:39.129874 waagent[2763]: 2026-01-28T01:23:39.129797Z INFO EnvHandler ExtHandler Routes:None Jan 28 01:23:39.136221 waagent[2763]: 2026-01-28T01:23:39.136190Z INFO ExtHandler ExtHandler Jan 28 01:23:39.136280 waagent[2763]: 2026-01-28T01:23:39.136255Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 24b650d7-ce76-4115-b0b5-e14d1ae31de7 correlation d360e74d-93e2-4792-9eb6-7c85e3d49f7b created: 2026-01-28T01:22:38.249664Z] Jan 28 01:23:39.136503 waagent[2763]: 2026-01-28T01:23:39.136480Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 28 01:23:39.136902 waagent[2763]: 2026-01-28T01:23:39.136851Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 28 01:23:39.162056 waagent[2763]: 2026-01-28T01:23:39.162014Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 28 01:23:39.162056 waagent[2763]: Try `iptables -h' or 'iptables --help' for more information.) Jan 28 01:23:39.162329 waagent[2763]: 2026-01-28T01:23:39.162302Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 22DB78E2-D90A-47A0-A6D4-D003E23BE7D5;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 28 01:23:39.177673 waagent[2763]: 2026-01-28T01:23:39.177628Z INFO MonitorHandler ExtHandler Network interfaces: Jan 28 01:23:39.177673 waagent[2763]: Executing ['ip', '-a', '-o', 'link']: Jan 28 01:23:39.177673 waagent[2763]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 28 01:23:39.177673 waagent[2763]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:5c:ef brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7c1e52345cef Jan 28 01:23:39.177673 waagent[2763]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:34:5c:ef brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 28 01:23:39.177673 waagent[2763]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 28 01:23:39.177673 waagent[2763]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 28 01:23:39.177673 waagent[2763]: 2: eth0 inet 10.200.8.14/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 28 01:23:39.177673 waagent[2763]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 28 01:23:39.177673 waagent[2763]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 28 01:23:39.177673 waagent[2763]: 2: eth0 inet6 fe80::7e1e:52ff:fe34:5cef/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 28 01:23:39.247249 waagent[2763]: 2026-01-28T01:23:39.247205Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 28 01:23:39.247249 waagent[2763]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:23:39.247249 waagent[2763]: pkts bytes target prot opt in out source destination Jan 28 01:23:39.247249 waagent[2763]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:23:39.247249 waagent[2763]: pkts bytes target prot opt in out source destination Jan 28 01:23:39.247249 waagent[2763]: Chain OUTPUT (policy ACCEPT 0 
packets, 0 bytes) Jan 28 01:23:39.247249 waagent[2763]: pkts bytes target prot opt in out source destination Jan 28 01:23:39.247249 waagent[2763]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 01:23:39.247249 waagent[2763]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 01:23:39.247249 waagent[2763]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 01:23:39.249580 waagent[2763]: 2026-01-28T01:23:39.249539Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 28 01:23:39.249580 waagent[2763]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:23:39.249580 waagent[2763]: pkts bytes target prot opt in out source destination Jan 28 01:23:39.249580 waagent[2763]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:23:39.249580 waagent[2763]: pkts bytes target prot opt in out source destination Jan 28 01:23:39.249580 waagent[2763]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 01:23:39.249580 waagent[2763]: pkts bytes target prot opt in out source destination Jan 28 01:23:39.249580 waagent[2763]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 01:23:39.249580 waagent[2763]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 01:23:39.249580 waagent[2763]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 01:23:47.089623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:23:47.090999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:23:47.511733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:23:47.521120 (kubelet)[2918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:23:47.554795 kubelet[2918]: E0128 01:23:47.554768 2918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:23:47.557098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:23:47.557220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:23:47.557572 systemd[1]: kubelet.service: Consumed 124ms CPU time, 110.8M memory peak. Jan 28 01:23:54.445224 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:23:54.448152 systemd[1]: Started sshd@0-10.200.8.14:22-10.200.16.10:46972.service - OpenSSH per-connection server daemon (10.200.16.10:46972). Jan 28 01:23:55.111105 sshd[2926]: Accepted publickey for core from 10.200.16.10 port 46972 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:23:55.112240 sshd-session[2926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:55.116615 systemd-logind[2535]: New session 4 of user core. Jan 28 01:23:55.123039 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:23:55.520192 systemd[1]: Started sshd@1-10.200.8.14:22-10.200.16.10:46984.service - OpenSSH per-connection server daemon (10.200.16.10:46984). 
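The "Created firewall rules for the Azure Fabric" listing above (ACCEPT DNS to 168.63.129.16, ACCEPT root-owned traffic to it, DROP other new connections) corresponds roughly to the iptables commands sketched below. The listing does not name the table, though the agent's earlier failed query suggests it works against the security table, so treat this as an approximation rather than the agent's exact invocation.

  WIRESERVER=168.63.129.16
  # Allow DNS lookups against the wire server
  iptables -w -A OUTPUT -d "$WIRESERVER" -p tcp --dport 53 -j ACCEPT
  # Allow root-owned (agent) traffic to the wire server
  iptables -w -A OUTPUT -d "$WIRESERVER" -p tcp -m owner --uid-owner 0 -j ACCEPT
  # Drop any other new connection attempts to it
  iptables -w -A OUTPUT -d "$WIRESERVER" -p tcp -m conntrack --ctstate INVALID,NEW -j DROP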
Jan 28 01:23:56.049386 sshd[2933]: Accepted publickey for core from 10.200.16.10 port 46984 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:23:56.050271 sshd-session[2933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:56.054194 systemd-logind[2535]: New session 5 of user core. Jan 28 01:23:56.056984 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:23:56.351554 sshd[2937]: Connection closed by 10.200.16.10 port 46984 Jan 28 01:23:56.352046 sshd-session[2933]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:56.355334 systemd[1]: sshd@1-10.200.8.14:22-10.200.16.10:46984.service: Deactivated successfully. Jan 28 01:23:56.356646 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:23:56.357283 systemd-logind[2535]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:23:56.358427 systemd-logind[2535]: Removed session 5. Jan 28 01:23:56.467176 systemd[1]: Started sshd@2-10.200.8.14:22-10.200.16.10:47000.service - OpenSSH per-connection server daemon (10.200.16.10:47000). Jan 28 01:23:57.001787 sshd[2943]: Accepted publickey for core from 10.200.16.10 port 47000 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:23:57.002823 sshd-session[2943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:57.006876 systemd-logind[2535]: New session 6 of user core. Jan 28 01:23:57.014006 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:23:57.302854 sshd[2947]: Connection closed by 10.200.16.10 port 47000 Jan 28 01:23:57.303419 sshd-session[2943]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:57.306501 systemd-logind[2535]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:23:57.306846 systemd[1]: sshd@2-10.200.8.14:22-10.200.16.10:47000.service: Deactivated successfully. Jan 28 01:23:57.308178 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:23:57.309565 systemd-logind[2535]: Removed session 6. Jan 28 01:23:57.424435 systemd[1]: Started sshd@3-10.200.8.14:22-10.200.16.10:47012.service - OpenSSH per-connection server daemon (10.200.16.10:47012). Jan 28 01:23:57.721744 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:23:57.723473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:23:57.957677 sshd[2953]: Accepted publickey for core from 10.200.16.10 port 47012 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:23:57.958630 sshd-session[2953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:57.962609 systemd-logind[2535]: New session 7 of user core. Jan 28 01:23:57.968002 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:23:58.199665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
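The "Scheduled restart job" messages above, arriving roughly ten seconds after each kubelet failure, match a unit configured to restart unconditionally with a short delay. A sketch of the kind of drop-in that produces this behaviour; the path and values are assumed, not read from this image's unit file, which already behaves this way.

  # Hypothetical drop-in path shown for illustration
  cat >/etc/systemd/system/kubelet.service.d/10-restart.conf <<'EOF'
  [Service]
  Restart=always
  RestartSec=10
  EOF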
Jan 28 01:23:58.205114 (kubelet)[2967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:23:58.238287 kubelet[2967]: E0128 01:23:58.238260 2967 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:23:58.239772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:23:58.239957 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:23:58.240342 systemd[1]: kubelet.service: Consumed 120ms CPU time, 110.3M memory peak. Jan 28 01:23:58.260382 sshd[2960]: Connection closed by 10.200.16.10 port 47012 Jan 28 01:23:58.261840 sshd-session[2953]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:58.264459 systemd[1]: sshd@3-10.200.8.14:22-10.200.16.10:47012.service: Deactivated successfully. Jan 28 01:23:58.265796 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:23:58.266454 systemd-logind[2535]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:23:58.267479 systemd-logind[2535]: Removed session 7. Jan 28 01:23:58.373079 systemd[1]: Started sshd@4-10.200.8.14:22-10.200.16.10:47024.service - OpenSSH per-connection server daemon (10.200.16.10:47024). Jan 28 01:23:58.662314 chronyd[2514]: Selected source PHC0 Jan 28 01:23:58.904514 sshd[2978]: Accepted publickey for core from 10.200.16.10 port 47024 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:23:58.905465 sshd-session[2978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:23:58.908934 systemd-logind[2535]: New session 8 of user core. Jan 28 01:23:58.913977 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 01:23:59.289328 sudo[2983]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:23:59.289567 sudo[2983]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:23:59.301419 sudo[2983]: pam_unix(sudo:session): session closed for user root Jan 28 01:23:59.400616 sshd[2982]: Connection closed by 10.200.16.10 port 47024 Jan 28 01:23:59.401127 sshd-session[2978]: pam_unix(sshd:session): session closed for user core Jan 28 01:23:59.404182 systemd[1]: sshd@4-10.200.8.14:22-10.200.16.10:47024.service: Deactivated successfully. Jan 28 01:23:59.405798 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:23:59.406440 systemd-logind[2535]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:23:59.407434 systemd-logind[2535]: Removed session 8. Jan 28 01:23:59.521509 systemd[1]: Started sshd@5-10.200.8.14:22-10.200.16.10:47040.service - OpenSSH per-connection server daemon (10.200.16.10:47040). Jan 28 01:24:00.063143 sshd[2990]: Accepted publickey for core from 10.200.16.10 port 47040 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:24:00.064101 sshd-session[2990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:00.068243 systemd-logind[2535]: New session 9 of user core. Jan 28 01:24:00.081013 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 28 01:24:00.266546 sudo[2996]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:24:00.266783 sudo[2996]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:24:00.270351 sudo[2996]: pam_unix(sudo:session): session closed for user root Jan 28 01:24:00.274920 sudo[2995]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 01:24:00.275128 sudo[2995]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:24:00.280609 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:24:00.309000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:24:00.311017 augenrules[3020]: No rules Jan 28 01:24:00.309000 audit[3020]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5840e630 a2=420 a3=0 items=0 ppid=3001 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.313220 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:24:00.314204 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:24:00.316285 sudo[2995]: pam_unix(sudo:session): session closed for user root Jan 28 01:24:00.317479 kernel: audit: type=1305 audit(1769563440.309:243): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 01:24:00.317521 kernel: audit: type=1300 audit(1769563440.309:243): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5840e630 a2=420 a3=0 items=0 ppid=3001 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:00.319133 kernel: audit: type=1327 audit(1769563440.309:243): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:24:00.309000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 01:24:00.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:00.321665 kernel: audit: type=1130 audit(1769563440.311:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:00.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:00.324049 kernel: audit: type=1131 audit(1769563440.311:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:24:00.326774 kernel: audit: type=1106 audit(1769563440.311:246): pid=2995 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:24:00.311000 audit[2995]: USER_END pid=2995 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:24:00.311000 audit[2995]: CRED_DISP pid=2995 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:24:00.329870 kernel: audit: type=1104 audit(1769563440.311:247): pid=2995 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:24:00.415519 sshd[2994]: Connection closed by 10.200.16.10 port 47040 Jan 28 01:24:00.415985 sshd-session[2990]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:00.416000 audit[2990]: USER_END pid=2990 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:00.421334 systemd[1]: sshd@5-10.200.8.14:22-10.200.16.10:47040.service: Deactivated successfully. Jan 28 01:24:00.421884 kernel: audit: type=1106 audit(1769563440.416:248): pid=2990 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:00.416000 audit[2990]: CRED_DISP pid=2990 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:00.422658 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:24:00.423602 systemd-logind[2535]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:24:00.425745 systemd-logind[2535]: Removed session 9. Jan 28 01:24:00.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.14:22-10.200.16.10:47040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:00.429401 kernel: audit: type=1104 audit(1769563440.416:249): pid=2990 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:00.429432 kernel: audit: type=1131 audit(1769563440.420:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.14:22-10.200.16.10:47040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:24:00.528738 systemd[1]: Started sshd@6-10.200.8.14:22-10.200.16.10:32992.service - OpenSSH per-connection server daemon (10.200.16.10:32992). Jan 28 01:24:00.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.14:22-10.200.16.10:32992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:01.053000 audit[3029]: USER_ACCT pid=3029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:01.054477 sshd[3029]: Accepted publickey for core from 10.200.16.10 port 32992 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:24:01.054000 audit[3029]: CRED_ACQ pid=3029 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:01.054000 audit[3029]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2b4982b0 a2=3 a3=0 items=0 ppid=1 pid=3029 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:01.054000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:24:01.055536 sshd-session[3029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:24:01.058961 systemd-logind[2535]: New session 10 of user core. Jan 28 01:24:01.068008 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 01:24:01.069000 audit[3029]: USER_START pid=3029 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:01.070000 audit[3033]: CRED_ACQ pid=3033 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:01.256000 audit[3034]: USER_ACCT pid=3034 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:24:01.257272 sudo[3034]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:24:01.256000 audit[3034]: CRED_REFR pid=3034 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:24:01.256000 audit[3034]: USER_START pid=3034 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:24:01.257524 sudo[3034]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:24:03.066560 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:24:03.083110 (dockerd)[3054]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:24:04.221179 dockerd[3054]: time="2026-01-28T01:24:04.221121835Z" level=info msg="Starting up" Jan 28 01:24:04.221771 dockerd[3054]: time="2026-01-28T01:24:04.221748103Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 01:24:04.230372 dockerd[3054]: time="2026-01-28T01:24:04.230340846Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 01:24:04.292314 dockerd[3054]: time="2026-01-28T01:24:04.292176619Z" level=info msg="Loading containers: start." Jan 28 01:24:04.305368 kernel: Initializing XFRM netlink socket Jan 28 01:24:04.338000 audit[3100]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.338000 audit[3100]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd4d4239f0 a2=0 a3=0 items=0 ppid=3054 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.338000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:24:04.340000 audit[3102]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3102 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.340000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffa0f27630 a2=0 a3=0 items=0 ppid=3054 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:24:04.341000 audit[3104]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.341000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0ef0d430 a2=0 a3=0 items=0 ppid=3054 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 01:24:04.343000 audit[3106]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.343000 audit[3106]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb4ec14e0 a2=0 a3=0 items=0 ppid=3054 pid=3106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.343000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 01:24:04.344000 audit[3108]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.344000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd47e38a60 a2=0 a3=0 items=0 ppid=3054 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.344000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 01:24:04.346000 audit[3110]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.346000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffdf8f7b20 a2=0 a3=0 items=0 ppid=3054 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:24:04.347000 audit[3112]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.347000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffde0f94ec0 a2=0 a3=0 items=0 ppid=3054 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.347000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:24:04.349000 audit[3114]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.349000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffea8cc9a90 a2=0 a3=0 items=0 ppid=3054 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.349000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 01:24:04.384000 audit[3117]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.384000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffde21cf750 a2=0 a3=0 items=0 ppid=3054 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.384000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 28 01:24:04.386000 audit[3119]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.386000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcf4548030 a2=0 a3=0 items=0 ppid=3054 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.386000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 01:24:04.388000 audit[3121]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.388000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdcb8d5840 a2=0 a3=0 items=0 ppid=3054 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.388000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 01:24:04.390000 audit[3123]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.390000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd7f591030 a2=0 a3=0 items=0 ppid=3054 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.390000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:24:04.391000 audit[3125]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.391000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff5e39c8d0 a2=0 a3=0 items=0 ppid=3054 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.391000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 01:24:04.447000 audit[3155]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.447000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff47286d00 a2=0 a3=0 items=0 ppid=3054 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.447000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 01:24:04.449000 audit[3157]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.449000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcc31b6070 a2=0 a3=0 items=0 ppid=3054 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 01:24:04.450000 audit[3159]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.450000 audit[3159]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff42cf1490 a2=0 a3=0 items=0 ppid=3054 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.450000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 01:24:04.452000 audit[3161]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.452000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfd15c580 a2=0 a3=0 items=0 ppid=3054 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.452000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 01:24:04.453000 audit[3163]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.453000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcf4a88270 a2=0 a3=0 items=0 ppid=3054 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.453000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 01:24:04.455000 audit[3165]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.455000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdb66d0c60 a2=0 a3=0 items=0 ppid=3054 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.455000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:24:04.456000 audit[3167]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3167 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.456000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd32f03cb0 a2=0 a3=0 items=0 ppid=3054 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.456000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:24:04.458000 audit[3169]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.458000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdf7acb280 a2=0 a3=0 items=0 ppid=3054 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.458000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 01:24:04.460000 audit[3171]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.460000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc9884c0c0 a2=0 a3=0 items=0 ppid=3054 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.460000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 28 01:24:04.461000 audit[3173]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.461000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffdda13350 a2=0 a3=0 items=0 ppid=3054 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.461000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 01:24:04.463000 audit[3175]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.463000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc25ee1450 a2=0 a3=0 items=0 ppid=3054 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.463000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 01:24:04.465000 audit[3177]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 28 01:24:04.465000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe90534a60 a2=0 a3=0 items=0 ppid=3054 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.465000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 01:24:04.467000 audit[3179]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.467000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffff6eb19f0 a2=0 a3=0 items=0 ppid=3054 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 01:24:04.471000 audit[3184]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.471000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff24507c00 a2=0 a3=0 items=0 ppid=3054 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.471000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 01:24:04.473000 audit[3186]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.473000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffef3903210 a2=0 a3=0 items=0 ppid=3054 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.473000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 01:24:04.475000 audit[3188]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.475000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd1a959810 a2=0 a3=0 items=0 ppid=3054 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.475000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 01:24:04.476000 audit[3190]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.476000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe900d5850 a2=0 a3=0 items=0 ppid=3054 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.476000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 01:24:04.478000 audit[3192]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.478000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc67c728f0 a2=0 a3=0 items=0 ppid=3054 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.478000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 01:24:04.480000 audit[3194]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:04.480000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd056cb470 a2=0 a3=0 items=0 ppid=3054 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 01:24:04.530000 audit[3199]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.530000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffff93c6230 a2=0 a3=0 items=0 ppid=3054 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.530000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 01:24:04.532000 audit[3201]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.532000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffbbfd5950 a2=0 a3=0 items=0 ppid=3054 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.532000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 01:24:04.539000 audit[3209]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.539000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffcf6109930 a2=0 a3=0 items=0 ppid=3054 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
28 01:24:04.539000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 01:24:04.543000 audit[3214]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.543000 audit[3214]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd1a82c1b0 a2=0 a3=0 items=0 ppid=3054 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.543000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 01:24:04.545000 audit[3216]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.545000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe763f6880 a2=0 a3=0 items=0 ppid=3054 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.545000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 01:24:04.547000 audit[3218]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.547000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc17f9af70 a2=0 a3=0 items=0 ppid=3054 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.547000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 01:24:04.549000 audit[3220]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.549000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffdb9a76c0 a2=0 a3=0 items=0 ppid=3054 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.549000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 01:24:04.551000 audit[3222]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:04.551000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdbd8d25c0 a2=0 a3=0 items=0 ppid=3054 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:04.551000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 01:24:04.552463 systemd-networkd[2200]: docker0: Link UP Jan 28 01:24:04.569058 dockerd[3054]: time="2026-01-28T01:24:04.569030631Z" level=info msg="Loading containers: done." Jan 28 01:24:04.579744 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck728436538-merged.mount: Deactivated successfully. Jan 28 01:24:04.635453 dockerd[3054]: time="2026-01-28T01:24:04.635418329Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:24:04.635565 dockerd[3054]: time="2026-01-28T01:24:04.635487510Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 01:24:04.635565 dockerd[3054]: time="2026-01-28T01:24:04.635553235Z" level=info msg="Initializing buildkit" Jan 28 01:24:04.675059 dockerd[3054]: time="2026-01-28T01:24:04.675036176Z" level=info msg="Completed buildkit initialization" Jan 28 01:24:04.677425 dockerd[3054]: time="2026-01-28T01:24:04.677400973Z" level=info msg="Daemon has completed initialization" Jan 28 01:24:04.677724 dockerd[3054]: time="2026-01-28T01:24:04.677437547Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:24:04.677626 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:24:04.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:05.781468 containerd[2559]: time="2026-01-28T01:24:05.781430436Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 28 01:24:07.033889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829061068.mount: Deactivated successfully. 
Jan 28 01:24:07.879106 containerd[2559]: time="2026-01-28T01:24:07.879065406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:07.881401 containerd[2559]: time="2026-01-28T01:24:07.881292496Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25430759" Jan 28 01:24:07.884116 containerd[2559]: time="2026-01-28T01:24:07.884095682Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:07.887582 containerd[2559]: time="2026-01-28T01:24:07.887558491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:07.888594 containerd[2559]: time="2026-01-28T01:24:07.888158266Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.106694682s" Jan 28 01:24:07.888594 containerd[2559]: time="2026-01-28T01:24:07.888187008Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 28 01:24:07.888881 containerd[2559]: time="2026-01-28T01:24:07.888777665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 28 01:24:08.471731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:24:08.473340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:09.046745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:09.051467 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 28 01:24:09.051545 kernel: audit: type=1130 audit(1769563449.046:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:09.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:09.051953 (kubelet)[3326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:24:09.085488 kubelet[3326]: E0128 01:24:09.085462 3326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:24:09.086929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:24:09.087037 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 01:24:09.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:24:09.087326 systemd[1]: kubelet.service: Consumed 124ms CPU time, 109.7M memory peak. Jan 28 01:24:09.090876 kernel: audit: type=1131 audit(1769563449.086:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:24:09.441121 containerd[2559]: time="2026-01-28T01:24:09.441038388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:09.444323 containerd[2559]: time="2026-01-28T01:24:09.444211862Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 28 01:24:09.448427 containerd[2559]: time="2026-01-28T01:24:09.448406833Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:09.453810 containerd[2559]: time="2026-01-28T01:24:09.453786660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:09.454521 containerd[2559]: time="2026-01-28T01:24:09.454389766Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.565490402s" Jan 28 01:24:09.454521 containerd[2559]: time="2026-01-28T01:24:09.454416013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 28 01:24:09.454939 containerd[2559]: time="2026-01-28T01:24:09.454913396Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 28 01:24:10.545930 containerd[2559]: time="2026-01-28T01:24:10.545885196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:10.548033 containerd[2559]: time="2026-01-28T01:24:10.548004253Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Jan 28 01:24:10.550363 containerd[2559]: time="2026-01-28T01:24:10.550330146Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:10.553587 containerd[2559]: time="2026-01-28T01:24:10.553435130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:10.553978 containerd[2559]: time="2026-01-28T01:24:10.553958801Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" 
with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.099010842s" Jan 28 01:24:10.554012 containerd[2559]: time="2026-01-28T01:24:10.553987503Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 28 01:24:10.554423 containerd[2559]: time="2026-01-28T01:24:10.554384530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 28 01:24:11.445426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836960129.mount: Deactivated successfully. Jan 28 01:24:11.700755 containerd[2559]: time="2026-01-28T01:24:11.700666908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:11.702733 containerd[2559]: time="2026-01-28T01:24:11.702701581Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Jan 28 01:24:11.705150 containerd[2559]: time="2026-01-28T01:24:11.705116476Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:11.708615 containerd[2559]: time="2026-01-28T01:24:11.708578398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:11.708918 containerd[2559]: time="2026-01-28T01:24:11.708867371Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.154448907s" Jan 28 01:24:11.708918 containerd[2559]: time="2026-01-28T01:24:11.708897097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 28 01:24:11.709366 containerd[2559]: time="2026-01-28T01:24:11.709341668Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 28 01:24:12.456043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107580053.mount: Deactivated successfully. 
Jan 28 01:24:13.257561 containerd[2559]: time="2026-01-28T01:24:13.257518293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:13.259656 containerd[2559]: time="2026-01-28T01:24:13.259540111Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568586" Jan 28 01:24:13.261899 containerd[2559]: time="2026-01-28T01:24:13.261878637Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:13.265072 containerd[2559]: time="2026-01-28T01:24:13.265045818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:13.265766 containerd[2559]: time="2026-01-28T01:24:13.265740955Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.556376144s" Jan 28 01:24:13.265812 containerd[2559]: time="2026-01-28T01:24:13.265766121Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 28 01:24:13.266349 containerd[2559]: time="2026-01-28T01:24:13.266330705Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 28 01:24:13.333507 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 28 01:24:13.771935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901718646.mount: Deactivated successfully. 
Jan 28 01:24:13.786052 containerd[2559]: time="2026-01-28T01:24:13.786019032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:13.788043 containerd[2559]: time="2026-01-28T01:24:13.787959956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 28 01:24:13.790177 containerd[2559]: time="2026-01-28T01:24:13.790159221Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:13.793444 containerd[2559]: time="2026-01-28T01:24:13.793410614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:13.793995 containerd[2559]: time="2026-01-28T01:24:13.793732053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 527.376414ms" Jan 28 01:24:13.793995 containerd[2559]: time="2026-01-28T01:24:13.793755816Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 28 01:24:13.794251 containerd[2559]: time="2026-01-28T01:24:13.794228039Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 28 01:24:14.301069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount199291919.mount: Deactivated successfully. Jan 28 01:24:16.845313 containerd[2559]: time="2026-01-28T01:24:16.845265869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:16.849483 containerd[2559]: time="2026-01-28T01:24:16.849361118Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001" Jan 28 01:24:16.851785 containerd[2559]: time="2026-01-28T01:24:16.851762605Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:16.855110 containerd[2559]: time="2026-01-28T01:24:16.855081771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:16.855870 containerd[2559]: time="2026-01-28T01:24:16.855841171Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.061589337s" Jan 28 01:24:16.855927 containerd[2559]: time="2026-01-28T01:24:16.855876902Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 28 01:24:19.221913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 28 01:24:19.225052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:19.379138 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:24:19.379204 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 01:24:19.379434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:19.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:24:19.383877 kernel: audit: type=1130 audit(1769563459.378:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:24:19.384583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:19.406373 systemd[1]: Reload requested from client PID 3492 ('systemctl') (unit session-10.scope)... Jan 28 01:24:19.406383 systemd[1]: Reloading... Jan 28 01:24:19.481875 zram_generator::config[3538]: No configuration found. Jan 28 01:24:19.678951 systemd[1]: Reloading finished in 272 ms. Jan 28 01:24:19.774460 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:24:19.774544 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 01:24:19.774812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:19.774884 systemd[1]: kubelet.service: Consumed 62ms CPU time, 69.7M memory peak. Jan 28 01:24:19.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:24:19.778878 kernel: audit: type=1130 audit(1769563459.773:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 01:24:19.779959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 01:24:19.779000 audit: BPF prog-id=87 op=LOAD Jan 28 01:24:19.782800 kernel: audit: type=1334 audit(1769563459.779:305): prog-id=87 op=LOAD Jan 28 01:24:19.782852 kernel: audit: type=1334 audit(1769563459.779:306): prog-id=88 op=LOAD Jan 28 01:24:19.779000 audit: BPF prog-id=88 op=LOAD Jan 28 01:24:19.779000 audit: BPF prog-id=82 op=UNLOAD Jan 28 01:24:19.785377 kernel: audit: type=1334 audit(1769563459.779:307): prog-id=82 op=UNLOAD Jan 28 01:24:19.785432 kernel: audit: type=1334 audit(1769563459.779:308): prog-id=83 op=UNLOAD Jan 28 01:24:19.779000 audit: BPF prog-id=83 op=UNLOAD Jan 28 01:24:19.786588 kernel: audit: type=1334 audit(1769563459.780:309): prog-id=89 op=LOAD Jan 28 01:24:19.780000 audit: BPF prog-id=89 op=LOAD Jan 28 01:24:19.787796 kernel: audit: type=1334 audit(1769563459.780:310): prog-id=79 op=UNLOAD Jan 28 01:24:19.780000 audit: BPF prog-id=79 op=UNLOAD Jan 28 01:24:19.780000 audit: BPF prog-id=90 op=LOAD Jan 28 01:24:19.790328 kernel: audit: type=1334 audit(1769563459.780:311): prog-id=90 op=LOAD Jan 28 01:24:19.790371 kernel: audit: type=1334 audit(1769563459.780:312): prog-id=91 op=LOAD Jan 28 01:24:19.780000 audit: BPF prog-id=91 op=LOAD Jan 28 01:24:19.780000 audit: BPF prog-id=80 op=UNLOAD Jan 28 01:24:19.780000 audit: BPF prog-id=81 op=UNLOAD Jan 28 01:24:19.784000 audit: BPF prog-id=92 op=LOAD Jan 28 01:24:19.784000 audit: BPF prog-id=72 op=UNLOAD Jan 28 01:24:19.784000 audit: BPF prog-id=93 op=LOAD Jan 28 01:24:19.784000 audit: BPF prog-id=94 op=LOAD Jan 28 01:24:19.784000 audit: BPF prog-id=73 op=UNLOAD Jan 28 01:24:19.784000 audit: BPF prog-id=74 op=UNLOAD Jan 28 01:24:19.788000 audit: BPF prog-id=95 op=LOAD Jan 28 01:24:19.788000 audit: BPF prog-id=78 op=UNLOAD Jan 28 01:24:19.788000 audit: BPF prog-id=96 op=LOAD Jan 28 01:24:19.788000 audit: BPF prog-id=67 op=UNLOAD Jan 28 01:24:19.789000 audit: BPF prog-id=97 op=LOAD Jan 28 01:24:19.789000 audit: BPF prog-id=98 op=LOAD Jan 28 01:24:19.789000 audit: BPF prog-id=68 op=UNLOAD Jan 28 01:24:19.789000 audit: BPF prog-id=69 op=UNLOAD Jan 28 01:24:19.807000 audit: BPF prog-id=99 op=LOAD Jan 28 01:24:19.807000 audit: BPF prog-id=84 op=UNLOAD Jan 28 01:24:19.808000 audit: BPF prog-id=100 op=LOAD Jan 28 01:24:19.808000 audit: BPF prog-id=101 op=LOAD Jan 28 01:24:19.808000 audit: BPF prog-id=85 op=UNLOAD Jan 28 01:24:19.808000 audit: BPF prog-id=86 op=UNLOAD Jan 28 01:24:19.808000 audit: BPF prog-id=102 op=LOAD Jan 28 01:24:19.808000 audit: BPF prog-id=71 op=UNLOAD Jan 28 01:24:19.809000 audit: BPF prog-id=103 op=LOAD Jan 28 01:24:19.809000 audit: BPF prog-id=70 op=UNLOAD Jan 28 01:24:19.810000 audit: BPF prog-id=104 op=LOAD Jan 28 01:24:19.810000 audit: BPF prog-id=75 op=UNLOAD Jan 28 01:24:19.810000 audit: BPF prog-id=105 op=LOAD Jan 28 01:24:19.810000 audit: BPF prog-id=106 op=LOAD Jan 28 01:24:19.810000 audit: BPF prog-id=76 op=UNLOAD Jan 28 01:24:19.810000 audit: BPF prog-id=77 op=UNLOAD Jan 28 01:24:20.264010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:20.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:20.270084 (kubelet)[3609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:24:20.304554 kubelet[3609]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 28 01:24:20.304554 kubelet[3609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:24:20.304771 kubelet[3609]: I0128 01:24:20.304597 3609 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:24:20.584980 kubelet[3609]: I0128 01:24:20.584845 3609 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 01:24:20.584980 kubelet[3609]: I0128 01:24:20.584883 3609 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:24:20.585566 kubelet[3609]: I0128 01:24:20.585549 3609 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 01:24:20.585566 kubelet[3609]: I0128 01:24:20.585565 3609 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:24:20.585771 kubelet[3609]: I0128 01:24:20.585760 3609 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:24:20.594881 kubelet[3609]: E0128 01:24:20.593796 3609 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:24:20.595547 kubelet[3609]: I0128 01:24:20.595529 3609 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:24:20.598484 kubelet[3609]: I0128 01:24:20.598468 3609 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:24:20.600902 kubelet[3609]: I0128 01:24:20.600873 3609 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 01:24:20.601062 kubelet[3609]: I0128 01:24:20.601041 3609 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:24:20.601189 kubelet[3609]: I0128 01:24:20.601060 3609 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4593.0.0-n-2270f1152e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:24:20.601291 kubelet[3609]: I0128 01:24:20.601191 3609 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:24:20.601291 kubelet[3609]: I0128 01:24:20.601199 3609 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 01:24:20.601291 kubelet[3609]: I0128 01:24:20.601267 3609 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 01:24:20.605807 kubelet[3609]: I0128 01:24:20.605788 3609 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:24:20.605968 kubelet[3609]: I0128 01:24:20.605943 3609 kubelet.go:475] "Attempting to sync node with API server" Jan 28 01:24:20.605968 kubelet[3609]: I0128 01:24:20.605968 3609 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:24:20.606032 kubelet[3609]: I0128 01:24:20.605990 3609 kubelet.go:387] "Adding apiserver pod source" Jan 28 01:24:20.606032 kubelet[3609]: I0128 01:24:20.606014 3609 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:24:20.609936 kubelet[3609]: E0128 01:24:20.609580 3609 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:24:20.609936 kubelet[3609]: E0128 01:24:20.609673 3609 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4593.0.0-n-2270f1152e&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:24:20.610134 kubelet[3609]: I0128 01:24:20.610122 3609 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:24:20.610591 kubelet[3609]: I0128 01:24:20.610578 3609 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:24:20.610653 kubelet[3609]: I0128 01:24:20.610647 3609 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 01:24:20.610720 kubelet[3609]: W0128 01:24:20.610715 3609 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:24:20.613894 kubelet[3609]: I0128 01:24:20.613883 3609 server.go:1262] "Started kubelet" Jan 28 01:24:20.614497 kubelet[3609]: I0128 01:24:20.614484 3609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:24:20.618426 kubelet[3609]: E0128 01:24:20.617005 3609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4593.0.0-n-2270f1152e.188ec09785660e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4593.0.0-n-2270f1152e,UID:ci-4593.0.0-n-2270f1152e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4593.0.0-n-2270f1152e,},FirstTimestamp:2026-01-28 01:24:20.613836384 +0000 UTC m=+0.340101149,LastTimestamp:2026-01-28 01:24:20.613836384 +0000 UTC m=+0.340101149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4593.0.0-n-2270f1152e,}" Jan 28 01:24:20.618000 audit[3623]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:20.618000 audit[3623]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe1ee9d560 a2=0 a3=0 items=0 ppid=3609 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.619724 kubelet[3609]: E0128 01:24:20.619676 3609 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:24:20.618000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:24:20.619918 kubelet[3609]: I0128 01:24:20.619812 3609 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:24:20.619000 audit[3624]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3624 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:20.619000 audit[3624]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc28e28720 a2=0 a3=0 items=0 ppid=3609 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:24:20.621136 kubelet[3609]: I0128 01:24:20.621122 3609 server.go:310] "Adding debug handlers to kubelet server" Jan 28 01:24:20.622114 kubelet[3609]: I0128 01:24:20.622101 3609 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 01:24:20.622345 kubelet[3609]: E0128 01:24:20.622333 3609 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4593.0.0-n-2270f1152e\" not found" Jan 28 01:24:20.622000 audit[3627]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:20.622000 audit[3627]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc43c28480 a2=0 a3=0 items=0 ppid=3609 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.622000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:24:20.623793 kubelet[3609]: I0128 01:24:20.623770 3609 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:24:20.623833 kubelet[3609]: I0128 01:24:20.623807 3609 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 01:24:20.623949 kubelet[3609]: I0128 01:24:20.623938 3609 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:24:20.624163 kubelet[3609]: I0128 01:24:20.624149 3609 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:24:20.624815 kubelet[3609]: I0128 01:24:20.624803 3609 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 01:24:20.624925 kubelet[3609]: I0128 01:24:20.624919 3609 reconciler.go:29] "Reconciler: start to sync state" Jan 28 01:24:20.624000 audit[3630]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:20.624000 audit[3630]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffe438cc40 a2=0 a3=0 items=0 ppid=3609 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:24:20.624000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:24:20.626364 kubelet[3609]: E0128 01:24:20.626333 3609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593.0.0-n-2270f1152e?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="200ms" Jan 28 01:24:20.627330 kubelet[3609]: I0128 01:24:20.627317 3609 factory.go:223] Registration of the systemd container factory successfully Jan 28 01:24:20.627952 kubelet[3609]: E0128 01:24:20.627931 3609 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:24:20.628073 kubelet[3609]: I0128 01:24:20.628055 3609 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:24:20.630764 kubelet[3609]: I0128 01:24:20.630747 3609 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:24:20.638613 kubelet[3609]: I0128 01:24:20.638598 3609 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:24:20.638696 kubelet[3609]: I0128 01:24:20.638683 3609 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:24:20.638696 kubelet[3609]: I0128 01:24:20.638696 3609 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:24:20.642601 kubelet[3609]: I0128 01:24:20.642145 3609 policy_none.go:49] "None policy: Start" Jan 28 01:24:20.642601 kubelet[3609]: I0128 01:24:20.642161 3609 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 01:24:20.642601 kubelet[3609]: I0128 01:24:20.642170 3609 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 01:24:20.643000 audit[3633]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3633 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:20.643000 audit[3633]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe53083bf0 a2=0 a3=0 items=0 ppid=3609 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 28 01:24:20.644958 kubelet[3609]: I0128 01:24:20.644941 3609 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 28 01:24:20.644000 audit[3635]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:20.644000 audit[3635]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe98a44ce0 a2=0 a3=0 items=0 ppid=3609 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 01:24:20.646134 kubelet[3609]: I0128 01:24:20.646099 3609 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 28 01:24:20.646134 kubelet[3609]: I0128 01:24:20.646111 3609 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 01:24:20.646184 kubelet[3609]: I0128 01:24:20.646136 3609 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 01:24:20.646184 kubelet[3609]: E0128 01:24:20.646165 3609 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:24:20.645000 audit[3636]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:20.645000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb1db95d0 a2=0 a3=0 items=0 ppid=3609 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.645000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:24:20.646000 audit[3637]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3637 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:20.647750 kubelet[3609]: I0128 01:24:20.647074 3609 policy_none.go:47] "Start" Jan 28 01:24:20.646000 audit[3637]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc70176110 a2=0 a3=0 items=0 ppid=3609 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.646000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:24:20.647000 audit[3638]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=3638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:20.647000 audit[3638]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6e501080 a2=0 a3=0 items=0 ppid=3609 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 01:24:20.648000 audit[3639]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:20.648000 audit[3639]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffffdfd6e20 a2=0 a3=0 items=0 ppid=3609 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.648000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 01:24:20.650087 kubelet[3609]: E0128 01:24:20.650061 3609 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:24:20.650000 audit[3640]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:20.650000 audit[3640]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8f407ef0 a2=0 a3=0 items=0 ppid=3609 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 01:24:20.651000 audit[3641]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3641 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:20.651000 audit[3641]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff11ce57a0 a2=0 a3=0 items=0 ppid=3609 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:20.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 01:24:20.653094 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 01:24:20.666414 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:24:20.668977 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:24:20.679352 kubelet[3609]: E0128 01:24:20.679324 3609 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:24:20.679505 kubelet[3609]: I0128 01:24:20.679441 3609 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:24:20.679505 kubelet[3609]: I0128 01:24:20.679451 3609 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:24:20.680169 kubelet[3609]: I0128 01:24:20.679886 3609 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:24:20.680593 kubelet[3609]: E0128 01:24:20.680580 3609 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:24:20.680642 kubelet[3609]: E0128 01:24:20.680611 3609 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4593.0.0-n-2270f1152e\" not found" Jan 28 01:24:20.741060 update_engine[2538]: I20260128 01:24:20.740998 2538 update_attempter.cc:509] Updating boot flags... Jan 28 01:24:20.758396 systemd[1]: Created slice kubepods-burstable-pod5568689c3903bac53e49286f5d9293c4.slice - libcontainer container kubepods-burstable-pod5568689c3903bac53e49286f5d9293c4.slice. Jan 28 01:24:20.773929 kubelet[3609]: E0128 01:24:20.772091 3609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593.0.0-n-2270f1152e\" not found" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.779457 systemd[1]: Created slice kubepods-burstable-pod274442c28c75abf81b6ca40deb660e15.slice - libcontainer container kubepods-burstable-pod274442c28c75abf81b6ca40deb660e15.slice. Jan 28 01:24:20.781128 kubelet[3609]: I0128 01:24:20.780968 3609 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.781888 kubelet[3609]: E0128 01:24:20.781842 3609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.789556 kubelet[3609]: E0128 01:24:20.789298 3609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593.0.0-n-2270f1152e\" not found" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.798463 systemd[1]: Created slice kubepods-burstable-pod64e89e558489c18b95c913c5cd0fd728.slice - libcontainer container kubepods-burstable-pod64e89e558489c18b95c913c5cd0fd728.slice. 
Jan 28 01:24:20.810163 kubelet[3609]: E0128 01:24:20.810113 3609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593.0.0-n-2270f1152e\" not found" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.827119 kubelet[3609]: E0128 01:24:20.827097 3609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593.0.0-n-2270f1152e?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="400ms" Jan 28 01:24:20.834879 kubelet[3609]: I0128 01:24:20.829009 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64e89e558489c18b95c913c5cd0fd728-kubeconfig\") pod \"kube-scheduler-ci-4593.0.0-n-2270f1152e\" (UID: \"64e89e558489c18b95c913c5cd0fd728\") " pod="kube-system/kube-scheduler-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.834879 kubelet[3609]: I0128 01:24:20.832704 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5568689c3903bac53e49286f5d9293c4-ca-certs\") pod \"kube-apiserver-ci-4593.0.0-n-2270f1152e\" (UID: \"5568689c3903bac53e49286f5d9293c4\") " pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.834879 kubelet[3609]: I0128 01:24:20.832731 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5568689c3903bac53e49286f5d9293c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4593.0.0-n-2270f1152e\" (UID: \"5568689c3903bac53e49286f5d9293c4\") " pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.834879 kubelet[3609]: I0128 01:24:20.832748 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-flexvolume-dir\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.834879 kubelet[3609]: I0128 01:24:20.832762 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-k8s-certs\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.835047 kubelet[3609]: I0128 01:24:20.832777 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5568689c3903bac53e49286f5d9293c4-k8s-certs\") pod \"kube-apiserver-ci-4593.0.0-n-2270f1152e\" (UID: \"5568689c3903bac53e49286f5d9293c4\") " pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.835047 kubelet[3609]: I0128 01:24:20.832791 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-ca-certs\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" 
Jan 28 01:24:20.835047 kubelet[3609]: I0128 01:24:20.832803 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-kubeconfig\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.835047 kubelet[3609]: I0128 01:24:20.832820 3609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.983319 kubelet[3609]: I0128 01:24:20.983302 3609 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:20.983526 kubelet[3609]: E0128 01:24:20.983495 3609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:21.083440 containerd[2559]: time="2026-01-28T01:24:21.083400017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4593.0.0-n-2270f1152e,Uid:5568689c3903bac53e49286f5d9293c4,Namespace:kube-system,Attempt:0,}" Jan 28 01:24:21.096977 containerd[2559]: time="2026-01-28T01:24:21.096893263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4593.0.0-n-2270f1152e,Uid:274442c28c75abf81b6ca40deb660e15,Namespace:kube-system,Attempt:0,}" Jan 28 01:24:21.114739 containerd[2559]: time="2026-01-28T01:24:21.114709077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4593.0.0-n-2270f1152e,Uid:64e89e558489c18b95c913c5cd0fd728,Namespace:kube-system,Attempt:0,}" Jan 28 01:24:21.227837 kubelet[3609]: E0128 01:24:21.227806 3609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593.0.0-n-2270f1152e?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="800ms" Jan 28 01:24:21.265383 kubelet[3609]: E0128 01:24:21.265309 3609 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4593.0.0-n-2270f1152e.188ec09785660e60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4593.0.0-n-2270f1152e,UID:ci-4593.0.0-n-2270f1152e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4593.0.0-n-2270f1152e,},FirstTimestamp:2026-01-28 01:24:20.613836384 +0000 UTC m=+0.340101149,LastTimestamp:2026-01-28 01:24:20.613836384 +0000 UTC m=+0.340101149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4593.0.0-n-2270f1152e,}" Jan 28 01:24:21.385122 kubelet[3609]: I0128 01:24:21.385066 3609 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:21.385590 
kubelet[3609]: E0128 01:24:21.385304 3609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:21.642845 kubelet[3609]: E0128 01:24:21.642781 3609 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 01:24:21.677169 kubelet[3609]: E0128 01:24:21.677140 3609 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4593.0.0-n-2270f1152e&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 01:24:21.979437 kubelet[3609]: E0128 01:24:21.979414 3609 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 01:24:22.028962 kubelet[3609]: E0128 01:24:22.028934 3609 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593.0.0-n-2270f1152e?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="1.6s" Jan 28 01:24:22.168385 kubelet[3609]: E0128 01:24:22.122054 3609 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 01:24:22.186564 kubelet[3609]: I0128 01:24:22.186536 3609 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:22.186804 kubelet[3609]: E0128 01:24:22.186779 3609 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:22.481541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2053886661.mount: Deactivated successfully. 
Jan 28 01:24:22.502627 containerd[2559]: time="2026-01-28T01:24:22.502591036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:24:22.511776 containerd[2559]: time="2026-01-28T01:24:22.511613360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=1382" Jan 28 01:24:22.514810 containerd[2559]: time="2026-01-28T01:24:22.514786834Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:24:22.517218 containerd[2559]: time="2026-01-28T01:24:22.517191091Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:24:22.522573 containerd[2559]: time="2026-01-28T01:24:22.522439405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 01:24:22.524881 containerd[2559]: time="2026-01-28T01:24:22.524845997Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:24:22.527316 containerd[2559]: time="2026-01-28T01:24:22.527288448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:24:22.527724 containerd[2559]: time="2026-01-28T01:24:22.527702190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.438459978s" Jan 28 01:24:22.529433 containerd[2559]: time="2026-01-28T01:24:22.529411720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 01:24:22.530315 containerd[2559]: time="2026-01-28T01:24:22.530296561Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.426307507s" Jan 28 01:24:22.531111 containerd[2559]: time="2026-01-28T01:24:22.531087519Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.411904542s" Jan 28 01:24:22.589823 containerd[2559]: time="2026-01-28T01:24:22.589783933Z" level=info msg="connecting to shim fc7efd6a8ff4bbf093320463694b552ad788c227d85440aa1efc0f3322b45266" address="unix:///run/containerd/s/b30b7f1946efa68bca39a7f55fd3344a0af237330fb6b0d8b5b7a0ca7c6f4d18" namespace=k8s.io protocol=ttrpc version=3 Jan 28 
01:24:22.609141 containerd[2559]: time="2026-01-28T01:24:22.608941349Z" level=info msg="connecting to shim ac27c11aab7dfcf9ce1d7d0f638d04f9adca36e4c39ec9d2168095c9f162e325" address="unix:///run/containerd/s/a3f206e02cee6f156f52fe42f2de7a0c23c8b4c420c3be30393ba31238d960c6" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:24:22.617286 containerd[2559]: time="2026-01-28T01:24:22.617248458Z" level=info msg="connecting to shim 80bb2c324b543824f0c659136cf2d209cbd2946c1b333ba0861649d27946abaf" address="unix:///run/containerd/s/986bd966fca0e039fa32ab73aede674c553a4dd964237680700cb6a55ed15fbd" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:24:22.623023 systemd[1]: Started cri-containerd-fc7efd6a8ff4bbf093320463694b552ad788c227d85440aa1efc0f3322b45266.scope - libcontainer container fc7efd6a8ff4bbf093320463694b552ad788c227d85440aa1efc0f3322b45266. Jan 28 01:24:22.638211 systemd[1]: Started cri-containerd-ac27c11aab7dfcf9ce1d7d0f638d04f9adca36e4c39ec9d2168095c9f162e325.scope - libcontainer container ac27c11aab7dfcf9ce1d7d0f638d04f9adca36e4c39ec9d2168095c9f162e325. Jan 28 01:24:22.640000 audit: BPF prog-id=107 op=LOAD Jan 28 01:24:22.641000 audit: BPF prog-id=108 op=LOAD Jan 28 01:24:22.641000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3688 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663376566643661386666346262663039333332303436333639346235 Jan 28 01:24:22.642000 audit: BPF prog-id=108 op=UNLOAD Jan 28 01:24:22.642000 audit[3700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3688 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663376566643661386666346262663039333332303436333639346235 Jan 28 01:24:22.642000 audit: BPF prog-id=109 op=LOAD Jan 28 01:24:22.642000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3688 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663376566643661386666346262663039333332303436333639346235 Jan 28 01:24:22.642000 audit: BPF prog-id=110 op=LOAD Jan 28 01:24:22.642000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3688 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
28 01:24:22.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663376566643661386666346262663039333332303436333639346235 Jan 28 01:24:22.642000 audit: BPF prog-id=110 op=UNLOAD Jan 28 01:24:22.642000 audit[3700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3688 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663376566643661386666346262663039333332303436333639346235 Jan 28 01:24:22.642000 audit: BPF prog-id=109 op=UNLOAD Jan 28 01:24:22.642000 audit[3700]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3688 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663376566643661386666346262663039333332303436333639346235 Jan 28 01:24:22.642000 audit: BPF prog-id=111 op=LOAD Jan 28 01:24:22.642000 audit[3700]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3688 pid=3700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663376566643661386666346262663039333332303436333639346235 Jan 28 01:24:22.645600 systemd[1]: Started cri-containerd-80bb2c324b543824f0c659136cf2d209cbd2946c1b333ba0861649d27946abaf.scope - libcontainer container 80bb2c324b543824f0c659136cf2d209cbd2946c1b333ba0861649d27946abaf. 
Jan 28 01:24:22.652000 audit: BPF prog-id=112 op=LOAD Jan 28 01:24:22.653000 audit: BPF prog-id=113 op=LOAD Jan 28 01:24:22.653000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3715 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163323763313161616237646663663963653164376430663633386430 Jan 28 01:24:22.653000 audit: BPF prog-id=113 op=UNLOAD Jan 28 01:24:22.653000 audit[3741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163323763313161616237646663663963653164376430663633386430 Jan 28 01:24:22.653000 audit: BPF prog-id=114 op=LOAD Jan 28 01:24:22.653000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3715 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163323763313161616237646663663963653164376430663633386430 Jan 28 01:24:22.653000 audit: BPF prog-id=115 op=LOAD Jan 28 01:24:22.653000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3715 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163323763313161616237646663663963653164376430663633386430 Jan 28 01:24:22.653000 audit: BPF prog-id=115 op=UNLOAD Jan 28 01:24:22.653000 audit[3741]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163323763313161616237646663663963653164376430663633386430 Jan 28 01:24:22.653000 audit: BPF prog-id=114 op=UNLOAD Jan 28 01:24:22.653000 audit[3741]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163323763313161616237646663663963653164376430663633386430 Jan 28 01:24:22.653000 audit: BPF prog-id=116 op=LOAD Jan 28 01:24:22.653000 audit[3741]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3715 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163323763313161616237646663663963653164376430663633386430 Jan 28 01:24:22.659000 audit: BPF prog-id=117 op=LOAD Jan 28 01:24:22.659000 audit: BPF prog-id=118 op=LOAD Jan 28 01:24:22.659000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3742 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830626232633332346235343338323466306336353931333663663264 Jan 28 01:24:22.659000 audit: BPF prog-id=118 op=UNLOAD Jan 28 01:24:22.659000 audit[3762]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830626232633332346235343338323466306336353931333663663264 Jan 28 01:24:22.660000 audit: BPF prog-id=119 op=LOAD Jan 28 01:24:22.660000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3742 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830626232633332346235343338323466306336353931333663663264 Jan 28 01:24:22.660000 audit: BPF prog-id=120 op=LOAD Jan 28 01:24:22.660000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3742 pid=3762 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830626232633332346235343338323466306336353931333663663264 Jan 28 01:24:22.660000 audit: BPF prog-id=120 op=UNLOAD Jan 28 01:24:22.660000 audit[3762]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830626232633332346235343338323466306336353931333663663264 Jan 28 01:24:22.661000 audit: BPF prog-id=119 op=UNLOAD Jan 28 01:24:22.661000 audit[3762]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830626232633332346235343338323466306336353931333663663264 Jan 28 01:24:22.661000 audit: BPF prog-id=121 op=LOAD Jan 28 01:24:22.661000 audit[3762]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3742 pid=3762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830626232633332346235343338323466306336353931333663663264 Jan 28 01:24:22.723431 containerd[2559]: time="2026-01-28T01:24:22.723410725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4593.0.0-n-2270f1152e,Uid:5568689c3903bac53e49286f5d9293c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc7efd6a8ff4bbf093320463694b552ad788c227d85440aa1efc0f3322b45266\"" Jan 28 01:24:22.725895 containerd[2559]: time="2026-01-28T01:24:22.725873197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4593.0.0-n-2270f1152e,Uid:64e89e558489c18b95c913c5cd0fd728,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac27c11aab7dfcf9ce1d7d0f638d04f9adca36e4c39ec9d2168095c9f162e325\"" Jan 28 01:24:22.729880 containerd[2559]: time="2026-01-28T01:24:22.729795024Z" level=info msg="CreateContainer within sandbox \"fc7efd6a8ff4bbf093320463694b552ad788c227d85440aa1efc0f3322b45266\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:24:22.730784 containerd[2559]: time="2026-01-28T01:24:22.730764238Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4593.0.0-n-2270f1152e,Uid:274442c28c75abf81b6ca40deb660e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"80bb2c324b543824f0c659136cf2d209cbd2946c1b333ba0861649d27946abaf\"" Jan 28 01:24:22.733695 containerd[2559]: time="2026-01-28T01:24:22.732993818Z" level=info msg="CreateContainer within sandbox \"ac27c11aab7dfcf9ce1d7d0f638d04f9adca36e4c39ec9d2168095c9f162e325\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:24:22.736826 containerd[2559]: time="2026-01-28T01:24:22.736806868Z" level=info msg="CreateContainer within sandbox \"80bb2c324b543824f0c659136cf2d209cbd2946c1b333ba0861649d27946abaf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:24:22.742843 containerd[2559]: time="2026-01-28T01:24:22.742827202Z" level=info msg="Container c3d7396c101126c0cee2eb46e8716be9a0815afc9f3ae28c692faa5e80bfcf9a: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:22.760761 containerd[2559]: time="2026-01-28T01:24:22.760729406Z" level=info msg="Container d69832b11fd78132b9a1b823b6b2382edb9800655ae33aebaa4653899161643d: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:22.769087 containerd[2559]: time="2026-01-28T01:24:22.769065708Z" level=info msg="Container 40f972b93d5f2bcb9bf4a563a81acfcf577b7285bb506b35fe805b279690abc6: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:22.770882 containerd[2559]: time="2026-01-28T01:24:22.770601547Z" level=info msg="CreateContainer within sandbox \"fc7efd6a8ff4bbf093320463694b552ad788c227d85440aa1efc0f3322b45266\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c3d7396c101126c0cee2eb46e8716be9a0815afc9f3ae28c692faa5e80bfcf9a\"" Jan 28 01:24:22.772587 containerd[2559]: time="2026-01-28T01:24:22.772565011Z" level=info msg="StartContainer for \"c3d7396c101126c0cee2eb46e8716be9a0815afc9f3ae28c692faa5e80bfcf9a\"" Jan 28 01:24:22.774282 containerd[2559]: time="2026-01-28T01:24:22.774246673Z" level=info msg="connecting to shim c3d7396c101126c0cee2eb46e8716be9a0815afc9f3ae28c692faa5e80bfcf9a" address="unix:///run/containerd/s/b30b7f1946efa68bca39a7f55fd3344a0af237330fb6b0d8b5b7a0ca7c6f4d18" protocol=ttrpc version=3 Jan 28 01:24:22.785804 containerd[2559]: time="2026-01-28T01:24:22.785774639Z" level=info msg="CreateContainer within sandbox \"ac27c11aab7dfcf9ce1d7d0f638d04f9adca36e4c39ec9d2168095c9f162e325\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d69832b11fd78132b9a1b823b6b2382edb9800655ae33aebaa4653899161643d\"" Jan 28 01:24:22.786401 containerd[2559]: time="2026-01-28T01:24:22.786368176Z" level=info msg="StartContainer for \"d69832b11fd78132b9a1b823b6b2382edb9800655ae33aebaa4653899161643d\"" Jan 28 01:24:22.787089 containerd[2559]: time="2026-01-28T01:24:22.787062472Z" level=info msg="connecting to shim d69832b11fd78132b9a1b823b6b2382edb9800655ae33aebaa4653899161643d" address="unix:///run/containerd/s/a3f206e02cee6f156f52fe42f2de7a0c23c8b4c420c3be30393ba31238d960c6" protocol=ttrpc version=3 Jan 28 01:24:22.792019 systemd[1]: Started cri-containerd-c3d7396c101126c0cee2eb46e8716be9a0815afc9f3ae28c692faa5e80bfcf9a.scope - libcontainer container c3d7396c101126c0cee2eb46e8716be9a0815afc9f3ae28c692faa5e80bfcf9a. 
Jan 28 01:24:22.794010 kubelet[3609]: E0128 01:24:22.793986 3609 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 01:24:22.796256 containerd[2559]: time="2026-01-28T01:24:22.796227977Z" level=info msg="CreateContainer within sandbox \"80bb2c324b543824f0c659136cf2d209cbd2946c1b333ba0861649d27946abaf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40f972b93d5f2bcb9bf4a563a81acfcf577b7285bb506b35fe805b279690abc6\"" Jan 28 01:24:22.797167 containerd[2559]: time="2026-01-28T01:24:22.797064085Z" level=info msg="StartContainer for \"40f972b93d5f2bcb9bf4a563a81acfcf577b7285bb506b35fe805b279690abc6\"" Jan 28 01:24:22.800207 containerd[2559]: time="2026-01-28T01:24:22.800106658Z" level=info msg="connecting to shim 40f972b93d5f2bcb9bf4a563a81acfcf577b7285bb506b35fe805b279690abc6" address="unix:///run/containerd/s/986bd966fca0e039fa32ab73aede674c553a4dd964237680700cb6a55ed15fbd" protocol=ttrpc version=3 Jan 28 01:24:22.809182 systemd[1]: Started cri-containerd-d69832b11fd78132b9a1b823b6b2382edb9800655ae33aebaa4653899161643d.scope - libcontainer container d69832b11fd78132b9a1b823b6b2382edb9800655ae33aebaa4653899161643d. Jan 28 01:24:22.812000 audit: BPF prog-id=122 op=LOAD Jan 28 01:24:22.812000 audit: BPF prog-id=123 op=LOAD Jan 28 01:24:22.812000 audit[3822]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3688 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333643733393663313031313236633063656532656234366538373136 Jan 28 01:24:22.812000 audit: BPF prog-id=123 op=UNLOAD Jan 28 01:24:22.812000 audit[3822]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3688 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333643733393663313031313236633063656532656234366538373136 Jan 28 01:24:22.813000 audit: BPF prog-id=124 op=LOAD Jan 28 01:24:22.813000 audit[3822]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3688 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.813000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333643733393663313031313236633063656532656234366538373136 Jan 28 01:24:22.813000 audit: BPF prog-id=125 op=LOAD Jan 28 01:24:22.813000 audit[3822]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3688 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333643733393663313031313236633063656532656234366538373136 Jan 28 01:24:22.813000 audit: BPF prog-id=125 op=UNLOAD Jan 28 01:24:22.813000 audit[3822]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3688 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333643733393663313031313236633063656532656234366538373136 Jan 28 01:24:22.813000 audit: BPF prog-id=124 op=UNLOAD Jan 28 01:24:22.813000 audit[3822]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3688 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333643733393663313031313236633063656532656234366538373136 Jan 28 01:24:22.813000 audit: BPF prog-id=126 op=LOAD Jan 28 01:24:22.813000 audit[3822]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3688 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333643733393663313031313236633063656532656234366538373136 Jan 28 01:24:22.820122 systemd[1]: Started cri-containerd-40f972b93d5f2bcb9bf4a563a81acfcf577b7285bb506b35fe805b279690abc6.scope - libcontainer container 40f972b93d5f2bcb9bf4a563a81acfcf577b7285bb506b35fe805b279690abc6. 
Jan 28 01:24:22.825000 audit: BPF prog-id=127 op=LOAD Jan 28 01:24:22.826000 audit: BPF prog-id=128 op=LOAD Jan 28 01:24:22.826000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3715 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393833326231316664373831333262396131623832336236623233 Jan 28 01:24:22.826000 audit: BPF prog-id=128 op=UNLOAD Jan 28 01:24:22.826000 audit[3834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393833326231316664373831333262396131623832336236623233 Jan 28 01:24:22.826000 audit: BPF prog-id=129 op=LOAD Jan 28 01:24:22.826000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3715 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393833326231316664373831333262396131623832336236623233 Jan 28 01:24:22.826000 audit: BPF prog-id=130 op=LOAD Jan 28 01:24:22.826000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3715 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393833326231316664373831333262396131623832336236623233 Jan 28 01:24:22.826000 audit: BPF prog-id=130 op=UNLOAD Jan 28 01:24:22.826000 audit[3834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393833326231316664373831333262396131623832336236623233 Jan 28 01:24:22.826000 audit: BPF prog-id=129 op=UNLOAD Jan 28 01:24:22.826000 audit[3834]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3715 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393833326231316664373831333262396131623832336236623233 Jan 28 01:24:22.826000 audit: BPF prog-id=131 op=LOAD Jan 28 01:24:22.826000 audit[3834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3715 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436393833326231316664373831333262396131623832336236623233 Jan 28 01:24:22.834000 audit: BPF prog-id=132 op=LOAD Jan 28 01:24:22.835000 audit: BPF prog-id=133 op=LOAD Jan 28 01:24:22.835000 audit[3852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3742 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430663937326239336435663262636239626634613536336138316163 Jan 28 01:24:22.835000 audit: BPF prog-id=133 op=UNLOAD Jan 28 01:24:22.835000 audit[3852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430663937326239336435663262636239626634613536336138316163 Jan 28 01:24:22.835000 audit: BPF prog-id=134 op=LOAD Jan 28 01:24:22.835000 audit[3852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3742 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430663937326239336435663262636239626634613536336138316163 Jan 28 01:24:22.835000 audit: BPF prog-id=135 op=LOAD Jan 28 01:24:22.835000 audit[3852]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3742 pid=3852 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430663937326239336435663262636239626634613536336138316163 Jan 28 01:24:22.835000 audit: BPF prog-id=135 op=UNLOAD Jan 28 01:24:22.835000 audit[3852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430663937326239336435663262636239626634613536336138316163 Jan 28 01:24:22.835000 audit: BPF prog-id=134 op=UNLOAD Jan 28 01:24:22.835000 audit[3852]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3742 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430663937326239336435663262636239626634613536336138316163 Jan 28 01:24:22.835000 audit: BPF prog-id=136 op=LOAD Jan 28 01:24:22.835000 audit[3852]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3742 pid=3852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:22.835000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430663937326239336435663262636239626634613536336138316163 Jan 28 01:24:22.870951 containerd[2559]: time="2026-01-28T01:24:22.870715798Z" level=info msg="StartContainer for \"c3d7396c101126c0cee2eb46e8716be9a0815afc9f3ae28c692faa5e80bfcf9a\" returns successfully" Jan 28 01:24:22.878139 containerd[2559]: time="2026-01-28T01:24:22.878121444Z" level=info msg="StartContainer for \"d69832b11fd78132b9a1b823b6b2382edb9800655ae33aebaa4653899161643d\" returns successfully" Jan 28 01:24:22.899263 containerd[2559]: time="2026-01-28T01:24:22.899220023Z" level=info msg="StartContainer for \"40f972b93d5f2bcb9bf4a563a81acfcf577b7285bb506b35fe805b279690abc6\" returns successfully" Jan 28 01:24:23.660730 kubelet[3609]: E0128 01:24:23.660310 3609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593.0.0-n-2270f1152e\" not found" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:23.665340 kubelet[3609]: E0128 01:24:23.665195 3609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4593.0.0-n-2270f1152e\" not found" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:23.666416 kubelet[3609]: E0128 01:24:23.666405 3609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593.0.0-n-2270f1152e\" not found" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:23.788340 kubelet[3609]: I0128 01:24:23.788328 3609 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:24.669240 kubelet[3609]: E0128 01:24:24.669019 3609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593.0.0-n-2270f1152e\" not found" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:24.670169 kubelet[3609]: E0128 01:24:24.670150 3609 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593.0.0-n-2270f1152e\" not found" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:24.865936 kubelet[3609]: E0128 01:24:24.865903 3609 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4593.0.0-n-2270f1152e\" not found" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:25.017850 kubelet[3609]: I0128 01:24:25.017780 3609 kubelet_node_status.go:78] "Successfully registered node" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:25.023031 kubelet[3609]: I0128 01:24:25.023008 3609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:25.035793 kubelet[3609]: E0128 01:24:25.035663 3609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:25.035793 kubelet[3609]: I0128 01:24:25.035682 3609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:25.037588 kubelet[3609]: E0128 01:24:25.037484 3609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4593.0.0-n-2270f1152e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:25.037588 kubelet[3609]: I0128 01:24:25.037514 3609 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:25.039018 kubelet[3609]: E0128 01:24:25.038988 3609 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4593.0.0-n-2270f1152e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:25.611120 kubelet[3609]: I0128 01:24:25.611092 3609 apiserver.go:52] "Watching apiserver" Jan 28 01:24:25.625838 kubelet[3609]: I0128 01:24:25.625817 3609 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 01:24:27.223508 systemd[1]: Reload requested from client PID 3923 ('systemctl') (unit session-10.scope)... Jan 28 01:24:27.223522 systemd[1]: Reloading... Jan 28 01:24:27.306904 zram_generator::config[3969]: No configuration found. Jan 28 01:24:27.489437 systemd[1]: Reloading finished in 265 ms. Jan 28 01:24:27.508986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:27.527596 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 28 01:24:27.527817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:27.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:27.530441 kernel: kauditd_printk_skb: 201 callbacks suppressed Jan 28 01:24:27.530507 kernel: audit: type=1131 audit(1769563467.526:406): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:27.530919 systemd[1]: kubelet.service: Consumed 626ms CPU time, 124.8M memory peak. Jan 28 01:24:27.532617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:24:27.532000 audit: BPF prog-id=137 op=LOAD Jan 28 01:24:27.537319 kernel: audit: type=1334 audit(1769563467.532:407): prog-id=137 op=LOAD Jan 28 01:24:27.537363 kernel: audit: type=1334 audit(1769563467.532:408): prog-id=103 op=UNLOAD Jan 28 01:24:27.532000 audit: BPF prog-id=103 op=UNLOAD Jan 28 01:24:27.540423 kernel: audit: type=1334 audit(1769563467.533:409): prog-id=138 op=LOAD Jan 28 01:24:27.540475 kernel: audit: type=1334 audit(1769563467.533:410): prog-id=96 op=UNLOAD Jan 28 01:24:27.533000 audit: BPF prog-id=138 op=LOAD Jan 28 01:24:27.533000 audit: BPF prog-id=96 op=UNLOAD Jan 28 01:24:27.544819 kernel: audit: type=1334 audit(1769563467.533:411): prog-id=139 op=LOAD Jan 28 01:24:27.544877 kernel: audit: type=1334 audit(1769563467.533:412): prog-id=140 op=LOAD Jan 28 01:24:27.533000 audit: BPF prog-id=139 op=LOAD Jan 28 01:24:27.533000 audit: BPF prog-id=140 op=LOAD Jan 28 01:24:27.533000 audit: BPF prog-id=97 op=UNLOAD Jan 28 01:24:27.533000 audit: BPF prog-id=98 op=UNLOAD Jan 28 01:24:27.546999 kernel: audit: type=1334 audit(1769563467.533:413): prog-id=97 op=UNLOAD Jan 28 01:24:27.547037 kernel: audit: type=1334 audit(1769563467.533:414): prog-id=98 op=UNLOAD Jan 28 01:24:27.548221 kernel: audit: type=1334 audit(1769563467.535:415): prog-id=141 op=LOAD Jan 28 01:24:27.535000 audit: BPF prog-id=141 op=LOAD Jan 28 01:24:27.536000 audit: BPF prog-id=89 op=UNLOAD Jan 28 01:24:27.536000 audit: BPF prog-id=142 op=LOAD Jan 28 01:24:27.536000 audit: BPF prog-id=143 op=LOAD Jan 28 01:24:27.536000 audit: BPF prog-id=90 op=UNLOAD Jan 28 01:24:27.536000 audit: BPF prog-id=91 op=UNLOAD Jan 28 01:24:27.537000 audit: BPF prog-id=144 op=LOAD Jan 28 01:24:27.537000 audit: BPF prog-id=99 op=UNLOAD Jan 28 01:24:27.538000 audit: BPF prog-id=145 op=LOAD Jan 28 01:24:27.538000 audit: BPF prog-id=146 op=LOAD Jan 28 01:24:27.538000 audit: BPF prog-id=100 op=UNLOAD Jan 28 01:24:27.538000 audit: BPF prog-id=101 op=UNLOAD Jan 28 01:24:27.539000 audit: BPF prog-id=147 op=LOAD Jan 28 01:24:27.539000 audit: BPF prog-id=102 op=UNLOAD Jan 28 01:24:27.541000 audit: BPF prog-id=148 op=LOAD Jan 28 01:24:27.541000 audit: BPF prog-id=95 op=UNLOAD Jan 28 01:24:27.543000 audit: BPF prog-id=149 op=LOAD Jan 28 01:24:27.543000 audit: BPF prog-id=104 op=UNLOAD Jan 28 01:24:27.543000 audit: BPF prog-id=150 op=LOAD Jan 28 01:24:27.543000 audit: BPF prog-id=151 op=LOAD Jan 28 01:24:27.543000 audit: BPF prog-id=105 op=UNLOAD Jan 28 01:24:27.543000 audit: BPF prog-id=106 op=UNLOAD Jan 28 01:24:27.544000 audit: BPF prog-id=152 op=LOAD Jan 28 01:24:27.548000 audit: BPF prog-id=92 op=UNLOAD Jan 28 01:24:27.548000 audit: BPF prog-id=153 op=LOAD Jan 28 01:24:27.548000 audit: 
BPF prog-id=154 op=LOAD Jan 28 01:24:27.548000 audit: BPF prog-id=93 op=UNLOAD Jan 28 01:24:27.548000 audit: BPF prog-id=94 op=UNLOAD Jan 28 01:24:27.548000 audit: BPF prog-id=155 op=LOAD Jan 28 01:24:27.548000 audit: BPF prog-id=156 op=LOAD Jan 28 01:24:27.548000 audit: BPF prog-id=87 op=UNLOAD Jan 28 01:24:27.548000 audit: BPF prog-id=88 op=UNLOAD Jan 28 01:24:28.544906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:24:28.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:28.554093 (kubelet)[4040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:24:28.590251 kubelet[4040]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:24:28.590251 kubelet[4040]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:24:28.590484 kubelet[4040]: I0128 01:24:28.590298 4040 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:24:28.595427 kubelet[4040]: I0128 01:24:28.595406 4040 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 01:24:28.595427 kubelet[4040]: I0128 01:24:28.595426 4040 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:24:28.595526 kubelet[4040]: I0128 01:24:28.595446 4040 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 01:24:28.595526 kubelet[4040]: I0128 01:24:28.595453 4040 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 01:24:28.595740 kubelet[4040]: I0128 01:24:28.595731 4040 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 01:24:28.597944 kubelet[4040]: I0128 01:24:28.597911 4040 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 01:24:28.600025 kubelet[4040]: I0128 01:24:28.599999 4040 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:24:28.604403 kubelet[4040]: I0128 01:24:28.604358 4040 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:24:28.606913 kubelet[4040]: I0128 01:24:28.606887 4040 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 01:24:28.607048 kubelet[4040]: I0128 01:24:28.607022 4040 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:24:28.607171 kubelet[4040]: I0128 01:24:28.607050 4040 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4593.0.0-n-2270f1152e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:24:28.607266 kubelet[4040]: I0128 01:24:28.607177 4040 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:24:28.607266 kubelet[4040]: I0128 01:24:28.607185 4040 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 01:24:28.607266 kubelet[4040]: I0128 01:24:28.607204 4040 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 01:24:28.608067 kubelet[4040]: I0128 01:24:28.608052 4040 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:24:28.608185 kubelet[4040]: I0128 01:24:28.608173 4040 kubelet.go:475] "Attempting to sync node with API server" Jan 28 01:24:28.608210 kubelet[4040]: I0128 01:24:28.608186 4040 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:24:28.608210 kubelet[4040]: I0128 01:24:28.608204 4040 kubelet.go:387] "Adding apiserver pod source" Jan 28 01:24:28.608956 kubelet[4040]: I0128 01:24:28.608215 4040 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:24:28.611339 kubelet[4040]: I0128 01:24:28.611304 4040 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 01:24:28.614118 kubelet[4040]: I0128 01:24:28.614093 4040 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 01:24:28.614974 kubelet[4040]: I0128 01:24:28.614957 4040 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 
01:24:28.621802 kubelet[4040]: I0128 01:24:28.621679 4040 server.go:1262] "Started kubelet" Jan 28 01:24:28.622665 kubelet[4040]: I0128 01:24:28.622647 4040 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:24:28.629307 kubelet[4040]: I0128 01:24:28.629268 4040 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:24:28.631533 kubelet[4040]: I0128 01:24:28.631511 4040 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 01:24:28.631710 kubelet[4040]: E0128 01:24:28.631697 4040 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4593.0.0-n-2270f1152e\" not found" Jan 28 01:24:28.631901 kubelet[4040]: I0128 01:24:28.631889 4040 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 01:24:28.631986 kubelet[4040]: I0128 01:24:28.631978 4040 reconciler.go:29] "Reconciler: start to sync state" Jan 28 01:24:28.635637 kubelet[4040]: I0128 01:24:28.635621 4040 server.go:310] "Adding debug handlers to kubelet server" Jan 28 01:24:28.643403 kubelet[4040]: I0128 01:24:28.643367 4040 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:24:28.643882 kubelet[4040]: I0128 01:24:28.643494 4040 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 01:24:28.643882 kubelet[4040]: I0128 01:24:28.643611 4040 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:24:28.643882 kubelet[4040]: I0128 01:24:28.643772 4040 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:24:28.646340 kubelet[4040]: I0128 01:24:28.646325 4040 factory.go:223] Registration of the systemd container factory successfully Jan 28 01:24:28.646471 kubelet[4040]: I0128 01:24:28.646459 4040 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:24:28.648784 kubelet[4040]: I0128 01:24:28.648755 4040 factory.go:223] Registration of the containerd container factory successfully Jan 28 01:24:28.649602 kubelet[4040]: I0128 01:24:28.649578 4040 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 28 01:24:28.656689 kubelet[4040]: I0128 01:24:28.656122 4040 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 28 01:24:28.656689 kubelet[4040]: I0128 01:24:28.656138 4040 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 01:24:28.656689 kubelet[4040]: I0128 01:24:28.656153 4040 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 01:24:28.656689 kubelet[4040]: E0128 01:24:28.656182 4040 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:24:28.700477 kubelet[4040]: I0128 01:24:28.700440 4040 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:24:28.700477 kubelet[4040]: I0128 01:24:28.700452 4040 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:24:28.700477 kubelet[4040]: I0128 01:24:28.700467 4040 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:24:28.701154 kubelet[4040]: I0128 01:24:28.700583 4040 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:24:28.701154 kubelet[4040]: I0128 01:24:28.700593 4040 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:24:28.701154 kubelet[4040]: I0128 01:24:28.700609 4040 policy_none.go:49] "None policy: Start" Jan 28 01:24:28.701154 kubelet[4040]: I0128 01:24:28.700621 4040 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 01:24:28.701154 kubelet[4040]: I0128 01:24:28.700631 4040 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 01:24:28.701154 kubelet[4040]: I0128 01:24:28.700747 4040 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 28 01:24:28.701154 kubelet[4040]: I0128 01:24:28.700754 4040 policy_none.go:47] "Start" Jan 28 01:24:28.704589 kubelet[4040]: E0128 01:24:28.704578 4040 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 01:24:28.705003 kubelet[4040]: I0128 01:24:28.704955 4040 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:24:28.705082 kubelet[4040]: I0128 01:24:28.705062 4040 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:24:28.705385 kubelet[4040]: I0128 01:24:28.705372 4040 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:24:28.707116 kubelet[4040]: E0128 01:24:28.707061 4040 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:24:28.756654 kubelet[4040]: I0128 01:24:28.756629 4040 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.757324 kubelet[4040]: I0128 01:24:28.757312 4040 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.757551 kubelet[4040]: I0128 01:24:28.757498 4040 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.763990 kubelet[4040]: I0128 01:24:28.763968 4040 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:24:28.767162 kubelet[4040]: I0128 01:24:28.767145 4040 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:24:28.768214 kubelet[4040]: I0128 01:24:28.767917 4040 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:24:28.807987 kubelet[4040]: I0128 01:24:28.807393 4040 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.816885 kubelet[4040]: I0128 01:24:28.816867 4040 kubelet_node_status.go:124] "Node was previously registered" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.817767 kubelet[4040]: I0128 01:24:28.816926 4040 kubelet_node_status.go:78] "Successfully registered node" node="ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.933551 kubelet[4040]: I0128 01:24:28.933370 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-k8s-certs\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.933551 kubelet[4040]: I0128 01:24:28.933397 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-kubeconfig\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.933551 kubelet[4040]: I0128 01:24:28.933415 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.933551 kubelet[4040]: I0128 01:24:28.933432 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64e89e558489c18b95c913c5cd0fd728-kubeconfig\") pod \"kube-scheduler-ci-4593.0.0-n-2270f1152e\" (UID: \"64e89e558489c18b95c913c5cd0fd728\") " pod="kube-system/kube-scheduler-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.933551 kubelet[4040]: I0128 
01:24:28.933446 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5568689c3903bac53e49286f5d9293c4-ca-certs\") pod \"kube-apiserver-ci-4593.0.0-n-2270f1152e\" (UID: \"5568689c3903bac53e49286f5d9293c4\") " pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.933678 kubelet[4040]: I0128 01:24:28.933459 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5568689c3903bac53e49286f5d9293c4-k8s-certs\") pod \"kube-apiserver-ci-4593.0.0-n-2270f1152e\" (UID: \"5568689c3903bac53e49286f5d9293c4\") " pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.933678 kubelet[4040]: I0128 01:24:28.933475 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5568689c3903bac53e49286f5d9293c4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4593.0.0-n-2270f1152e\" (UID: \"5568689c3903bac53e49286f5d9293c4\") " pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.933678 kubelet[4040]: I0128 01:24:28.933489 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-flexvolume-dir\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:28.933678 kubelet[4040]: I0128 01:24:28.933504 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/274442c28c75abf81b6ca40deb660e15-ca-certs\") pod \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" (UID: \"274442c28c75abf81b6ca40deb660e15\") " pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:29.609293 kubelet[4040]: I0128 01:24:29.609263 4040 apiserver.go:52] "Watching apiserver" Jan 28 01:24:29.632500 kubelet[4040]: I0128 01:24:29.632471 4040 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 01:24:29.685392 kubelet[4040]: I0128 01:24:29.685216 4040 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:29.685392 kubelet[4040]: I0128 01:24:29.685227 4040 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:29.685912 kubelet[4040]: I0128 01:24:29.685900 4040 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:29.694659 kubelet[4040]: I0128 01:24:29.694637 4040 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:24:29.695031 kubelet[4040]: E0128 01:24:29.694777 4040 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4593.0.0-n-2270f1152e\" already exists" pod="kube-system/kube-scheduler-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:29.695979 kubelet[4040]: I0128 01:24:29.695965 4040 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; 
a DNS label is recommended: [must not contain dots]" Jan 28 01:24:29.696138 kubelet[4040]: E0128 01:24:29.696047 4040 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4593.0.0-n-2270f1152e\" already exists" pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:29.696301 kubelet[4040]: I0128 01:24:29.696295 4040 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 01:24:29.696408 kubelet[4040]: E0128 01:24:29.696379 4040 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4593.0.0-n-2270f1152e\" already exists" pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" Jan 28 01:24:29.704913 kubelet[4040]: I0128 01:24:29.704872 4040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4593.0.0-n-2270f1152e" podStartSLOduration=1.7048475810000001 podStartE2EDuration="1.704847581s" podCreationTimestamp="2026-01-28 01:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:24:29.704449103 +0000 UTC m=+1.147540990" watchObservedRunningTime="2026-01-28 01:24:29.704847581 +0000 UTC m=+1.147939466" Jan 28 01:24:29.726110 kubelet[4040]: I0128 01:24:29.726036 4040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4593.0.0-n-2270f1152e" podStartSLOduration=1.726024719 podStartE2EDuration="1.726024719s" podCreationTimestamp="2026-01-28 01:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:24:29.718004186 +0000 UTC m=+1.161096092" watchObservedRunningTime="2026-01-28 01:24:29.726024719 +0000 UTC m=+1.169116597" Jan 28 01:24:33.490247 kubelet[4040]: I0128 01:24:33.490195 4040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4593.0.0-n-2270f1152e" podStartSLOduration=5.490181582 podStartE2EDuration="5.490181582s" podCreationTimestamp="2026-01-28 01:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:24:29.726276572 +0000 UTC m=+1.169368453" watchObservedRunningTime="2026-01-28 01:24:33.490181582 +0000 UTC m=+4.933273465" Jan 28 01:24:34.129805 kubelet[4040]: I0128 01:24:34.129254 4040 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:24:34.129805 kubelet[4040]: I0128 01:24:34.129692 4040 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:24:34.129988 containerd[2559]: time="2026-01-28T01:24:34.129532302Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 01:24:34.135548 systemd[1]: Created slice kubepods-besteffort-pod30203787_2af6_4e48_a9fb_b0d332bd1a32.slice - libcontainer container kubepods-besteffort-pod30203787_2af6_4e48_a9fb_b0d332bd1a32.slice. 
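The "Created slice kubepods-besteffort-pod30203787_2af6_4e48_a9fb_b0d332bd1a32.slice" entry above is the kubelet's systemd cgroup driver at work (cgroupDriver="systemd", CgroupVersion 2 in the Container Manager config earlier): the slice name combines the pod's QoS class with its UID, dashes replaced by underscores, and systemd nests each dash-delimited prefix as a parent slice. A small sketch of that mapping, covering only the best-effort case seen in this log and assuming the usual cgroup2 mount at /sys/fs/cgroup (illustrative, not read from this host):

    def pod_slice(uid: str, qos: str = "besteffort") -> str:
        # systemd driver naming: kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    def cgroup_path(slice_name: str, root: str = "/sys/fs/cgroup") -> str:
        # systemd places a slice under parents named by its dash-separated prefixes.
        parts = slice_name[: -len(".slice")].split("-")
        parents = ["-".join(parts[:i]) + ".slice" for i in range(1, len(parts))]
        return "/".join([root, *parents, slice_name])

    uid = "30203787-2af6-4e48-a9fb-b0d332bd1a32"   # kube-proxy-4bcxq, from the log above
    print(pod_slice(uid))
    print(cgroup_path(pod_slice(uid)))
    # -> .../kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30203787_....slice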
Jan 28 01:24:34.165994 kubelet[4040]: I0128 01:24:34.165961 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/30203787-2af6-4e48-a9fb-b0d332bd1a32-kube-proxy\") pod \"kube-proxy-4bcxq\" (UID: \"30203787-2af6-4e48-a9fb-b0d332bd1a32\") " pod="kube-system/kube-proxy-4bcxq" Jan 28 01:24:34.166095 kubelet[4040]: I0128 01:24:34.166004 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30203787-2af6-4e48-a9fb-b0d332bd1a32-xtables-lock\") pod \"kube-proxy-4bcxq\" (UID: \"30203787-2af6-4e48-a9fb-b0d332bd1a32\") " pod="kube-system/kube-proxy-4bcxq" Jan 28 01:24:34.166095 kubelet[4040]: I0128 01:24:34.166020 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30203787-2af6-4e48-a9fb-b0d332bd1a32-lib-modules\") pod \"kube-proxy-4bcxq\" (UID: \"30203787-2af6-4e48-a9fb-b0d332bd1a32\") " pod="kube-system/kube-proxy-4bcxq" Jan 28 01:24:34.166095 kubelet[4040]: I0128 01:24:34.166036 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bjc\" (UniqueName: \"kubernetes.io/projected/30203787-2af6-4e48-a9fb-b0d332bd1a32-kube-api-access-q8bjc\") pod \"kube-proxy-4bcxq\" (UID: \"30203787-2af6-4e48-a9fb-b0d332bd1a32\") " pod="kube-system/kube-proxy-4bcxq" Jan 28 01:24:34.270391 kubelet[4040]: E0128 01:24:34.270359 4040 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 28 01:24:34.270391 kubelet[4040]: E0128 01:24:34.270382 4040 projected.go:196] Error preparing data for projected volume kube-api-access-q8bjc for pod kube-system/kube-proxy-4bcxq: configmap "kube-root-ca.crt" not found Jan 28 01:24:34.270496 kubelet[4040]: E0128 01:24:34.270452 4040 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30203787-2af6-4e48-a9fb-b0d332bd1a32-kube-api-access-q8bjc podName:30203787-2af6-4e48-a9fb-b0d332bd1a32 nodeName:}" failed. No retries permitted until 2026-01-28 01:24:34.770432106 +0000 UTC m=+6.213523988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q8bjc" (UniqueName: "kubernetes.io/projected/30203787-2af6-4e48-a9fb-b0d332bd1a32-kube-api-access-q8bjc") pod "kube-proxy-4bcxq" (UID: "30203787-2af6-4e48-a9fb-b0d332bd1a32") : configmap "kube-root-ca.crt" not found Jan 28 01:24:34.771392 kubelet[4040]: E0128 01:24:34.771369 4040 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 28 01:24:34.771392 kubelet[4040]: E0128 01:24:34.771391 4040 projected.go:196] Error preparing data for projected volume kube-api-access-q8bjc for pod kube-system/kube-proxy-4bcxq: configmap "kube-root-ca.crt" not found Jan 28 01:24:34.771697 kubelet[4040]: E0128 01:24:34.771433 4040 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30203787-2af6-4e48-a9fb-b0d332bd1a32-kube-api-access-q8bjc podName:30203787-2af6-4e48-a9fb-b0d332bd1a32 nodeName:}" failed. No retries permitted until 2026-01-28 01:24:35.771418598 +0000 UTC m=+7.214510470 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-q8bjc" (UniqueName: "kubernetes.io/projected/30203787-2af6-4e48-a9fb-b0d332bd1a32-kube-api-access-q8bjc") pod "kube-proxy-4bcxq" (UID: "30203787-2af6-4e48-a9fb-b0d332bd1a32") : configmap "kube-root-ca.crt" not found Jan 28 01:24:35.334258 systemd[1]: Created slice kubepods-besteffort-pod3c1919ee_4bba_4af8_a87f_9cd5da4b63e0.slice - libcontainer container kubepods-besteffort-pod3c1919ee_4bba_4af8_a87f_9cd5da4b63e0.slice. Jan 28 01:24:35.374967 kubelet[4040]: I0128 01:24:35.374890 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c1919ee-4bba-4af8-a87f-9cd5da4b63e0-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-vtx6m\" (UID: \"3c1919ee-4bba-4af8-a87f-9cd5da4b63e0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vtx6m" Jan 28 01:24:35.374967 kubelet[4040]: I0128 01:24:35.374952 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8v65\" (UniqueName: \"kubernetes.io/projected/3c1919ee-4bba-4af8-a87f-9cd5da4b63e0-kube-api-access-j8v65\") pod \"tigera-operator-65cdcdfd6d-vtx6m\" (UID: \"3c1919ee-4bba-4af8-a87f-9cd5da4b63e0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vtx6m" Jan 28 01:24:35.644196 containerd[2559]: time="2026-01-28T01:24:35.644115347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vtx6m,Uid:3c1919ee-4bba-4af8-a87f-9cd5da4b63e0,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:24:35.685200 containerd[2559]: time="2026-01-28T01:24:35.685168191Z" level=info msg="connecting to shim 291fb9ba9e7050d014c9fd404d398e47cd82fbe276238aace693470789704ff7" address="unix:///run/containerd/s/f13f52b7026497d679031292b00aa5280c4121f531f9b1c36d631a7e0a1f3ea8" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:24:35.710035 systemd[1]: Started cri-containerd-291fb9ba9e7050d014c9fd404d398e47cd82fbe276238aace693470789704ff7.scope - libcontainer container 291fb9ba9e7050d014c9fd404d398e47cd82fbe276238aace693470789704ff7. 
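The two MountVolume.SetUp failures for kube-api-access-q8bjc a little further up show the kubelet's per-volume retry backoff while the kube-root-ca.crt configmap does not exist yet: durationBeforeRetry is 500ms on the first attempt and 1s on the second. A rough sketch of such a doubling schedule; only the 500ms and 1s values come from this log, while the 2x factor and the cap are assumptions about a typical exponential backoff, not read from kubelet source:

    from datetime import timedelta

    def retry_delays(initial=timedelta(milliseconds=500), factor=2.0,
                     cap=timedelta(minutes=2), attempts=6):
        # Exponential backoff: 0.5s, 1s, 2s, ... bounded by `cap`.
        delay, out = initial, []
        for _ in range(attempts):
            out.append(min(delay, cap))
            delay = timedelta(seconds=delay.total_seconds() * factor)
        return out

    print([f"{d.total_seconds():g}s" for d in retry_delays()])
    # ['0.5s', '1s', '2s', '4s', '8s', '16s']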
Jan 28 01:24:35.719659 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 28 01:24:35.719735 kernel: audit: type=1334 audit(1769563475.715:448): prog-id=157 op=LOAD Jan 28 01:24:35.715000 audit: BPF prog-id=157 op=LOAD Jan 28 01:24:35.721157 kernel: audit: type=1334 audit(1769563475.716:449): prog-id=158 op=LOAD Jan 28 01:24:35.716000 audit: BPF prog-id=158 op=LOAD Jan 28 01:24:35.716000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.723679 kernel: audit: type=1300 audit(1769563475.716:449): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.728919 kernel: audit: type=1327 audit(1769563475.716:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.730325 kernel: audit: type=1334 audit(1769563475.716:450): prog-id=158 op=UNLOAD Jan 28 01:24:35.716000 audit: BPF prog-id=158 op=UNLOAD Jan 28 01:24:35.734114 kernel: audit: type=1300 audit(1769563475.716:450): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.716000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.738236 kernel: audit: type=1327 audit(1769563475.716:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.741615 kernel: audit: type=1334 audit(1769563475.716:451): prog-id=159 op=LOAD Jan 28 01:24:35.716000 audit: BPF prog-id=159 op=LOAD Jan 28 01:24:35.745553 kernel: audit: type=1300 audit(1769563475.716:451): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.716000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.749500 kernel: audit: type=1327 audit(1769563475.716:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.716000 audit: BPF prog-id=160 op=LOAD Jan 28 01:24:35.716000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.716000 audit: BPF prog-id=160 op=UNLOAD Jan 28 01:24:35.716000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.716000 audit: BPF prog-id=159 op=UNLOAD Jan 28 01:24:35.716000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.716000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.716000 audit: BPF prog-id=161 op=LOAD Jan 28 01:24:35.716000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4100 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:35.716000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239316662396261396537303530643031346339666434303464333938 Jan 28 01:24:35.766746 containerd[2559]: time="2026-01-28T01:24:35.766722802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vtx6m,Uid:3c1919ee-4bba-4af8-a87f-9cd5da4b63e0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"291fb9ba9e7050d014c9fd404d398e47cd82fbe276238aace693470789704ff7\"" Jan 28 01:24:35.768015 containerd[2559]: time="2026-01-28T01:24:35.767933657Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:24:35.947760 containerd[2559]: time="2026-01-28T01:24:35.947697252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bcxq,Uid:30203787-2af6-4e48-a9fb-b0d332bd1a32,Namespace:kube-system,Attempt:0,}" Jan 28 01:24:35.986154 containerd[2559]: time="2026-01-28T01:24:35.986123323Z" level=info msg="connecting to shim 951e380e1686980b1aabb838578c2ebb9362d017c0d6865e5e761c74f3892550" address="unix:///run/containerd/s/7035ef20d86c421a3e906451e17ed133cc50875c0674eb1259a572618f3c1e44" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:24:36.003095 systemd[1]: Started cri-containerd-951e380e1686980b1aabb838578c2ebb9362d017c0d6865e5e761c74f3892550.scope - libcontainer container 951e380e1686980b1aabb838578c2ebb9362d017c0d6865e5e761c74f3892550. Jan 28 01:24:36.009000 audit: BPF prog-id=162 op=LOAD Jan 28 01:24:36.010000 audit: BPF prog-id=163 op=LOAD Jan 28 01:24:36.010000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019c238 a2=98 a3=0 items=0 ppid=4145 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935316533383065313638363938306231616162623833383537386332 Jan 28 01:24:36.010000 audit: BPF prog-id=163 op=UNLOAD Jan 28 01:24:36.010000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4145 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935316533383065313638363938306231616162623833383537386332 Jan 28 01:24:36.010000 audit: BPF prog-id=164 op=LOAD Jan 28 01:24:36.010000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019c488 a2=98 a3=0 items=0 ppid=4145 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.010000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935316533383065313638363938306231616162623833383537386332 Jan 28 01:24:36.010000 audit: BPF prog-id=165 op=LOAD Jan 28 01:24:36.010000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00019c218 a2=98 a3=0 items=0 ppid=4145 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935316533383065313638363938306231616162623833383537386332 Jan 28 01:24:36.010000 audit: BPF prog-id=165 op=UNLOAD Jan 28 01:24:36.010000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4145 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935316533383065313638363938306231616162623833383537386332 Jan 28 01:24:36.010000 audit: BPF prog-id=164 op=UNLOAD Jan 28 01:24:36.010000 audit[4156]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4145 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935316533383065313638363938306231616162623833383537386332 Jan 28 01:24:36.010000 audit: BPF prog-id=166 op=LOAD Jan 28 01:24:36.010000 audit[4156]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019c6e8 a2=98 a3=0 items=0 ppid=4145 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935316533383065313638363938306231616162623833383537386332 Jan 28 01:24:36.023645 containerd[2559]: time="2026-01-28T01:24:36.023625401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4bcxq,Uid:30203787-2af6-4e48-a9fb-b0d332bd1a32,Namespace:kube-system,Attempt:0,} returns sandbox id \"951e380e1686980b1aabb838578c2ebb9362d017c0d6865e5e761c74f3892550\"" Jan 28 01:24:36.030485 containerd[2559]: time="2026-01-28T01:24:36.030455530Z" level=info msg="CreateContainer within sandbox \"951e380e1686980b1aabb838578c2ebb9362d017c0d6865e5e761c74f3892550\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:24:36.047211 containerd[2559]: time="2026-01-28T01:24:36.047186628Z" level=info msg="Container a5b386133b3ba50042d8ecab02f7b6cce16f2c4093f16aca50c45ea6469dd370: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:36.060371 containerd[2559]: time="2026-01-28T01:24:36.060348865Z" level=info msg="CreateContainer within sandbox \"951e380e1686980b1aabb838578c2ebb9362d017c0d6865e5e761c74f3892550\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a5b386133b3ba50042d8ecab02f7b6cce16f2c4093f16aca50c45ea6469dd370\"" Jan 28 01:24:36.060771 containerd[2559]: time="2026-01-28T01:24:36.060725145Z" level=info msg="StartContainer for \"a5b386133b3ba50042d8ecab02f7b6cce16f2c4093f16aca50c45ea6469dd370\"" Jan 28 01:24:36.062171 containerd[2559]: time="2026-01-28T01:24:36.062132589Z" level=info msg="connecting to shim a5b386133b3ba50042d8ecab02f7b6cce16f2c4093f16aca50c45ea6469dd370" address="unix:///run/containerd/s/7035ef20d86c421a3e906451e17ed133cc50875c0674eb1259a572618f3c1e44" protocol=ttrpc version=3 Jan 28 01:24:36.076015 systemd[1]: Started cri-containerd-a5b386133b3ba50042d8ecab02f7b6cce16f2c4093f16aca50c45ea6469dd370.scope - libcontainer container a5b386133b3ba50042d8ecab02f7b6cce16f2c4093f16aca50c45ea6469dd370. Jan 28 01:24:36.111000 audit: BPF prog-id=167 op=LOAD Jan 28 01:24:36.111000 audit[4181]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4145 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135623338363133336233626135303034326438656361623032663762 Jan 28 01:24:36.111000 audit: BPF prog-id=168 op=LOAD Jan 28 01:24:36.111000 audit[4181]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4145 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135623338363133336233626135303034326438656361623032663762 Jan 28 01:24:36.112000 audit: BPF prog-id=168 op=UNLOAD Jan 28 01:24:36.112000 audit[4181]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4145 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135623338363133336233626135303034326438656361623032663762 Jan 28 01:24:36.112000 audit: BPF prog-id=167 op=UNLOAD Jan 28 01:24:36.112000 audit[4181]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4145 pid=4181 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135623338363133336233626135303034326438656361623032663762 Jan 28 01:24:36.112000 audit: BPF prog-id=169 op=LOAD Jan 28 01:24:36.112000 audit[4181]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4145 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135623338363133336233626135303034326438656361623032663762 Jan 28 01:24:36.132882 containerd[2559]: time="2026-01-28T01:24:36.132486812Z" level=info msg="StartContainer for \"a5b386133b3ba50042d8ecab02f7b6cce16f2c4093f16aca50c45ea6469dd370\" returns successfully" Jan 28 01:24:36.320000 audit[4246]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=4246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.320000 audit[4245]: NETFILTER_CFG table=mangle:58 family=2 entries=1 op=nft_register_chain pid=4245 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.320000 audit[4245]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7aba1670 a2=0 a3=7fff7aba165c items=0 ppid=4194 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:24:36.320000 audit[4246]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeea8a4c70 a2=0 a3=7ffeea8a4c5c items=0 ppid=4194 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 01:24:36.322000 audit[4247]: NETFILTER_CFG table=nat:59 family=10 entries=1 op=nft_register_chain pid=4247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.322000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd44e240e0 a2=0 a3=7ffd44e240cc items=0 ppid=4194 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.322000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:24:36.323000 audit[4249]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=4249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.323000 audit[4250]: 
NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.323000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff66f9b00 a2=0 a3=7ffff66f9aec items=0 ppid=4194 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.323000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 01:24:36.323000 audit[4249]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7126bb80 a2=0 a3=7fff7126bb6c items=0 ppid=4194 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:24:36.324000 audit[4251]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=4251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.324000 audit[4251]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd4647b20 a2=0 a3=7ffcd4647b0c items=0 ppid=4194 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.324000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 01:24:36.427000 audit[4254]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4254 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.427000 audit[4254]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd1edbfd10 a2=0 a3=7ffd1edbfcfc items=0 ppid=4194 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:24:36.430000 audit[4256]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.430000 audit[4256]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc54e45490 a2=0 a3=7ffc54e4547c items=0 ppid=4194 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.430000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 28 01:24:36.433000 audit[4259]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.433000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeab8f7310 a2=0 
a3=7ffeab8f72fc items=0 ppid=4194 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 28 01:24:36.434000 audit[4260]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.434000 audit[4260]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff032bdb10 a2=0 a3=7fff032bdafc items=0 ppid=4194 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.434000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:24:36.436000 audit[4262]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.436000 audit[4262]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffdb667ee0 a2=0 a3=7fffdb667ecc items=0 ppid=4194 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:24:36.437000 audit[4263]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.437000 audit[4263]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe57605380 a2=0 a3=7ffe5760536c items=0 ppid=4194 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.437000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:24:36.439000 audit[4265]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.439000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc710fdf40 a2=0 a3=7ffc710fdf2c items=0 ppid=4194 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.439000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:24:36.442000 audit[4268]: NETFILTER_CFG table=filter:70 family=2 entries=1 
op=nft_register_rule pid=4268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.442000 audit[4268]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffefd9fa7f0 a2=0 a3=7ffefd9fa7dc items=0 ppid=4194 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:24:36.443000 audit[4269]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.443000 audit[4269]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff6ce42270 a2=0 a3=7fff6ce4225c items=0 ppid=4194 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:24:36.445000 audit[4271]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.445000 audit[4271]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe67c5de20 a2=0 a3=7ffe67c5de0c items=0 ppid=4194 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:24:36.445000 audit[4272]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4272 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.445000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3e993f30 a2=0 a3=7ffd3e993f1c items=0 ppid=4194 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:24:36.447000 audit[4274]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.447000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa6d2dc90 a2=0 a3=7fffa6d2dc7c items=0 ppid=4194 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.447000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 28 01:24:36.450000 audit[4277]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4277 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.450000 audit[4277]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd61c37d90 a2=0 a3=7ffd61c37d7c items=0 ppid=4194 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 28 01:24:36.453000 audit[4280]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4280 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.453000 audit[4280]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe2c31f270 a2=0 a3=7ffe2c31f25c items=0 ppid=4194 pid=4280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.453000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 28 01:24:36.454000 audit[4281]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4281 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.454000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0a82ed00 a2=0 a3=7fff0a82ecec items=0 ppid=4194 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:24:36.455000 audit[4283]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4283 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.455000 audit[4283]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff25551720 a2=0 a3=7fff2555170c items=0 ppid=4194 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:24:36.458000 audit[4286]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4286 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.458000 audit[4286]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffdc6a2d000 a2=0 a3=7ffdc6a2cfec items=0 ppid=4194 pid=4286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:24:36.459000 audit[4287]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4287 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.459000 audit[4287]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3b5280f0 a2=0 a3=7ffc3b5280dc items=0 ppid=4194 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:24:36.462000 audit[4289]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 01:24:36.462000 audit[4289]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffee6bfd1c0 a2=0 a3=7ffee6bfd1ac items=0 ppid=4194 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.462000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:24:36.526000 audit[4295]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:36.526000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffbd374a70 a2=0 a3=7fffbd374a5c items=0 ppid=4194 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:36.579000 audit[4295]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:36.579000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffbd374a70 a2=0 a3=7fffbd374a5c items=0 ppid=4194 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:36.581000 audit[4300]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4300 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.581000 audit[4300]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 
a0=3 a1=7fffe5bbd470 a2=0 a3=7fffe5bbd45c items=0 ppid=4194 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.581000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 01:24:36.584000 audit[4302]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4302 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.584000 audit[4302]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe5c62f8c0 a2=0 a3=7ffe5c62f8ac items=0 ppid=4194 pid=4302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.584000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 28 01:24:36.587000 audit[4305]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.587000 audit[4305]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd09675400 a2=0 a3=7ffd096753ec items=0 ppid=4194 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 28 01:24:36.588000 audit[4306]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.588000 audit[4306]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd218ff340 a2=0 a3=7ffd218ff32c items=0 ppid=4194 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 01:24:36.590000 audit[4308]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.590000 audit[4308]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe83950120 a2=0 a3=7ffe8395010c items=0 ppid=4194 pid=4308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 01:24:36.591000 
audit[4309]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4309 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.591000 audit[4309]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd725b1530 a2=0 a3=7ffd725b151c items=0 ppid=4194 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 01:24:36.593000 audit[4311]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.593000 audit[4311]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe7800b430 a2=0 a3=7ffe7800b41c items=0 ppid=4194 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.593000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:24:36.596000 audit[4314]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.596000 audit[4314]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd852e8a90 a2=0 a3=7ffd852e8a7c items=0 ppid=4194 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.596000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:24:36.597000 audit[4315]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.597000 audit[4315]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9136e930 a2=0 a3=7fff9136e91c items=0 ppid=4194 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 01:24:36.599000 audit[4317]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4317 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.599000 audit[4317]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff527210a0 a2=0 a3=7fff5272108c items=0 ppid=4194 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.599000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 01:24:36.600000 audit[4318]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4318 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.600000 audit[4318]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffebcb878d0 a2=0 a3=7ffebcb878bc items=0 ppid=4194 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 01:24:36.602000 audit[4320]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4320 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.602000 audit[4320]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8acffb60 a2=0 a3=7ffe8acffb4c items=0 ppid=4194 pid=4320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.602000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 28 01:24:36.605000 audit[4323]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4323 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.605000 audit[4323]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3c791640 a2=0 a3=7ffc3c79162c items=0 ppid=4194 pid=4323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.605000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 28 01:24:36.607000 audit[4326]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4326 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.607000 audit[4326]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc68b4b650 a2=0 a3=7ffc68b4b63c items=0 ppid=4194 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.607000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 28 01:24:36.608000 audit[4327]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.608000 
audit[4327]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc80fab0a0 a2=0 a3=7ffc80fab08c items=0 ppid=4194 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.608000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 01:24:36.610000 audit[4329]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.610000 audit[4329]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff074e44c0 a2=0 a3=7fff074e44ac items=0 ppid=4194 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:24:36.613000 audit[4332]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4332 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.613000 audit[4332]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2b89b5b0 a2=0 a3=7ffe2b89b59c items=0 ppid=4194 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 01:24:36.614000 audit[4333]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4333 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.614000 audit[4333]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1f9a03f0 a2=0 a3=7fff1f9a03dc items=0 ppid=4194 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.614000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 01:24:36.616000 audit[4335]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.616000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc4138cd20 a2=0 a3=7ffc4138cd0c items=0 ppid=4194 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 01:24:36.617000 audit[4336]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4336 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.617000 audit[4336]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe47377bb0 a2=0 a3=7ffe47377b9c items=0 ppid=4194 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.617000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 01:24:36.618000 audit[4338]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.618000 audit[4338]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdeebee070 a2=0 a3=7ffdeebee05c items=0 ppid=4194 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.618000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:24:36.621000 audit[4341]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4341 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 01:24:36.621000 audit[4341]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd842400d0 a2=0 a3=7ffd842400bc items=0 ppid=4194 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 01:24:36.623000 audit[4343]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4343 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:24:36.623000 audit[4343]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe561f9e20 a2=0 a3=7ffe561f9e0c items=0 ppid=4194 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.623000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:36.624000 audit[4343]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4343 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 01:24:36.624000 audit[4343]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe561f9e20 a2=0 a3=7ffe561f9e0c items=0 ppid=4194 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:36.624000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:36.707738 kubelet[4040]: I0128 01:24:36.707466 4040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4bcxq" podStartSLOduration=2.707451206 podStartE2EDuration="2.707451206s" podCreationTimestamp="2026-01-28 01:24:34 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:24:36.707346395 +0000 UTC m=+8.150438277" watchObservedRunningTime="2026-01-28 01:24:36.707451206 +0000 UTC m=+8.150543088" Jan 28 01:24:37.224953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685102248.mount: Deactivated successfully. Jan 28 01:24:37.612721 containerd[2559]: time="2026-01-28T01:24:37.612632630Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:37.615344 containerd[2559]: time="2026-01-28T01:24:37.615236675Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=1359" Jan 28 01:24:37.618548 containerd[2559]: time="2026-01-28T01:24:37.618519434Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:37.621880 containerd[2559]: time="2026-01-28T01:24:37.621798119Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:37.622469 containerd[2559]: time="2026-01-28T01:24:37.622195276Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.854186737s" Jan 28 01:24:37.622469 containerd[2559]: time="2026-01-28T01:24:37.622220985Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 01:24:37.629078 containerd[2559]: time="2026-01-28T01:24:37.629056804Z" level=info msg="CreateContainer within sandbox \"291fb9ba9e7050d014c9fd404d398e47cd82fbe276238aace693470789704ff7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:24:37.648986 containerd[2559]: time="2026-01-28T01:24:37.648936715Z" level=info msg="Container 40c66a39d775ec943ced2d5f6cd99ca6c19f90ee5ef613319e2d0cc1c03facd5: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:37.663825 containerd[2559]: time="2026-01-28T01:24:37.663801535Z" level=info msg="CreateContainer within sandbox \"291fb9ba9e7050d014c9fd404d398e47cd82fbe276238aace693470789704ff7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"40c66a39d775ec943ced2d5f6cd99ca6c19f90ee5ef613319e2d0cc1c03facd5\"" Jan 28 01:24:37.664421 containerd[2559]: time="2026-01-28T01:24:37.664367822Z" level=info msg="StartContainer for \"40c66a39d775ec943ced2d5f6cd99ca6c19f90ee5ef613319e2d0cc1c03facd5\"" Jan 28 01:24:37.665565 containerd[2559]: time="2026-01-28T01:24:37.665381247Z" level=info msg="connecting to shim 40c66a39d775ec943ced2d5f6cd99ca6c19f90ee5ef613319e2d0cc1c03facd5" address="unix:///run/containerd/s/f13f52b7026497d679031292b00aa5280c4121f531f9b1c36d631a7e0a1f3ea8" protocol=ttrpc version=3 Jan 28 01:24:37.682024 systemd[1]: Started cri-containerd-40c66a39d775ec943ced2d5f6cd99ca6c19f90ee5ef613319e2d0cc1c03facd5.scope - libcontainer container 40c66a39d775ec943ced2d5f6cd99ca6c19f90ee5ef613319e2d0cc1c03facd5. 
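The NETFILTER_CFG/SYSCALL audit records above capture each kube-proxy rule change, but the command that issued it is recorded only as a hex-encoded, NUL-separated PROCTITLE field. A minimal Python sketch for recovering the original argv from any of the proctitle= values shown above (the sample value is copied verbatim from the KUBE-PROXY-CANARY record):

    # Decode an audit PROCTITLE value: the process argv, hex-encoded,
    # with individual arguments separated by NUL bytes.
    def decode_proctitle(hex_value: str) -> str:
        argv = bytes.fromhex(hex_value).split(b"\x00")
        return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

    print(decode_proctitle(
        "69707461626C6573002D770035002D4E00"
        "4B5542452D50524F58592D43414E415259002D74006D616E676C65"
    ))
    # -> iptables -w 5 -N KUBE-PROXY-CANARY -t mangle

Note that the longer rule-insertion records are cut off mid-argument (for example the -j KUBE-PROXY-FIREWALL target appears as KUBE-PROXY, KUBE-PROX or KUBE-PRO) because the kernel caps the length of the recorded command line; the truncation is in the audit record itself, not in this transcript.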
Jan 28 01:24:37.688000 audit: BPF prog-id=170 op=LOAD Jan 28 01:24:37.688000 audit: BPF prog-id=171 op=LOAD Jan 28 01:24:37.688000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4100 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:37.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430633636613339643737356563393433636564326435663663643939 Jan 28 01:24:37.688000 audit: BPF prog-id=171 op=UNLOAD Jan 28 01:24:37.688000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4100 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:37.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430633636613339643737356563393433636564326435663663643939 Jan 28 01:24:37.688000 audit: BPF prog-id=172 op=LOAD Jan 28 01:24:37.688000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4100 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:37.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430633636613339643737356563393433636564326435663663643939 Jan 28 01:24:37.688000 audit: BPF prog-id=173 op=LOAD Jan 28 01:24:37.688000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4100 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:37.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430633636613339643737356563393433636564326435663663643939 Jan 28 01:24:37.688000 audit: BPF prog-id=173 op=UNLOAD Jan 28 01:24:37.688000 audit[4352]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4100 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:37.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430633636613339643737356563393433636564326435663663643939 Jan 28 01:24:37.688000 audit: BPF prog-id=172 op=UNLOAD Jan 28 01:24:37.688000 audit[4352]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4100 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:37.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430633636613339643737356563393433636564326435663663643939 Jan 28 01:24:37.688000 audit: BPF prog-id=174 op=LOAD Jan 28 01:24:37.688000 audit[4352]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4100 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:37.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430633636613339643737356563393433636564326435663663643939 Jan 28 01:24:37.707904 containerd[2559]: time="2026-01-28T01:24:37.707834894Z" level=info msg="StartContainer for \"40c66a39d775ec943ced2d5f6cd99ca6c19f90ee5ef613319e2d0cc1c03facd5\" returns successfully" Jan 28 01:24:38.726529 kubelet[4040]: I0128 01:24:38.725660 4040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-vtx6m" podStartSLOduration=1.8703064 podStartE2EDuration="3.725642431s" podCreationTimestamp="2026-01-28 01:24:35 +0000 UTC" firstStartedPulling="2026-01-28 01:24:35.767677877 +0000 UTC m=+7.210769746" lastFinishedPulling="2026-01-28 01:24:37.623013902 +0000 UTC m=+9.066105777" observedRunningTime="2026-01-28 01:24:38.724783343 +0000 UTC m=+10.167875226" watchObservedRunningTime="2026-01-28 01:24:38.725642431 +0000 UTC m=+10.168734313" Jan 28 01:24:43.109846 sudo[3034]: pam_unix(sudo:session): session closed for user root Jan 28 01:24:43.117088 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 28 01:24:43.117164 kernel: audit: type=1106 audit(1769563483.109:528): pid=3034 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:24:43.109000 audit[3034]: USER_END pid=3034 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:24:43.116000 audit[3034]: CRED_DISP pid=3034 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 01:24:43.123882 kernel: audit: type=1104 audit(1769563483.116:529): pid=3034 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 01:24:43.216652 sshd[3033]: Connection closed by 10.200.16.10 port 32992 Jan 28 01:24:43.216993 sshd-session[3029]: pam_unix(sshd:session): session closed for user core Jan 28 01:24:43.219000 audit[3029]: USER_END pid=3029 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:43.224010 systemd[1]: sshd@6-10.200.8.14:22-10.200.16.10:32992.service: Deactivated successfully. Jan 28 01:24:43.229292 kernel: audit: type=1106 audit(1769563483.219:530): pid=3029 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:43.229208 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:24:43.229650 systemd[1]: session-10.scope: Consumed 3.447s CPU time, 230.1M memory peak. Jan 28 01:24:43.219000 audit[3029]: CRED_DISP pid=3029 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:43.237458 systemd-logind[2535]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:24:43.237976 kernel: audit: type=1104 audit(1769563483.219:531): pid=3029 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:24:43.238409 systemd-logind[2535]: Removed session 10. Jan 28 01:24:43.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.14:22-10.200.16.10:32992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:24:43.245951 kernel: audit: type=1131 audit(1769563483.223:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.14:22-10.200.16.10:32992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:24:44.106344 kernel: audit: type=1325 audit(1769563484.100:533): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:44.100000 audit[4435]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:44.100000 audit[4435]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcf48d36f0 a2=0 a3=7ffcf48d36dc items=0 ppid=4194 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:44.116488 kernel: audit: type=1300 audit(1769563484.100:533): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcf48d36f0 a2=0 a3=7ffcf48d36dc items=0 ppid=4194 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:44.100000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:44.128025 kernel: audit: type=1327 audit(1769563484.100:533): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:44.128085 kernel: audit: type=1325 audit(1769563484.107:534): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:44.107000 audit[4435]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4435 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:44.135839 kernel: audit: type=1300 audit(1769563484.107:534): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf48d36f0 a2=0 a3=0 items=0 ppid=4194 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:44.107000 audit[4435]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf48d36f0 a2=0 a3=0 items=0 ppid=4194 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:44.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:45.161000 audit[4437]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:45.161000 audit[4437]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffece9f9f00 a2=0 a3=7ffece9f9eec items=0 ppid=4194 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:45.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:45.167000 audit[4437]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:45.167000 
audit[4437]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffece9f9f00 a2=0 a3=0 items=0 ppid=4194 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:45.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:46.178000 audit[4439]: NETFILTER_CFG table=filter:112 family=2 entries=18 op=nft_register_rule pid=4439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:46.178000 audit[4439]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd4514f230 a2=0 a3=7ffd4514f21c items=0 ppid=4194 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:46.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:46.182000 audit[4439]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:46.182000 audit[4439]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd4514f230 a2=0 a3=0 items=0 ppid=4194 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:46.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:47.344000 audit[4441]: NETFILTER_CFG table=filter:114 family=2 entries=21 op=nft_register_rule pid=4441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:47.344000 audit[4441]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffed19c7490 a2=0 a3=7ffed19c747c items=0 ppid=4194 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:47.349000 audit[4441]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:47.349000 audit[4441]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed19c7490 a2=0 a3=0 items=0 ppid=4194 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:47.391211 systemd[1]: Created slice kubepods-besteffort-pod32033562_e644_4fc2_8488_bb317b46722d.slice - libcontainer container kubepods-besteffort-pod32033562_e644_4fc2_8488_bb317b46722d.slice. 
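The iptables-restore records above are kube-proxy's periodic full syncs (their PROCTITLE decodes to iptables-restore -w 5 --noflush --counters); the IPv4 filter-table entry count grows from 8 to 15, 16, 18 and finally 21 as Service rules are programmed. A small sketch, assuming the journal has been dumped to a text file (filename hypothetical), that tallies these NETFILTER_CFG records per table and address family:

    import re
    from collections import Counter

    # Matches the audit fields exactly as they appear above, e.g.
    #   NETFILTER_CFG table=filter:114 family=2 entries=21 op=nft_register_rule
    NETFILTER_RE = re.compile(
        r"NETFILTER_CFG table=(?P<table>\w+):\d+ family=(?P<family>\d+) "
        r"entries=(?P<entries>\d+) op=(?P<op>\w+)"
    )

    totals = Counter()
    with open("node.journal.txt") as journal:   # hypothetical dump of this log
        for line in journal:
            for m in NETFILTER_RE.finditer(line):
                family = "ipv4" if m["family"] == "2" else "ipv6"  # AF_INET=2, AF_INET6=10
                totals[(family, m["table"], m["op"])] += int(m["entries"])

    for (family, table, op), entries in sorted(totals.items()):
        print(f"{family:5} {table:8} {op:20} {entries:5} entries")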
Jan 28 01:24:47.459214 kubelet[4040]: I0128 01:24:47.459169 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/32033562-e644-4fc2-8488-bb317b46722d-typha-certs\") pod \"calico-typha-6475496ddc-xl7bj\" (UID: \"32033562-e644-4fc2-8488-bb317b46722d\") " pod="calico-system/calico-typha-6475496ddc-xl7bj" Jan 28 01:24:47.459214 kubelet[4040]: I0128 01:24:47.459205 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpsnx\" (UniqueName: \"kubernetes.io/projected/32033562-e644-4fc2-8488-bb317b46722d-kube-api-access-jpsnx\") pod \"calico-typha-6475496ddc-xl7bj\" (UID: \"32033562-e644-4fc2-8488-bb317b46722d\") " pod="calico-system/calico-typha-6475496ddc-xl7bj" Jan 28 01:24:47.459471 kubelet[4040]: I0128 01:24:47.459234 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32033562-e644-4fc2-8488-bb317b46722d-tigera-ca-bundle\") pod \"calico-typha-6475496ddc-xl7bj\" (UID: \"32033562-e644-4fc2-8488-bb317b46722d\") " pod="calico-system/calico-typha-6475496ddc-xl7bj" Jan 28 01:24:47.595358 systemd[1]: Created slice kubepods-besteffort-pod153750a3_0660_4484_a196_49ed24614fa4.slice - libcontainer container kubepods-besteffort-pod153750a3_0660_4484_a196_49ed24614fa4.slice. Jan 28 01:24:47.661146 kubelet[4040]: I0128 01:24:47.661105 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/153750a3-0660-4484-a196-49ed24614fa4-node-certs\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661146 kubelet[4040]: I0128 01:24:47.661136 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/153750a3-0660-4484-a196-49ed24614fa4-cni-bin-dir\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661344 kubelet[4040]: I0128 01:24:47.661153 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/153750a3-0660-4484-a196-49ed24614fa4-policysync\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661344 kubelet[4040]: I0128 01:24:47.661167 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/153750a3-0660-4484-a196-49ed24614fa4-xtables-lock\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661344 kubelet[4040]: I0128 01:24:47.661184 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/153750a3-0660-4484-a196-49ed24614fa4-tigera-ca-bundle\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661344 kubelet[4040]: I0128 01:24:47.661199 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/153750a3-0660-4484-a196-49ed24614fa4-cni-log-dir\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661344 kubelet[4040]: I0128 01:24:47.661214 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv5cs\" (UniqueName: \"kubernetes.io/projected/153750a3-0660-4484-a196-49ed24614fa4-kube-api-access-jv5cs\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661432 kubelet[4040]: I0128 01:24:47.661230 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/153750a3-0660-4484-a196-49ed24614fa4-cni-net-dir\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661432 kubelet[4040]: I0128 01:24:47.661244 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/153750a3-0660-4484-a196-49ed24614fa4-flexvol-driver-host\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661432 kubelet[4040]: I0128 01:24:47.661258 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/153750a3-0660-4484-a196-49ed24614fa4-var-lib-calico\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661432 kubelet[4040]: I0128 01:24:47.661273 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/153750a3-0660-4484-a196-49ed24614fa4-var-run-calico\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.661432 kubelet[4040]: I0128 01:24:47.661288 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/153750a3-0660-4484-a196-49ed24614fa4-lib-modules\") pod \"calico-node-z5vk2\" (UID: \"153750a3-0660-4484-a196-49ed24614fa4\") " pod="calico-system/calico-node-z5vk2" Jan 28 01:24:47.699732 containerd[2559]: time="2026-01-28T01:24:47.699698670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6475496ddc-xl7bj,Uid:32033562-e644-4fc2-8488-bb317b46722d,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:47.742011 containerd[2559]: time="2026-01-28T01:24:47.741966675Z" level=info msg="connecting to shim 04016c5258626e112ce83b6055bf595cfe53d78cec8a94c568d7e18e7598ad16" address="unix:///run/containerd/s/e3c907ddab2c704a568dc34bef305614ac697f81ea6eb4d0b1d8a035eef3ae9d" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:24:47.769311 systemd[1]: Started cri-containerd-04016c5258626e112ce83b6055bf595cfe53d78cec8a94c568d7e18e7598ad16.scope - libcontainer container 04016c5258626e112ce83b6055bf595cfe53d78cec8a94c568d7e18e7598ad16. 
Jan 28 01:24:47.781193 kubelet[4040]: E0128 01:24:47.781158 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:24:47.790000 audit: BPF prog-id=175 op=LOAD Jan 28 01:24:47.792738 kubelet[4040]: E0128 01:24:47.792646 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.792738 kubelet[4040]: W0128 01:24:47.792667 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.792738 kubelet[4040]: E0128 01:24:47.792683 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.792000 audit: BPF prog-id=176 op=LOAD Jan 28 01:24:47.792000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4452 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303136633532353836323665313132636538336236303535626635 Jan 28 01:24:47.792000 audit: BPF prog-id=176 op=UNLOAD Jan 28 01:24:47.792000 audit[4462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303136633532353836323665313132636538336236303535626635 Jan 28 01:24:47.792000 audit: BPF prog-id=177 op=LOAD Jan 28 01:24:47.792000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4452 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303136633532353836323665313132636538336236303535626635 Jan 28 01:24:47.792000 audit: BPF prog-id=178 op=LOAD Jan 28 01:24:47.792000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4452 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:24:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303136633532353836323665313132636538336236303535626635 Jan 28 01:24:47.792000 audit: BPF prog-id=178 op=UNLOAD Jan 28 01:24:47.792000 audit[4462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303136633532353836323665313132636538336236303535626635 Jan 28 01:24:47.792000 audit: BPF prog-id=177 op=UNLOAD Jan 28 01:24:47.792000 audit[4462]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303136633532353836323665313132636538336236303535626635 Jan 28 01:24:47.792000 audit: BPF prog-id=179 op=LOAD Jan 28 01:24:47.792000 audit[4462]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4452 pid=4462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034303136633532353836323665313132636538336236303535626635 Jan 28 01:24:47.829269 containerd[2559]: time="2026-01-28T01:24:47.829226371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6475496ddc-xl7bj,Uid:32033562-e644-4fc2-8488-bb317b46722d,Namespace:calico-system,Attempt:0,} returns sandbox id \"04016c5258626e112ce83b6055bf595cfe53d78cec8a94c568d7e18e7598ad16\"" Jan 28 01:24:47.830355 containerd[2559]: time="2026-01-28T01:24:47.830334992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 01:24:47.858783 kubelet[4040]: E0128 01:24:47.858730 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.858942 kubelet[4040]: W0128 01:24:47.858928 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.859044 kubelet[4040]: E0128 01:24:47.859006 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.859214 kubelet[4040]: E0128 01:24:47.859193 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.859287 kubelet[4040]: W0128 01:24:47.859259 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.859287 kubelet[4040]: E0128 01:24:47.859270 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.859515 kubelet[4040]: E0128 01:24:47.859472 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.859515 kubelet[4040]: W0128 01:24:47.859481 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.859515 kubelet[4040]: E0128 01:24:47.859489 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.859756 kubelet[4040]: E0128 01:24:47.859738 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.859830 kubelet[4040]: W0128 01:24:47.859746 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.859830 kubelet[4040]: E0128 01:24:47.859796 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.860403 kubelet[4040]: E0128 01:24:47.860036 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.860403 kubelet[4040]: W0128 01:24:47.860047 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.860537 kubelet[4040]: E0128 01:24:47.860056 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.860737 kubelet[4040]: E0128 01:24:47.860718 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.860807 kubelet[4040]: W0128 01:24:47.860775 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.860807 kubelet[4040]: E0128 01:24:47.860787 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.861006 kubelet[4040]: E0128 01:24:47.860972 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.861006 kubelet[4040]: W0128 01:24:47.860979 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.861006 kubelet[4040]: E0128 01:24:47.860989 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.861288 kubelet[4040]: E0128 01:24:47.861245 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.861288 kubelet[4040]: W0128 01:24:47.861256 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.861288 kubelet[4040]: E0128 01:24:47.861266 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.861595 kubelet[4040]: E0128 01:24:47.861545 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.861595 kubelet[4040]: W0128 01:24:47.861554 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.861595 kubelet[4040]: E0128 01:24:47.861563 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.861869 kubelet[4040]: E0128 01:24:47.861831 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.861869 kubelet[4040]: W0128 01:24:47.861841 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.861869 kubelet[4040]: E0128 01:24:47.861850 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.862122 kubelet[4040]: E0128 01:24:47.862087 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.862122 kubelet[4040]: W0128 01:24:47.862095 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.862249 kubelet[4040]: E0128 01:24:47.862199 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.862400 kubelet[4040]: E0128 01:24:47.862389 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.862400 kubelet[4040]: W0128 01:24:47.862398 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.862466 kubelet[4040]: E0128 01:24:47.862409 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.862544 kubelet[4040]: E0128 01:24:47.862533 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.862544 kubelet[4040]: W0128 01:24:47.862541 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.862597 kubelet[4040]: E0128 01:24:47.862549 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.862650 kubelet[4040]: E0128 01:24:47.862641 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.862650 kubelet[4040]: W0128 01:24:47.862648 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.862698 kubelet[4040]: E0128 01:24:47.862654 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.862756 kubelet[4040]: E0128 01:24:47.862745 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.862756 kubelet[4040]: W0128 01:24:47.862752 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.862806 kubelet[4040]: E0128 01:24:47.862758 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.862896 kubelet[4040]: E0128 01:24:47.862882 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.862896 kubelet[4040]: W0128 01:24:47.862893 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.863011 kubelet[4040]: E0128 01:24:47.862901 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.863056 kubelet[4040]: E0128 01:24:47.863016 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.863056 kubelet[4040]: W0128 01:24:47.863021 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.863056 kubelet[4040]: E0128 01:24:47.863028 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.863200 kubelet[4040]: E0128 01:24:47.863114 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.863200 kubelet[4040]: W0128 01:24:47.863119 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.863200 kubelet[4040]: E0128 01:24:47.863125 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.863264 kubelet[4040]: E0128 01:24:47.863213 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.863264 kubelet[4040]: W0128 01:24:47.863217 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.863264 kubelet[4040]: E0128 01:24:47.863223 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.863326 kubelet[4040]: E0128 01:24:47.863319 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.863326 kubelet[4040]: W0128 01:24:47.863324 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.863376 kubelet[4040]: E0128 01:24:47.863329 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.863492 kubelet[4040]: E0128 01:24:47.863476 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.863492 kubelet[4040]: W0128 01:24:47.863484 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.863492 kubelet[4040]: E0128 01:24:47.863490 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.863573 kubelet[4040]: I0128 01:24:47.863513 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9af8dd1-e2bd-462a-8a21-d0c27cf0950b-kubelet-dir\") pod \"csi-node-driver-wlbng\" (UID: \"d9af8dd1-e2bd-462a-8a21-d0c27cf0950b\") " pod="calico-system/csi-node-driver-wlbng" Jan 28 01:24:47.863641 kubelet[4040]: E0128 01:24:47.863624 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.863641 kubelet[4040]: W0128 01:24:47.863631 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.863641 kubelet[4040]: E0128 01:24:47.863637 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.863747 kubelet[4040]: I0128 01:24:47.863653 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d9af8dd1-e2bd-462a-8a21-d0c27cf0950b-socket-dir\") pod \"csi-node-driver-wlbng\" (UID: \"d9af8dd1-e2bd-462a-8a21-d0c27cf0950b\") " pod="calico-system/csi-node-driver-wlbng" Jan 28 01:24:47.863804 kubelet[4040]: E0128 01:24:47.863791 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.863804 kubelet[4040]: W0128 01:24:47.863801 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.863904 kubelet[4040]: E0128 01:24:47.863810 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.863944 kubelet[4040]: E0128 01:24:47.863939 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.863977 kubelet[4040]: W0128 01:24:47.863945 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.863977 kubelet[4040]: E0128 01:24:47.863952 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.864094 kubelet[4040]: E0128 01:24:47.864088 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.864125 kubelet[4040]: W0128 01:24:47.864121 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.864149 kubelet[4040]: E0128 01:24:47.864144 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.864188 kubelet[4040]: I0128 01:24:47.864180 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d9af8dd1-e2bd-462a-8a21-d0c27cf0950b-varrun\") pod \"csi-node-driver-wlbng\" (UID: \"d9af8dd1-e2bd-462a-8a21-d0c27cf0950b\") " pod="calico-system/csi-node-driver-wlbng" Jan 28 01:24:47.864282 kubelet[4040]: E0128 01:24:47.864275 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.864339 kubelet[4040]: W0128 01:24:47.864307 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.864339 kubelet[4040]: E0128 01:24:47.864313 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.864412 kubelet[4040]: E0128 01:24:47.864401 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.864412 kubelet[4040]: W0128 01:24:47.864408 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.864493 kubelet[4040]: E0128 01:24:47.864414 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.864516 kubelet[4040]: E0128 01:24:47.864508 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.864516 kubelet[4040]: W0128 01:24:47.864512 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.864605 kubelet[4040]: E0128 01:24:47.864518 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.864605 kubelet[4040]: I0128 01:24:47.864535 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh64h\" (UniqueName: \"kubernetes.io/projected/d9af8dd1-e2bd-462a-8a21-d0c27cf0950b-kube-api-access-gh64h\") pod \"csi-node-driver-wlbng\" (UID: \"d9af8dd1-e2bd-462a-8a21-d0c27cf0950b\") " pod="calico-system/csi-node-driver-wlbng" Jan 28 01:24:47.864661 kubelet[4040]: E0128 01:24:47.864650 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.864661 kubelet[4040]: W0128 01:24:47.864659 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.864725 kubelet[4040]: E0128 01:24:47.864666 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.864762 kubelet[4040]: E0128 01:24:47.864752 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.864762 kubelet[4040]: W0128 01:24:47.864759 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.864854 kubelet[4040]: E0128 01:24:47.864765 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.864854 kubelet[4040]: E0128 01:24:47.864850 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.864854 kubelet[4040]: W0128 01:24:47.864854 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.864854 kubelet[4040]: E0128 01:24:47.864879 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.864854 kubelet[4040]: I0128 01:24:47.864899 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d9af8dd1-e2bd-462a-8a21-d0c27cf0950b-registration-dir\") pod \"csi-node-driver-wlbng\" (UID: \"d9af8dd1-e2bd-462a-8a21-d0c27cf0950b\") " pod="calico-system/csi-node-driver-wlbng" Jan 28 01:24:47.865049 kubelet[4040]: E0128 01:24:47.865036 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.865049 kubelet[4040]: W0128 01:24:47.865044 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.865099 kubelet[4040]: E0128 01:24:47.865051 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.865142 kubelet[4040]: E0128 01:24:47.865132 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.865142 kubelet[4040]: W0128 01:24:47.865138 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.865195 kubelet[4040]: E0128 01:24:47.865144 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.865244 kubelet[4040]: E0128 01:24:47.865235 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.865244 kubelet[4040]: W0128 01:24:47.865241 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.865325 kubelet[4040]: E0128 01:24:47.865246 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.865349 kubelet[4040]: E0128 01:24:47.865327 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.865349 kubelet[4040]: W0128 01:24:47.865331 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.865349 kubelet[4040]: E0128 01:24:47.865337 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.902726 containerd[2559]: time="2026-01-28T01:24:47.902702555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z5vk2,Uid:153750a3-0660-4484-a196-49ed24614fa4,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:47.936148 containerd[2559]: time="2026-01-28T01:24:47.936114022Z" level=info msg="connecting to shim fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115" address="unix:///run/containerd/s/1083bb122c1d40d25c788e70eff679d8833144d730d866df2cf9e7924207b3f9" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:24:47.954058 systemd[1]: Started cri-containerd-fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115.scope - libcontainer container fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115. 
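The burst of driver-call errors above is the kubelet probing its FlexVolume plugin directory: on every probe it execs the driver binary with the single argument init and unmarshals stdout as JSON. Because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds has not been installed yet (Calico's flexvol-driver init container is expected to drop it onto the host through the flexvol-driver-host mount listed earlier), stdout is empty and the parse fails with "unexpected end of JSON input". A hedged sketch of the minimal init response the standard FlexVolume convention expects, not Calico's actual driver:

    #!/usr/bin/env python3
    # Smallest FlexVolume entry point that would satisfy the kubelet's "init" probe.
    # An empty stdout is exactly the "unexpected end of JSON input" error above.
    import json
    import sys

    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # mount/unmount and the other verbs would be handled here by a real driver.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())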
Jan 28 01:24:47.960000 audit: BPF prog-id=180 op=LOAD Jan 28 01:24:47.960000 audit: BPF prog-id=181 op=LOAD Jan 28 01:24:47.960000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4548 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663383638666166396637316330373266373135346136336537396561 Jan 28 01:24:47.961000 audit: BPF prog-id=181 op=UNLOAD Jan 28 01:24:47.961000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663383638666166396637316330373266373135346136336537396561 Jan 28 01:24:47.961000 audit: BPF prog-id=182 op=LOAD Jan 28 01:24:47.961000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4548 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663383638666166396637316330373266373135346136336537396561 Jan 28 01:24:47.961000 audit: BPF prog-id=183 op=LOAD Jan 28 01:24:47.961000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4548 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663383638666166396637316330373266373135346136336537396561 Jan 28 01:24:47.961000 audit: BPF prog-id=183 op=UNLOAD Jan 28 01:24:47.961000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663383638666166396637316330373266373135346136336537396561 Jan 28 01:24:47.961000 audit: BPF prog-id=182 op=UNLOAD Jan 28 01:24:47.961000 audit[4559]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663383638666166396637316330373266373135346136336537396561 Jan 28 01:24:47.961000 audit: BPF prog-id=184 op=LOAD Jan 28 01:24:47.961000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4548 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:47.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663383638666166396637316330373266373135346136336537396561 Jan 28 01:24:47.966544 kubelet[4040]: E0128 01:24:47.965822 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.966544 kubelet[4040]: W0128 01:24:47.965838 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.966544 kubelet[4040]: E0128 01:24:47.965851 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.966544 kubelet[4040]: E0128 01:24:47.966080 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.966544 kubelet[4040]: W0128 01:24:47.966086 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.966544 kubelet[4040]: E0128 01:24:47.966095 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.966544 kubelet[4040]: E0128 01:24:47.966196 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.966544 kubelet[4040]: W0128 01:24:47.966200 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.966544 kubelet[4040]: E0128 01:24:47.966207 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.966544 kubelet[4040]: E0128 01:24:47.966295 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.966781 kubelet[4040]: W0128 01:24:47.966299 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.966781 kubelet[4040]: E0128 01:24:47.966305 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.966781 kubelet[4040]: E0128 01:24:47.966386 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.966781 kubelet[4040]: W0128 01:24:47.966390 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.966781 kubelet[4040]: E0128 01:24:47.966395 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.967169 kubelet[4040]: E0128 01:24:47.967041 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.967169 kubelet[4040]: W0128 01:24:47.967052 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.967169 kubelet[4040]: E0128 01:24:47.967062 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.967487 kubelet[4040]: E0128 01:24:47.967290 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.967487 kubelet[4040]: W0128 01:24:47.967297 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.967487 kubelet[4040]: E0128 01:24:47.967305 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.967487 kubelet[4040]: E0128 01:24:47.967417 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.967487 kubelet[4040]: W0128 01:24:47.967422 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.967487 kubelet[4040]: E0128 01:24:47.967428 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.967773 kubelet[4040]: E0128 01:24:47.967704 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.967773 kubelet[4040]: W0128 01:24:47.967711 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.967773 kubelet[4040]: E0128 01:24:47.967718 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.968179 kubelet[4040]: E0128 01:24:47.968166 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.968331 kubelet[4040]: W0128 01:24:47.968320 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.968378 kubelet[4040]: E0128 01:24:47.968371 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.968763 kubelet[4040]: E0128 01:24:47.968732 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.968763 kubelet[4040]: W0128 01:24:47.968743 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.968763 kubelet[4040]: E0128 01:24:47.968753 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.970270 kubelet[4040]: E0128 01:24:47.969939 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.970270 kubelet[4040]: W0128 01:24:47.969952 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.970270 kubelet[4040]: E0128 01:24:47.969971 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.970270 kubelet[4040]: E0128 01:24:47.970111 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.970270 kubelet[4040]: W0128 01:24:47.970116 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.970270 kubelet[4040]: E0128 01:24:47.970122 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.970270 kubelet[4040]: E0128 01:24:47.970247 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.970270 kubelet[4040]: W0128 01:24:47.970252 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.970270 kubelet[4040]: E0128 01:24:47.970258 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.970626 kubelet[4040]: E0128 01:24:47.970576 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.970626 kubelet[4040]: W0128 01:24:47.970584 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.970626 kubelet[4040]: E0128 01:24:47.970589 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.970823 kubelet[4040]: E0128 01:24:47.970818 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.970974 kubelet[4040]: W0128 01:24:47.970954 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.970974 kubelet[4040]: E0128 01:24:47.970963 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.971198 kubelet[4040]: E0128 01:24:47.971192 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.971250 kubelet[4040]: W0128 01:24:47.971245 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.971341 kubelet[4040]: E0128 01:24:47.971334 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.972343 kubelet[4040]: E0128 01:24:47.971945 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.973349 kubelet[4040]: W0128 01:24:47.973331 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.973422 kubelet[4040]: E0128 01:24:47.973414 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.973666 kubelet[4040]: E0128 01:24:47.973656 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.973899 kubelet[4040]: W0128 01:24:47.973743 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.973899 kubelet[4040]: E0128 01:24:47.973758 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.974429 kubelet[4040]: E0128 01:24:47.974364 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.974644 kubelet[4040]: W0128 01:24:47.974494 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.974644 kubelet[4040]: E0128 01:24:47.974510 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.974904 kubelet[4040]: E0128 01:24:47.974894 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.975248 kubelet[4040]: W0128 01:24:47.975236 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.975316 kubelet[4040]: E0128 01:24:47.975308 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.976556 kubelet[4040]: E0128 01:24:47.975931 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.976556 kubelet[4040]: W0128 01:24:47.975943 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.976556 kubelet[4040]: E0128 01:24:47.975954 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.976872 kubelet[4040]: E0128 01:24:47.976846 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.977067 kubelet[4040]: W0128 01:24:47.977051 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.977109 kubelet[4040]: E0128 01:24:47.977080 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.980161 kubelet[4040]: E0128 01:24:47.980148 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.980238 kubelet[4040]: W0128 01:24:47.980229 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.980282 kubelet[4040]: E0128 01:24:47.980275 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.980584 kubelet[4040]: E0128 01:24:47.980515 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.980679 kubelet[4040]: W0128 01:24:47.980661 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.980799 kubelet[4040]: E0128 01:24:47.980791 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:47.981106 kubelet[4040]: E0128 01:24:47.981085 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:47.981152 kubelet[4040]: W0128 01:24:47.981113 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:47.981152 kubelet[4040]: E0128 01:24:47.981126 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:47.988649 containerd[2559]: time="2026-01-28T01:24:47.988622057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z5vk2,Uid:153750a3-0660-4484-a196-49ed24614fa4,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115\"" Jan 28 01:24:48.358000 audit[4612]: NETFILTER_CFG table=filter:116 family=2 entries=22 op=nft_register_rule pid=4612 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:48.359976 kernel: kauditd_printk_skb: 63 callbacks suppressed Jan 28 01:24:48.360011 kernel: audit: type=1325 audit(1769563488.358:557): table=filter:116 family=2 entries=22 op=nft_register_rule pid=4612 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:48.364941 kernel: audit: type=1300 audit(1769563488.358:557): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffef171b610 a2=0 a3=7ffef171b5fc items=0 ppid=4194 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:48.358000 audit[4612]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffef171b610 a2=0 a3=7ffef171b5fc items=0 ppid=4194 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:48.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:48.367000 audit[4612]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4612 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:48.372111 kernel: audit: type=1327 audit(1769563488.358:557): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:48.372145 kernel: audit: type=1325 audit(1769563488.367:558): table=nat:117 family=2 entries=12 op=nft_register_rule pid=4612 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:48.367000 audit[4612]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef171b610 a2=0 a3=0 items=0 ppid=4194 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:48.378390 kernel: audit: type=1300 audit(1769563488.367:558): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef171b610 a2=0 a3=0 items=0 ppid=4194 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:48.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:48.382141 kernel: audit: type=1327 audit(1769563488.367:558): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:49.132076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234699327.mount: Deactivated successfully. 
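Note: the repeated kubelet FlexVolume errors above are one failure mode reported twice per probe: the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present on the node, so the "init" call captures no output, and decoding that empty output is what produces "unexpected end of JSON input". A minimal Go sketch of the sequence (the DriverStatus type and the bare "uds" command name are illustrative, not kubelet's exact code):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus loosely mirrors the JSON a FlexVolume driver is expected to
// print on stdout; the field names here are illustrative only.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// The driver binary is assumed missing, so the call fails and the
	// captured output stays empty.
	out, execErr := exec.Command("uds", "init").Output()
	fmt.Println("exec error:", execErr) // exec: "uds": executable file not found in $PATH

	// Decoding the empty output yields the error seen throughout the log.
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal error:", err) // unexpected end of JSON input
	}
}
```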
Jan 28 01:24:49.656581 kubelet[4040]: E0128 01:24:49.656510 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:24:50.085612 containerd[2559]: time="2026-01-28T01:24:50.085573051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:50.087884 containerd[2559]: time="2026-01-28T01:24:50.087728720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 28 01:24:50.089948 containerd[2559]: time="2026-01-28T01:24:50.089924324Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:50.093112 containerd[2559]: time="2026-01-28T01:24:50.093084259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:50.093484 containerd[2559]: time="2026-01-28T01:24:50.093464815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.263101018s" Jan 28 01:24:50.093558 containerd[2559]: time="2026-01-28T01:24:50.093546952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 01:24:50.095101 containerd[2559]: time="2026-01-28T01:24:50.095068414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 01:24:50.111270 containerd[2559]: time="2026-01-28T01:24:50.111244101Z" level=info msg="CreateContainer within sandbox \"04016c5258626e112ce83b6055bf595cfe53d78cec8a94c568d7e18e7598ad16\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 01:24:50.128426 containerd[2559]: time="2026-01-28T01:24:50.127973836Z" level=info msg="Container 4f0ccf33e5d72e907b7ec2a279251f22e5811bb924f09b92a19a179428ccffcd: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:50.146493 containerd[2559]: time="2026-01-28T01:24:50.146462568Z" level=info msg="CreateContainer within sandbox \"04016c5258626e112ce83b6055bf595cfe53d78cec8a94c568d7e18e7598ad16\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4f0ccf33e5d72e907b7ec2a279251f22e5811bb924f09b92a19a179428ccffcd\"" Jan 28 01:24:50.147057 containerd[2559]: time="2026-01-28T01:24:50.146978490Z" level=info msg="StartContainer for \"4f0ccf33e5d72e907b7ec2a279251f22e5811bb924f09b92a19a179428ccffcd\"" Jan 28 01:24:50.148375 containerd[2559]: time="2026-01-28T01:24:50.148338664Z" level=info msg="connecting to shim 4f0ccf33e5d72e907b7ec2a279251f22e5811bb924f09b92a19a179428ccffcd" address="unix:///run/containerd/s/e3c907ddab2c704a568dc34bef305614ac697f81ea6eb4d0b1d8a035eef3ae9d" protocol=ttrpc version=3 Jan 28 01:24:50.167029 systemd[1]: Started 
cri-containerd-4f0ccf33e5d72e907b7ec2a279251f22e5811bb924f09b92a19a179428ccffcd.scope - libcontainer container 4f0ccf33e5d72e907b7ec2a279251f22e5811bb924f09b92a19a179428ccffcd. Jan 28 01:24:50.178000 audit: BPF prog-id=185 op=LOAD Jan 28 01:24:50.178000 audit: BPF prog-id=186 op=LOAD Jan 28 01:24:50.182175 kernel: audit: type=1334 audit(1769563490.178:559): prog-id=185 op=LOAD Jan 28 01:24:50.182218 kernel: audit: type=1334 audit(1769563490.178:560): prog-id=186 op=LOAD Jan 28 01:24:50.178000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4452 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:50.190035 kernel: audit: type=1300 audit(1769563490.178:560): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4452 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:50.196969 kernel: audit: type=1327 audit(1769563490.178:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466306363663333653564373265393037623765633261323739323531 Jan 28 01:24:50.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466306363663333653564373265393037623765633261323739323531 Jan 28 01:24:50.178000 audit: BPF prog-id=186 op=UNLOAD Jan 28 01:24:50.178000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:50.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466306363663333653564373265393037623765633261323739323531 Jan 28 01:24:50.178000 audit: BPF prog-id=187 op=LOAD Jan 28 01:24:50.178000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4452 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:50.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466306363663333653564373265393037623765633261323739323531 Jan 28 01:24:50.178000 audit: BPF prog-id=188 op=LOAD Jan 28 01:24:50.178000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4452 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:24:50.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466306363663333653564373265393037623765633261323739323531 Jan 28 01:24:50.178000 audit: BPF prog-id=188 op=UNLOAD Jan 28 01:24:50.178000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:50.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466306363663333653564373265393037623765633261323739323531 Jan 28 01:24:50.178000 audit: BPF prog-id=187 op=UNLOAD Jan 28 01:24:50.178000 audit[4623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4452 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:50.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466306363663333653564373265393037623765633261323739323531 Jan 28 01:24:50.178000 audit: BPF prog-id=189 op=LOAD Jan 28 01:24:50.178000 audit[4623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4452 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:50.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466306363663333653564373265393037623765633261323739323531 Jan 28 01:24:50.219180 containerd[2559]: time="2026-01-28T01:24:50.218967692Z" level=info msg="StartContainer for \"4f0ccf33e5d72e907b7ec2a279251f22e5811bb924f09b92a19a179428ccffcd\" returns successfully" Jan 28 01:24:50.779440 kubelet[4040]: E0128 01:24:50.779409 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.779440 kubelet[4040]: W0128 01:24:50.779427 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.779955 kubelet[4040]: E0128 01:24:50.779445 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:50.779955 kubelet[4040]: E0128 01:24:50.779560 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.779955 kubelet[4040]: W0128 01:24:50.779565 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.779955 kubelet[4040]: E0128 01:24:50.779572 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.779955 kubelet[4040]: E0128 01:24:50.779655 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.779955 kubelet[4040]: W0128 01:24:50.779661 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.779955 kubelet[4040]: E0128 01:24:50.779667 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.779955 kubelet[4040]: E0128 01:24:50.779781 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.779955 kubelet[4040]: W0128 01:24:50.779786 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.779955 kubelet[4040]: E0128 01:24:50.779792 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.780233 kubelet[4040]: E0128 01:24:50.779885 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780233 kubelet[4040]: W0128 01:24:50.779890 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780233 kubelet[4040]: E0128 01:24:50.779895 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.780233 kubelet[4040]: E0128 01:24:50.779972 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780233 kubelet[4040]: W0128 01:24:50.779977 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780233 kubelet[4040]: E0128 01:24:50.779983 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:50.780233 kubelet[4040]: E0128 01:24:50.780056 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780233 kubelet[4040]: W0128 01:24:50.780060 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780233 kubelet[4040]: E0128 01:24:50.780066 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.780233 kubelet[4040]: E0128 01:24:50.780142 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780575 kubelet[4040]: W0128 01:24:50.780146 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780575 kubelet[4040]: E0128 01:24:50.780152 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.780575 kubelet[4040]: E0128 01:24:50.780234 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780575 kubelet[4040]: W0128 01:24:50.780238 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780575 kubelet[4040]: E0128 01:24:50.780243 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.780575 kubelet[4040]: E0128 01:24:50.780310 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780575 kubelet[4040]: W0128 01:24:50.780315 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780575 kubelet[4040]: E0128 01:24:50.780320 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.780575 kubelet[4040]: E0128 01:24:50.780392 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780575 kubelet[4040]: W0128 01:24:50.780397 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780831 kubelet[4040]: E0128 01:24:50.780402 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:50.780831 kubelet[4040]: E0128 01:24:50.780472 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780831 kubelet[4040]: W0128 01:24:50.780476 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780831 kubelet[4040]: E0128 01:24:50.780482 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.780831 kubelet[4040]: E0128 01:24:50.780555 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780831 kubelet[4040]: W0128 01:24:50.780559 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780831 kubelet[4040]: E0128 01:24:50.780564 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.780831 kubelet[4040]: E0128 01:24:50.780637 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780831 kubelet[4040]: W0128 01:24:50.780642 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780831 kubelet[4040]: E0128 01:24:50.780647 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.780976 kubelet[4040]: E0128 01:24:50.780718 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.780976 kubelet[4040]: W0128 01:24:50.780722 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.780976 kubelet[4040]: E0128 01:24:50.780727 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.784043 kubelet[4040]: E0128 01:24:50.784017 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.784043 kubelet[4040]: W0128 01:24:50.784036 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.784145 kubelet[4040]: E0128 01:24:50.784048 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:50.784252 kubelet[4040]: E0128 01:24:50.784221 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.784252 kubelet[4040]: W0128 01:24:50.784229 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.784252 kubelet[4040]: E0128 01:24:50.784237 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.784420 kubelet[4040]: E0128 01:24:50.784385 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.784420 kubelet[4040]: W0128 01:24:50.784395 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.784420 kubelet[4040]: E0128 01:24:50.784401 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.784689 kubelet[4040]: E0128 01:24:50.784541 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.784689 kubelet[4040]: W0128 01:24:50.784551 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.784689 kubelet[4040]: E0128 01:24:50.784558 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.784689 kubelet[4040]: E0128 01:24:50.784676 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.784689 kubelet[4040]: W0128 01:24:50.784681 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.784689 kubelet[4040]: E0128 01:24:50.784687 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.784825 kubelet[4040]: E0128 01:24:50.784787 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.784825 kubelet[4040]: W0128 01:24:50.784791 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.784825 kubelet[4040]: E0128 01:24:50.784797 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:50.784937 kubelet[4040]: E0128 01:24:50.784925 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.784937 kubelet[4040]: W0128 01:24:50.784930 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.784937 kubelet[4040]: E0128 01:24:50.784936 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.785303 kubelet[4040]: E0128 01:24:50.785287 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.785303 kubelet[4040]: W0128 01:24:50.785298 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.785385 kubelet[4040]: E0128 01:24:50.785308 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.785454 kubelet[4040]: E0128 01:24:50.785446 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.785492 kubelet[4040]: W0128 01:24:50.785478 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.785492 kubelet[4040]: E0128 01:24:50.785490 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.785618 kubelet[4040]: E0128 01:24:50.785609 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.785648 kubelet[4040]: W0128 01:24:50.785616 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.785648 kubelet[4040]: E0128 01:24:50.785631 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.785786 kubelet[4040]: E0128 01:24:50.785773 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.785823 kubelet[4040]: W0128 01:24:50.785811 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.785823 kubelet[4040]: E0128 01:24:50.785821 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:50.785952 kubelet[4040]: E0128 01:24:50.785944 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.785952 kubelet[4040]: W0128 01:24:50.785950 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.786999 kubelet[4040]: E0128 01:24:50.785956 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.786999 kubelet[4040]: E0128 01:24:50.786093 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.786999 kubelet[4040]: W0128 01:24:50.786104 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.786999 kubelet[4040]: E0128 01:24:50.786109 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.786999 kubelet[4040]: E0128 01:24:50.786274 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.786999 kubelet[4040]: W0128 01:24:50.786288 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.786999 kubelet[4040]: E0128 01:24:50.786294 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.786999 kubelet[4040]: E0128 01:24:50.786405 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.786999 kubelet[4040]: W0128 01:24:50.786421 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.786999 kubelet[4040]: E0128 01:24:50.786425 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.787137 kubelet[4040]: E0128 01:24:50.786563 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.787137 kubelet[4040]: W0128 01:24:50.786580 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.787137 kubelet[4040]: E0128 01:24:50.786591 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:24:50.787137 kubelet[4040]: E0128 01:24:50.786782 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.787137 kubelet[4040]: W0128 01:24:50.786786 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.787137 kubelet[4040]: E0128 01:24:50.786791 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:50.787297 kubelet[4040]: E0128 01:24:50.787272 4040 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:24:50.787323 kubelet[4040]: W0128 01:24:50.787296 4040 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:24:50.787323 kubelet[4040]: E0128 01:24:50.787305 4040 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:24:51.406666 containerd[2559]: time="2026-01-28T01:24:51.406627806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:51.408918 containerd[2559]: time="2026-01-28T01:24:51.408872975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 28 01:24:51.411448 containerd[2559]: time="2026-01-28T01:24:51.411407671Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:51.414921 containerd[2559]: time="2026-01-28T01:24:51.414879843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:51.415322 containerd[2559]: time="2026-01-28T01:24:51.415199327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.32009774s" Jan 28 01:24:51.415322 containerd[2559]: time="2026-01-28T01:24:51.415227259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 01:24:51.421620 containerd[2559]: time="2026-01-28T01:24:51.421587103Z" level=info msg="CreateContainer within sandbox \"fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:24:51.441355 containerd[2559]: time="2026-01-28T01:24:51.441181809Z" level=info msg="Container cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098: 
CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:51.456350 containerd[2559]: time="2026-01-28T01:24:51.456323578Z" level=info msg="CreateContainer within sandbox \"fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098\"" Jan 28 01:24:51.456774 containerd[2559]: time="2026-01-28T01:24:51.456751903Z" level=info msg="StartContainer for \"cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098\"" Jan 28 01:24:51.458133 containerd[2559]: time="2026-01-28T01:24:51.458107812Z" level=info msg="connecting to shim cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098" address="unix:///run/containerd/s/1083bb122c1d40d25c788e70eff679d8833144d730d866df2cf9e7924207b3f9" protocol=ttrpc version=3 Jan 28 01:24:51.481041 systemd[1]: Started cri-containerd-cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098.scope - libcontainer container cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098. Jan 28 01:24:51.511000 audit: BPF prog-id=190 op=LOAD Jan 28 01:24:51.511000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4548 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:51.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362366439333064343536383535366566616161643734626662333032 Jan 28 01:24:51.511000 audit: BPF prog-id=191 op=LOAD Jan 28 01:24:51.511000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4548 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:51.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362366439333064343536383535366566616161643734626662333032 Jan 28 01:24:51.511000 audit: BPF prog-id=191 op=UNLOAD Jan 28 01:24:51.511000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:51.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362366439333064343536383535366566616161643734626662333032 Jan 28 01:24:51.511000 audit: BPF prog-id=190 op=UNLOAD Jan 28 01:24:51.511000 audit[4698]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:51.511000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362366439333064343536383535366566616161643734626662333032 Jan 28 01:24:51.511000 audit: BPF prog-id=192 op=LOAD Jan 28 01:24:51.511000 audit[4698]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4548 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:51.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362366439333064343536383535366566616161643734626662333032 Jan 28 01:24:51.533372 containerd[2559]: time="2026-01-28T01:24:51.533343127Z" level=info msg="StartContainer for \"cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098\" returns successfully" Jan 28 01:24:51.540544 systemd[1]: cri-containerd-cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098.scope: Deactivated successfully. Jan 28 01:24:51.541000 audit: BPF prog-id=192 op=UNLOAD Jan 28 01:24:51.543321 containerd[2559]: time="2026-01-28T01:24:51.543248801Z" level=info msg="received container exit event container_id:\"cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098\" id:\"cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098\" pid:4711 exited_at:{seconds:1769563491 nanos:542480518}" Jan 28 01:24:51.557654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb6d930d4568556efaaad74bfb3027d97ca2c67bb048c5e98873cd03c25a9098-rootfs.mount: Deactivated successfully. 
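Note: the audit PROCTITLE records interleaved with each container start carry the runc command line, hex-encoded with NUL-separated arguments. A short Go sketch that decodes the beginning of one of the values above (the constant is a truncated prefix copied from the log):

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Prefix of a proctitle= value from one of the audit records above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The kernel joins argv with NUL bytes; split them back apart.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(args) // [runc --root /run/containerd/runc/k8s.io]
}
```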
Jan 28 01:24:51.657268 kubelet[4040]: E0128 01:24:51.657192 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:24:51.741731 kubelet[4040]: I0128 01:24:51.741713 4040 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 01:24:51.774420 kubelet[4040]: I0128 01:24:51.759002 4040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6475496ddc-xl7bj" podStartSLOduration=2.494633711 podStartE2EDuration="4.758985474s" podCreationTimestamp="2026-01-28 01:24:47 +0000 UTC" firstStartedPulling="2026-01-28 01:24:47.829996898 +0000 UTC m=+19.273088774" lastFinishedPulling="2026-01-28 01:24:50.094348662 +0000 UTC m=+21.537440537" observedRunningTime="2026-01-28 01:24:50.748050721 +0000 UTC m=+22.191142624" watchObservedRunningTime="2026-01-28 01:24:51.758985474 +0000 UTC m=+23.202077363" Jan 28 01:24:53.657225 kubelet[4040]: E0128 01:24:53.657180 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:24:53.748554 containerd[2559]: time="2026-01-28T01:24:53.747716827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:24:55.656678 kubelet[4040]: E0128 01:24:55.656633 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:24:56.187770 containerd[2559]: time="2026-01-28T01:24:56.187722673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:56.194314 containerd[2559]: time="2026-01-28T01:24:56.194236731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 28 01:24:56.194515 containerd[2559]: time="2026-01-28T01:24:56.194404114Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:56.197720 containerd[2559]: time="2026-01-28T01:24:56.197693401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:24:56.198215 containerd[2559]: time="2026-01-28T01:24:56.198086592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.450310327s" Jan 28 01:24:56.198215 containerd[2559]: time="2026-01-28T01:24:56.198113146Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 01:24:56.203532 containerd[2559]: time="2026-01-28T01:24:56.203505604Z" level=info msg="CreateContainer within sandbox \"fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:24:56.221011 containerd[2559]: time="2026-01-28T01:24:56.220986635Z" level=info msg="Container 98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:24:56.235199 containerd[2559]: time="2026-01-28T01:24:56.235174195Z" level=info msg="CreateContainer within sandbox \"fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff\"" Jan 28 01:24:56.236891 containerd[2559]: time="2026-01-28T01:24:56.235758975Z" level=info msg="StartContainer for \"98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff\"" Jan 28 01:24:56.237169 containerd[2559]: time="2026-01-28T01:24:56.237145074Z" level=info msg="connecting to shim 98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff" address="unix:///run/containerd/s/1083bb122c1d40d25c788e70eff679d8833144d730d866df2cf9e7924207b3f9" protocol=ttrpc version=3 Jan 28 01:24:56.262061 systemd[1]: Started cri-containerd-98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff.scope - libcontainer container 98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff. Jan 28 01:24:56.301000 audit: BPF prog-id=193 op=LOAD Jan 28 01:24:56.304425 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 28 01:24:56.304473 kernel: audit: type=1334 audit(1769563496.301:573): prog-id=193 op=LOAD Jan 28 01:24:56.301000 audit[4756]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4548 pid=4756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:56.311109 kernel: audit: type=1300 audit(1769563496.301:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4548 pid=4756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:56.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343332623332393863383438333666343037376235636637343836 Jan 28 01:24:56.316876 kernel: audit: type=1327 audit(1769563496.301:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343332623332393863383438333666343037376235636637343836 Jan 28 01:24:56.303000 audit: BPF prog-id=194 op=LOAD Jan 28 01:24:56.320178 kernel: audit: type=1334 audit(1769563496.303:574): prog-id=194 op=LOAD Jan 28 01:24:56.303000 audit[4756]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4548 pid=4756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:56.331100 kernel: audit: type=1300 audit(1769563496.303:574): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4548 pid=4756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:56.332933 kernel: audit: type=1327 audit(1769563496.303:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343332623332393863383438333666343037376235636637343836 Jan 28 01:24:56.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343332623332393863383438333666343037376235636637343836 Jan 28 01:24:56.303000 audit: BPF prog-id=194 op=UNLOAD Jan 28 01:24:56.340469 kernel: audit: type=1334 audit(1769563496.303:575): prog-id=194 op=UNLOAD Jan 28 01:24:56.303000 audit[4756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:56.345292 kernel: audit: type=1300 audit(1769563496.303:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:56.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343332623332393863383438333666343037376235636637343836 Jan 28 01:24:56.350545 kernel: audit: type=1327 audit(1769563496.303:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343332623332393863383438333666343037376235636637343836 Jan 28 01:24:56.353183 kernel: audit: type=1334 audit(1769563496.303:576): prog-id=193 op=UNLOAD Jan 28 01:24:56.303000 audit: BPF prog-id=193 op=UNLOAD Jan 28 01:24:56.303000 audit[4756]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:56.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343332623332393863383438333666343037376235636637343836 Jan 28 01:24:56.303000 audit: BPF prog-id=195 op=LOAD Jan 28 01:24:56.303000 audit[4756]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4548 
pid=4756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:56.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938343332623332393863383438333666343037376235636637343836 Jan 28 01:24:56.356485 containerd[2559]: time="2026-01-28T01:24:56.356464156Z" level=info msg="StartContainer for \"98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff\" returns successfully" Jan 28 01:24:57.421413 systemd[1]: cri-containerd-98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff.scope: Deactivated successfully. Jan 28 01:24:57.421695 systemd[1]: cri-containerd-98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff.scope: Consumed 378ms CPU time, 196M memory peak, 171.3M written to disk. Jan 28 01:24:57.423331 containerd[2559]: time="2026-01-28T01:24:57.423298420Z" level=info msg="received container exit event container_id:\"98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff\" id:\"98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff\" pid:4768 exited_at:{seconds:1769563497 nanos:423071814}" Jan 28 01:24:57.423000 audit: BPF prog-id=195 op=UNLOAD Jan 28 01:24:57.440717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98432b3298c84836f4077b5cf7486a09931eb9f0ea9d4b2ad055fd0604dc44ff-rootfs.mount: Deactivated successfully. Jan 28 01:24:57.455832 kubelet[4040]: I0128 01:24:57.455809 4040 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 28 01:24:57.627459 kubelet[4040]: I0128 01:24:57.627437 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24495090-d436-41d1-b444-e32503155f3d-config-volume\") pod \"coredns-66bc5c9577-7z2xz\" (UID: \"24495090-d436-41d1-b444-e32503155f3d\") " pod="kube-system/coredns-66bc5c9577-7z2xz" Jan 28 01:24:57.628217 kubelet[4040]: I0128 01:24:57.627570 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vcvk\" (UniqueName: \"kubernetes.io/projected/24495090-d436-41d1-b444-e32503155f3d-kube-api-access-9vcvk\") pod \"coredns-66bc5c9577-7z2xz\" (UID: \"24495090-d436-41d1-b444-e32503155f3d\") " pod="kube-system/coredns-66bc5c9577-7z2xz" Jan 28 01:24:57.627662 systemd[1]: Created slice kubepods-burstable-pod24495090_d436_41d1_b444_e32503155f3d.slice - libcontainer container kubepods-burstable-pod24495090_d436_41d1_b444_e32503155f3d.slice. Jan 28 01:24:57.790629 systemd[1]: Created slice kubepods-burstable-podabcd1edf_2334_4947_82f0_ae69c3925ca7.slice - libcontainer container kubepods-burstable-podabcd1edf_2334_4947_82f0_ae69c3925ca7.slice. 
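Note: the "Observed pod startup duration" record for calico-typha a few lines above can be reconstructed from its own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch of that arithmetic using the logged timestamps (it mirrors the reported numbers, not kubelet's internal code):

```go
package main

import (
	"fmt"
	"time"
)

func ts(s string) time.Time {
	// Layout matching the timestamps printed in the kubelet record.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2026-01-28 01:24:47 +0000 UTC")             // podCreationTimestamp
	firstPull := ts("2026-01-28 01:24:47.829996898 +0000 UTC") // firstStartedPulling
	lastPull := ts("2026-01-28 01:24:50.094348662 +0000 UTC")  // lastFinishedPulling
	observed := ts("2026-01-28 01:24:51.758985474 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // end-to-end minus the image-pull window
	// Prints 4.758985474s and ~2.49463371s, matching the record to within a nanosecond.
	fmt.Println(e2e, slo)
}
```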
Jan 28 01:24:57.919474 kubelet[4040]: I0128 01:24:57.828535 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abcd1edf-2334-4947-82f0-ae69c3925ca7-config-volume\") pod \"coredns-66bc5c9577-dj8xj\" (UID: \"abcd1edf-2334-4947-82f0-ae69c3925ca7\") " pod="kube-system/coredns-66bc5c9577-dj8xj" Jan 28 01:24:57.919474 kubelet[4040]: I0128 01:24:57.828662 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79m7q\" (UniqueName: \"kubernetes.io/projected/abcd1edf-2334-4947-82f0-ae69c3925ca7-kube-api-access-79m7q\") pod \"coredns-66bc5c9577-dj8xj\" (UID: \"abcd1edf-2334-4947-82f0-ae69c3925ca7\") " pod="kube-system/coredns-66bc5c9577-dj8xj" Jan 28 01:24:57.936212 systemd[1]: Created slice kubepods-besteffort-pod846cab91_e0a1_4344_ab2b_9358f550d758.slice - libcontainer container kubepods-besteffort-pod846cab91_e0a1_4344_ab2b_9358f550d758.slice. Jan 28 01:24:57.944075 systemd[1]: Created slice kubepods-besteffort-podd9af8dd1_e2bd_462a_8a21_d0c27cf0950b.slice - libcontainer container kubepods-besteffort-podd9af8dd1_e2bd_462a_8a21_d0c27cf0950b.slice. Jan 28 01:24:58.029627 kubelet[4040]: I0128 01:24:58.029600 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m697\" (UniqueName: \"kubernetes.io/projected/846cab91-e0a1-4344-ab2b-9358f550d758-kube-api-access-6m697\") pod \"calico-kube-controllers-d9c776fd4-lg5zf\" (UID: \"846cab91-e0a1-4344-ab2b-9358f550d758\") " pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" Jan 28 01:24:58.029627 kubelet[4040]: I0128 01:24:58.029636 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/846cab91-e0a1-4344-ab2b-9358f550d758-tigera-ca-bundle\") pod \"calico-kube-controllers-d9c776fd4-lg5zf\" (UID: \"846cab91-e0a1-4344-ab2b-9358f550d758\") " pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" Jan 28 01:24:58.320309 containerd[2559]: time="2026-01-28T01:24:58.320097590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7z2xz,Uid:24495090-d436-41d1-b444-e32503155f3d,Namespace:kube-system,Attempt:0,}" Jan 28 01:24:58.326421 systemd[1]: Created slice kubepods-besteffort-podf2bad97b_be33_4a5d_908e_d2048d5b9f4f.slice - libcontainer container kubepods-besteffort-podf2bad97b_be33_4a5d_908e_d2048d5b9f4f.slice. 
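Note: the kubepods-*-pod*.slice units being created above follow a visible naming pattern: the pod's QoS class plus its UID with dashes replaced by underscores. A small illustration of that mapping (not kubelet's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the unit names visible in the journal above: the pod
// UID has its dashes replaced with underscores and is wrapped in
// kubepods-<qos>-pod<uid>.slice. Illustrative only.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// coredns-66bc5c9577-7z2xz, a burstable pod from the log above.
	fmt.Println(sliceName("burstable", "24495090-d436-41d1-b444-e32503155f3d"))
	// kubepods-burstable-pod24495090_d436_41d1_b444_e32503155f3d.slice
}
```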
Jan 28 01:24:58.329035 containerd[2559]: time="2026-01-28T01:24:58.329010611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wlbng,Uid:d9af8dd1-e2bd-462a-8a21-d0c27cf0950b,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:58.330039 containerd[2559]: time="2026-01-28T01:24:58.330019155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dj8xj,Uid:abcd1edf-2334-4947-82f0-ae69c3925ca7,Namespace:kube-system,Attempt:0,}" Jan 28 01:24:58.331584 kubelet[4040]: I0128 01:24:58.331561 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4hsn\" (UniqueName: \"kubernetes.io/projected/d99b8c9d-ad76-485a-94c4-e2c93263797f-kube-api-access-r4hsn\") pod \"calico-apiserver-848997c984-zqhg5\" (UID: \"d99b8c9d-ad76-485a-94c4-e2c93263797f\") " pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" Jan 28 01:24:58.331921 kubelet[4040]: I0128 01:24:58.331898 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d99b8c9d-ad76-485a-94c4-e2c93263797f-calico-apiserver-certs\") pod \"calico-apiserver-848997c984-zqhg5\" (UID: \"d99b8c9d-ad76-485a-94c4-e2c93263797f\") " pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" Jan 28 01:24:58.334392 kubelet[4040]: I0128 01:24:58.331927 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv677\" (UniqueName: \"kubernetes.io/projected/f2bad97b-be33-4a5d-908e-d2048d5b9f4f-kube-api-access-tv677\") pod \"calico-apiserver-848997c984-fdfk6\" (UID: \"f2bad97b-be33-4a5d-908e-d2048d5b9f4f\") " pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" Jan 28 01:24:58.334392 kubelet[4040]: I0128 01:24:58.331949 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2bad97b-be33-4a5d-908e-d2048d5b9f4f-calico-apiserver-certs\") pod \"calico-apiserver-848997c984-fdfk6\" (UID: \"f2bad97b-be33-4a5d-908e-d2048d5b9f4f\") " pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" Jan 28 01:24:58.335076 systemd[1]: Created slice kubepods-besteffort-podd99b8c9d_ad76_485a_94c4_e2c93263797f.slice - libcontainer container kubepods-besteffort-podd99b8c9d_ad76_485a_94c4_e2c93263797f.slice. Jan 28 01:24:58.351558 systemd[1]: Created slice kubepods-besteffort-pod0c67e031_b534_4000_aced_a57efd446db7.slice - libcontainer container kubepods-besteffort-pod0c67e031_b534_4000_aced_a57efd446db7.slice. Jan 28 01:24:58.364117 systemd[1]: Created slice kubepods-besteffort-pod82792313_f307_46ab_a25e_04cde981d984.slice - libcontainer container kubepods-besteffort-pod82792313_f307_46ab_a25e_04cde981d984.slice. 
Jan 28 01:24:58.416286 containerd[2559]: time="2026-01-28T01:24:58.416261247Z" level=error msg="Failed to destroy network for sandbox \"6f1824f2612f9addbd43dccf13c17767e9c3c398f7bfdb9380bd9076b2f96994\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.421344 containerd[2559]: time="2026-01-28T01:24:58.421312031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7z2xz,Uid:24495090-d436-41d1-b444-e32503155f3d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f1824f2612f9addbd43dccf13c17767e9c3c398f7bfdb9380bd9076b2f96994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.421613 kubelet[4040]: E0128 01:24:58.421588 4040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f1824f2612f9addbd43dccf13c17767e9c3c398f7bfdb9380bd9076b2f96994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.421991 kubelet[4040]: E0128 01:24:58.421975 4040 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f1824f2612f9addbd43dccf13c17767e9c3c398f7bfdb9380bd9076b2f96994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7z2xz" Jan 28 01:24:58.422064 kubelet[4040]: E0128 01:24:58.422053 4040 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f1824f2612f9addbd43dccf13c17767e9c3c398f7bfdb9380bd9076b2f96994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7z2xz" Jan 28 01:24:58.422149 kubelet[4040]: E0128 01:24:58.422134 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7z2xz_kube-system(24495090-d436-41d1-b444-e32503155f3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7z2xz_kube-system(24495090-d436-41d1-b444-e32503155f3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f1824f2612f9addbd43dccf13c17767e9c3c398f7bfdb9380bd9076b2f96994\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7z2xz" podUID="24495090-d436-41d1-b444-e32503155f3d" Jan 28 01:24:58.436106 containerd[2559]: time="2026-01-28T01:24:58.435880769Z" level=error msg="Failed to destroy network for sandbox \"e5ff21381beb4d206c38203b6d810420989eed2a998d5409ef5bce1e21740280\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 28 01:24:58.442395 containerd[2559]: time="2026-01-28T01:24:58.442365074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dj8xj,Uid:abcd1edf-2334-4947-82f0-ae69c3925ca7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5ff21381beb4d206c38203b6d810420989eed2a998d5409ef5bce1e21740280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.442662 systemd[1]: run-netns-cni\x2d284524da\x2db1fd\x2dc397\x2da8cf\x2d45714670fbae.mount: Deactivated successfully. Jan 28 01:24:58.445145 containerd[2559]: time="2026-01-28T01:24:58.445071636Z" level=error msg="Failed to destroy network for sandbox \"bf63f16e09bcad2eea00cd0b7a41d951056a5b4533f54cf02e3f302776ac3641\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.445591 kubelet[4040]: I0128 01:24:58.445551 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82792313-f307-46ab-a25e-04cde981d984-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-zl6z4\" (UID: \"82792313-f307-46ab-a25e-04cde981d984\") " pod="calico-system/goldmane-7c778bb748-zl6z4" Jan 28 01:24:58.446264 kubelet[4040]: I0128 01:24:58.445833 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tk2\" (UniqueName: \"kubernetes.io/projected/0c67e031-b534-4000-aced-a57efd446db7-kube-api-access-f4tk2\") pod \"whisker-58dffdf864-lbnvw\" (UID: \"0c67e031-b534-4000-aced-a57efd446db7\") " pod="calico-system/whisker-58dffdf864-lbnvw" Jan 28 01:24:58.446264 kubelet[4040]: I0128 01:24:58.445897 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/82792313-f307-46ab-a25e-04cde981d984-goldmane-key-pair\") pod \"goldmane-7c778bb748-zl6z4\" (UID: \"82792313-f307-46ab-a25e-04cde981d984\") " pod="calico-system/goldmane-7c778bb748-zl6z4" Jan 28 01:24:58.446264 kubelet[4040]: I0128 01:24:58.445914 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2tp\" (UniqueName: \"kubernetes.io/projected/82792313-f307-46ab-a25e-04cde981d984-kube-api-access-9p2tp\") pod \"goldmane-7c778bb748-zl6z4\" (UID: \"82792313-f307-46ab-a25e-04cde981d984\") " pod="calico-system/goldmane-7c778bb748-zl6z4" Jan 28 01:24:58.446264 kubelet[4040]: I0128 01:24:58.445955 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c67e031-b534-4000-aced-a57efd446db7-whisker-ca-bundle\") pod \"whisker-58dffdf864-lbnvw\" (UID: \"0c67e031-b534-4000-aced-a57efd446db7\") " pod="calico-system/whisker-58dffdf864-lbnvw" Jan 28 01:24:58.446264 kubelet[4040]: I0128 01:24:58.445985 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82792313-f307-46ab-a25e-04cde981d984-config\") pod \"goldmane-7c778bb748-zl6z4\" (UID: \"82792313-f307-46ab-a25e-04cde981d984\") " 
pod="calico-system/goldmane-7c778bb748-zl6z4" Jan 28 01:24:58.446408 kubelet[4040]: I0128 01:24:58.446008 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c67e031-b534-4000-aced-a57efd446db7-whisker-backend-key-pair\") pod \"whisker-58dffdf864-lbnvw\" (UID: \"0c67e031-b534-4000-aced-a57efd446db7\") " pod="calico-system/whisker-58dffdf864-lbnvw" Jan 28 01:24:58.447224 kubelet[4040]: E0128 01:24:58.445732 4040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5ff21381beb4d206c38203b6d810420989eed2a998d5409ef5bce1e21740280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.447315 kubelet[4040]: E0128 01:24:58.447301 4040 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5ff21381beb4d206c38203b6d810420989eed2a998d5409ef5bce1e21740280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dj8xj" Jan 28 01:24:58.447368 kubelet[4040]: E0128 01:24:58.447357 4040 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5ff21381beb4d206c38203b6d810420989eed2a998d5409ef5bce1e21740280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dj8xj" Jan 28 01:24:58.447444 kubelet[4040]: E0128 01:24:58.447428 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dj8xj_kube-system(abcd1edf-2334-4947-82f0-ae69c3925ca7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dj8xj_kube-system(abcd1edf-2334-4947-82f0-ae69c3925ca7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5ff21381beb4d206c38203b6d810420989eed2a998d5409ef5bce1e21740280\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dj8xj" podUID="abcd1edf-2334-4947-82f0-ae69c3925ca7" Jan 28 01:24:58.449574 systemd[1]: run-netns-cni\x2d1def4dd1\x2d6fe0\x2d10e0\x2d3c57\x2d321da568729b.mount: Deactivated successfully. 
Jan 28 01:24:58.452697 containerd[2559]: time="2026-01-28T01:24:58.452616014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wlbng,Uid:d9af8dd1-e2bd-462a-8a21-d0c27cf0950b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf63f16e09bcad2eea00cd0b7a41d951056a5b4533f54cf02e3f302776ac3641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.454877 kubelet[4040]: E0128 01:24:58.453421 4040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf63f16e09bcad2eea00cd0b7a41d951056a5b4533f54cf02e3f302776ac3641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.455206 kubelet[4040]: E0128 01:24:58.455118 4040 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf63f16e09bcad2eea00cd0b7a41d951056a5b4533f54cf02e3f302776ac3641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wlbng" Jan 28 01:24:58.455206 kubelet[4040]: E0128 01:24:58.455139 4040 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf63f16e09bcad2eea00cd0b7a41d951056a5b4533f54cf02e3f302776ac3641\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wlbng" Jan 28 01:24:58.455206 kubelet[4040]: E0128 01:24:58.455184 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf63f16e09bcad2eea00cd0b7a41d951056a5b4533f54cf02e3f302776ac3641\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:24:58.509884 kubelet[4040]: I0128 01:24:58.509793 4040 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 01:24:58.533000 audit[4888]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:58.533000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee65a7ad0 a2=0 a3=7ffee65a7abc items=0 ppid=4194 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:58.533000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:58.537000 audit[4888]: NETFILTER_CFG table=nat:119 family=2 entries=19 op=nft_register_chain pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:24:58.537000 audit[4888]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffee65a7ad0 a2=0 a3=7ffee65a7abc items=0 ppid=4194 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:24:58.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:24:58.548873 containerd[2559]: time="2026-01-28T01:24:58.548627326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9c776fd4-lg5zf,Uid:846cab91-e0a1-4344-ab2b-9358f550d758,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:58.589592 containerd[2559]: time="2026-01-28T01:24:58.589170212Z" level=error msg="Failed to destroy network for sandbox \"c5908f6ece85d91d0a9e3bc91872bd35195bf4b19546936664f75e559944fa4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.593486 containerd[2559]: time="2026-01-28T01:24:58.593448195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9c776fd4-lg5zf,Uid:846cab91-e0a1-4344-ab2b-9358f550d758,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5908f6ece85d91d0a9e3bc91872bd35195bf4b19546936664f75e559944fa4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.593596 kubelet[4040]: E0128 01:24:58.593583 4040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5908f6ece85d91d0a9e3bc91872bd35195bf4b19546936664f75e559944fa4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.593630 kubelet[4040]: E0128 01:24:58.593612 4040 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5908f6ece85d91d0a9e3bc91872bd35195bf4b19546936664f75e559944fa4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" Jan 28 01:24:58.593665 kubelet[4040]: E0128 01:24:58.593628 4040 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5908f6ece85d91d0a9e3bc91872bd35195bf4b19546936664f75e559944fa4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" Jan 28 01:24:58.593693 kubelet[4040]: E0128 01:24:58.593665 4040 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d9c776fd4-lg5zf_calico-system(846cab91-e0a1-4344-ab2b-9358f550d758)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d9c776fd4-lg5zf_calico-system(846cab91-e0a1-4344-ab2b-9358f550d758)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5908f6ece85d91d0a9e3bc91872bd35195bf4b19546936664f75e559944fa4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:24:58.640494 containerd[2559]: time="2026-01-28T01:24:58.640466380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848997c984-fdfk6,Uid:f2bad97b-be33-4a5d-908e-d2048d5b9f4f,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:24:58.655121 containerd[2559]: time="2026-01-28T01:24:58.655101140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848997c984-zqhg5,Uid:d99b8c9d-ad76-485a-94c4-e2c93263797f,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:24:58.659833 containerd[2559]: time="2026-01-28T01:24:58.659810146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58dffdf864-lbnvw,Uid:0c67e031-b534-4000-aced-a57efd446db7,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:58.674770 containerd[2559]: time="2026-01-28T01:24:58.674737539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zl6z4,Uid:82792313-f307-46ab-a25e-04cde981d984,Namespace:calico-system,Attempt:0,}" Jan 28 01:24:58.688337 containerd[2559]: time="2026-01-28T01:24:58.688301610Z" level=error msg="Failed to destroy network for sandbox \"7e9a0a81e70bd6f1661f51b7fb3ace56a379f95b9ce82771a2096a1f1eb7ed9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.695988 containerd[2559]: time="2026-01-28T01:24:58.695916237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848997c984-fdfk6,Uid:f2bad97b-be33-4a5d-908e-d2048d5b9f4f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e9a0a81e70bd6f1661f51b7fb3ace56a379f95b9ce82771a2096a1f1eb7ed9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.696246 kubelet[4040]: E0128 01:24:58.696219 4040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e9a0a81e70bd6f1661f51b7fb3ace56a379f95b9ce82771a2096a1f1eb7ed9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.696295 kubelet[4040]: E0128 01:24:58.696257 4040 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e9a0a81e70bd6f1661f51b7fb3ace56a379f95b9ce82771a2096a1f1eb7ed9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" Jan 28 01:24:58.696295 kubelet[4040]: E0128 01:24:58.696273 4040 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e9a0a81e70bd6f1661f51b7fb3ace56a379f95b9ce82771a2096a1f1eb7ed9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" Jan 28 01:24:58.696346 kubelet[4040]: E0128 01:24:58.696317 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-848997c984-fdfk6_calico-apiserver(f2bad97b-be33-4a5d-908e-d2048d5b9f4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-848997c984-fdfk6_calico-apiserver(f2bad97b-be33-4a5d-908e-d2048d5b9f4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e9a0a81e70bd6f1661f51b7fb3ace56a379f95b9ce82771a2096a1f1eb7ed9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:24:58.749847 containerd[2559]: time="2026-01-28T01:24:58.749809822Z" level=error msg="Failed to destroy network for sandbox \"4a2ed33d4f161c0f33858f7c2822a0bcee7524b7d966b958c41ada56add1da0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.750216 containerd[2559]: time="2026-01-28T01:24:58.750193465Z" level=error msg="Failed to destroy network for sandbox \"7cf9787a4af5e382e91e4a0403f38a49963c963ce0168b6e1b6433c0af9b1224\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.751963 containerd[2559]: time="2026-01-28T01:24:58.751939683Z" level=error msg="Failed to destroy network for sandbox \"6ce6f0a5487450decdc1a32cf6b28ffeae1413adf37c9f422f8172d226a1f2f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.755643 containerd[2559]: time="2026-01-28T01:24:58.755607301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848997c984-zqhg5,Uid:d99b8c9d-ad76-485a-94c4-e2c93263797f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2ed33d4f161c0f33858f7c2822a0bcee7524b7d966b958c41ada56add1da0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.755933 kubelet[4040]: E0128 01:24:58.755840 4040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2ed33d4f161c0f33858f7c2822a0bcee7524b7d966b958c41ada56add1da0e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.756313 kubelet[4040]: E0128 01:24:58.756235 4040 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2ed33d4f161c0f33858f7c2822a0bcee7524b7d966b958c41ada56add1da0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" Jan 28 01:24:58.756313 kubelet[4040]: E0128 01:24:58.756253 4040 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a2ed33d4f161c0f33858f7c2822a0bcee7524b7d966b958c41ada56add1da0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" Jan 28 01:24:58.756400 kubelet[4040]: E0128 01:24:58.756382 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-848997c984-zqhg5_calico-apiserver(d99b8c9d-ad76-485a-94c4-e2c93263797f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-848997c984-zqhg5_calico-apiserver(d99b8c9d-ad76-485a-94c4-e2c93263797f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a2ed33d4f161c0f33858f7c2822a0bcee7524b7d966b958c41ada56add1da0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:24:58.762703 containerd[2559]: time="2026-01-28T01:24:58.762662860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zl6z4,Uid:82792313-f307-46ab-a25e-04cde981d984,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce6f0a5487450decdc1a32cf6b28ffeae1413adf37c9f422f8172d226a1f2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.763295 kubelet[4040]: E0128 01:24:58.763272 4040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce6f0a5487450decdc1a32cf6b28ffeae1413adf37c9f422f8172d226a1f2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.763527 kubelet[4040]: E0128 01:24:58.763391 4040 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce6f0a5487450decdc1a32cf6b28ffeae1413adf37c9f422f8172d226a1f2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zl6z4" Jan 28 01:24:58.763527 kubelet[4040]: E0128 
01:24:58.763409 4040 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ce6f0a5487450decdc1a32cf6b28ffeae1413adf37c9f422f8172d226a1f2f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zl6z4" Jan 28 01:24:58.763527 kubelet[4040]: E0128 01:24:58.763473 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-zl6z4_calico-system(82792313-f307-46ab-a25e-04cde981d984)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-zl6z4_calico-system(82792313-f307-46ab-a25e-04cde981d984)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ce6f0a5487450decdc1a32cf6b28ffeae1413adf37c9f422f8172d226a1f2f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:24:58.764230 containerd[2559]: time="2026-01-28T01:24:58.764164132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:24:58.764971 containerd[2559]: time="2026-01-28T01:24:58.764608740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58dffdf864-lbnvw,Uid:0c67e031-b534-4000-aced-a57efd446db7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf9787a4af5e382e91e4a0403f38a49963c963ce0168b6e1b6433c0af9b1224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.765088 kubelet[4040]: E0128 01:24:58.765035 4040 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf9787a4af5e382e91e4a0403f38a49963c963ce0168b6e1b6433c0af9b1224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:24:58.765088 kubelet[4040]: E0128 01:24:58.765065 4040 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf9787a4af5e382e91e4a0403f38a49963c963ce0168b6e1b6433c0af9b1224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58dffdf864-lbnvw" Jan 28 01:24:58.765088 kubelet[4040]: E0128 01:24:58.765083 4040 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cf9787a4af5e382e91e4a0403f38a49963c963ce0168b6e1b6433c0af9b1224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58dffdf864-lbnvw" Jan 28 01:24:58.765175 kubelet[4040]: E0128 01:24:58.765131 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-58dffdf864-lbnvw_calico-system(0c67e031-b534-4000-aced-a57efd446db7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58dffdf864-lbnvw_calico-system(0c67e031-b534-4000-aced-a57efd446db7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cf9787a4af5e382e91e4a0403f38a49963c963ce0168b6e1b6433c0af9b1224\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58dffdf864-lbnvw" podUID="0c67e031-b534-4000-aced-a57efd446db7" Jan 28 01:25:02.992641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3529374973.mount: Deactivated successfully. Jan 28 01:25:03.016005 containerd[2559]: time="2026-01-28T01:25:03.015966179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:03.018341 containerd[2559]: time="2026-01-28T01:25:03.018273777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 28 01:25:03.020520 containerd[2559]: time="2026-01-28T01:25:03.020497911Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:03.023723 containerd[2559]: time="2026-01-28T01:25:03.023376307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:25:03.023723 containerd[2559]: time="2026-01-28T01:25:03.023633618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.259217838s" Jan 28 01:25:03.023723 containerd[2559]: time="2026-01-28T01:25:03.023655233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 01:25:03.040050 containerd[2559]: time="2026-01-28T01:25:03.040020665Z" level=info msg="CreateContainer within sandbox \"fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:25:03.055901 containerd[2559]: time="2026-01-28T01:25:03.053743321Z" level=info msg="Container fb509ddb9b18b3b8fabf193e3584b70ccf93fafa5f61517c4b7c8698fdf65b67: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:25:03.076059 containerd[2559]: time="2026-01-28T01:25:03.076033679Z" level=info msg="CreateContainer within sandbox \"fc868faf9f71c072f7154a63e79ea0971b1a509dc8968898e546d1312514f115\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fb509ddb9b18b3b8fabf193e3584b70ccf93fafa5f61517c4b7c8698fdf65b67\"" Jan 28 01:25:03.076469 containerd[2559]: time="2026-01-28T01:25:03.076452157Z" level=info msg="StartContainer for \"fb509ddb9b18b3b8fabf193e3584b70ccf93fafa5f61517c4b7c8698fdf65b67\"" Jan 28 01:25:03.077761 containerd[2559]: time="2026-01-28T01:25:03.077734706Z" level=info msg="connecting to shim 
fb509ddb9b18b3b8fabf193e3584b70ccf93fafa5f61517c4b7c8698fdf65b67" address="unix:///run/containerd/s/1083bb122c1d40d25c788e70eff679d8833144d730d866df2cf9e7924207b3f9" protocol=ttrpc version=3 Jan 28 01:25:03.093029 systemd[1]: Started cri-containerd-fb509ddb9b18b3b8fabf193e3584b70ccf93fafa5f61517c4b7c8698fdf65b67.scope - libcontainer container fb509ddb9b18b3b8fabf193e3584b70ccf93fafa5f61517c4b7c8698fdf65b67. Jan 28 01:25:03.138000 audit: BPF prog-id=196 op=LOAD Jan 28 01:25:03.140249 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 01:25:03.140320 kernel: audit: type=1334 audit(1769563503.138:581): prog-id=196 op=LOAD Jan 28 01:25:03.138000 audit[5035]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4548 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:03.144753 kernel: audit: type=1300 audit(1769563503.138:581): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4548 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:03.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353039646462396231386233623866616266313933653335383462 Jan 28 01:25:03.150486 kernel: audit: type=1327 audit(1769563503.138:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353039646462396231386233623866616266313933653335383462 Jan 28 01:25:03.151813 kernel: audit: type=1334 audit(1769563503.138:582): prog-id=197 op=LOAD Jan 28 01:25:03.138000 audit: BPF prog-id=197 op=LOAD Jan 28 01:25:03.138000 audit[5035]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4548 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:03.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353039646462396231386233623866616266313933653335383462 Jan 28 01:25:03.160012 kernel: audit: type=1300 audit(1769563503.138:582): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4548 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:03.160076 kernel: audit: type=1327 audit(1769563503.138:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353039646462396231386233623866616266313933653335383462 Jan 28 01:25:03.161643 kernel: audit: type=1334 audit(1769563503.138:583): prog-id=197 op=UNLOAD Jan 28 
01:25:03.138000 audit: BPF prog-id=197 op=UNLOAD Jan 28 01:25:03.138000 audit[5035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:03.167872 kernel: audit: type=1300 audit(1769563503.138:583): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:03.179687 kernel: audit: type=1327 audit(1769563503.138:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353039646462396231386233623866616266313933653335383462 Jan 28 01:25:03.180996 kernel: audit: type=1334 audit(1769563503.138:584): prog-id=196 op=UNLOAD Jan 28 01:25:03.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353039646462396231386233623866616266313933653335383462 Jan 28 01:25:03.138000 audit: BPF prog-id=196 op=UNLOAD Jan 28 01:25:03.138000 audit[5035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:03.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353039646462396231386233623866616266313933653335383462 Jan 28 01:25:03.138000 audit: BPF prog-id=198 op=LOAD Jan 28 01:25:03.138000 audit[5035]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4548 pid=5035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:03.138000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662353039646462396231386233623866616266313933653335383462 Jan 28 01:25:03.186983 containerd[2559]: time="2026-01-28T01:25:03.186834801Z" level=info msg="StartContainer for \"fb509ddb9b18b3b8fabf193e3584b70ccf93fafa5f61517c4b7c8698fdf65b67\" returns successfully" Jan 28 01:25:03.440281 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:25:03.440357 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 01:25:03.580438 kubelet[4040]: I0128 01:25:03.580411 4040 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4tk2\" (UniqueName: \"kubernetes.io/projected/0c67e031-b534-4000-aced-a57efd446db7-kube-api-access-f4tk2\") pod \"0c67e031-b534-4000-aced-a57efd446db7\" (UID: \"0c67e031-b534-4000-aced-a57efd446db7\") " Jan 28 01:25:03.581048 kubelet[4040]: I0128 01:25:03.581034 4040 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c67e031-b534-4000-aced-a57efd446db7-whisker-backend-key-pair\") pod \"0c67e031-b534-4000-aced-a57efd446db7\" (UID: \"0c67e031-b534-4000-aced-a57efd446db7\") " Jan 28 01:25:03.581304 kubelet[4040]: I0128 01:25:03.581129 4040 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c67e031-b534-4000-aced-a57efd446db7-whisker-ca-bundle\") pod \"0c67e031-b534-4000-aced-a57efd446db7\" (UID: \"0c67e031-b534-4000-aced-a57efd446db7\") " Jan 28 01:25:03.586374 kubelet[4040]: I0128 01:25:03.585319 4040 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c67e031-b534-4000-aced-a57efd446db7-kube-api-access-f4tk2" (OuterVolumeSpecName: "kube-api-access-f4tk2") pod "0c67e031-b534-4000-aced-a57efd446db7" (UID: "0c67e031-b534-4000-aced-a57efd446db7"). InnerVolumeSpecName "kube-api-access-f4tk2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:25:03.586374 kubelet[4040]: I0128 01:25:03.585660 4040 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c67e031-b534-4000-aced-a57efd446db7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0c67e031-b534-4000-aced-a57efd446db7" (UID: "0c67e031-b534-4000-aced-a57efd446db7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:25:03.586843 kubelet[4040]: I0128 01:25:03.586821 4040 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c67e031-b534-4000-aced-a57efd446db7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0c67e031-b534-4000-aced-a57efd446db7" (UID: "0c67e031-b534-4000-aced-a57efd446db7"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:25:03.683811 kubelet[4040]: I0128 01:25:03.683789 4040 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f4tk2\" (UniqueName: \"kubernetes.io/projected/0c67e031-b534-4000-aced-a57efd446db7-kube-api-access-f4tk2\") on node \"ci-4593.0.0-n-2270f1152e\" DevicePath \"\"" Jan 28 01:25:03.683811 kubelet[4040]: I0128 01:25:03.683812 4040 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0c67e031-b534-4000-aced-a57efd446db7-whisker-backend-key-pair\") on node \"ci-4593.0.0-n-2270f1152e\" DevicePath \"\"" Jan 28 01:25:03.683916 kubelet[4040]: I0128 01:25:03.683821 4040 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c67e031-b534-4000-aced-a57efd446db7-whisker-ca-bundle\") on node \"ci-4593.0.0-n-2270f1152e\" DevicePath \"\"" Jan 28 01:25:03.780142 systemd[1]: Removed slice kubepods-besteffort-pod0c67e031_b534_4000_aced_a57efd446db7.slice - libcontainer container kubepods-besteffort-pod0c67e031_b534_4000_aced_a57efd446db7.slice. Jan 28 01:25:03.797299 kubelet[4040]: I0128 01:25:03.796798 4040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z5vk2" podStartSLOduration=1.762326634 podStartE2EDuration="16.796784938s" podCreationTimestamp="2026-01-28 01:24:47 +0000 UTC" firstStartedPulling="2026-01-28 01:24:47.989834074 +0000 UTC m=+19.432925962" lastFinishedPulling="2026-01-28 01:25:03.024292392 +0000 UTC m=+34.467384266" observedRunningTime="2026-01-28 01:25:03.796173885 +0000 UTC m=+35.239265768" watchObservedRunningTime="2026-01-28 01:25:03.796784938 +0000 UTC m=+35.239876823" Jan 28 01:25:03.861129 systemd[1]: Created slice kubepods-besteffort-poda2440718_b2ad_4d13_a123_aa3f90357d80.slice - libcontainer container kubepods-besteffort-poda2440718_b2ad_4d13_a123_aa3f90357d80.slice. Jan 28 01:25:03.884698 kubelet[4040]: I0128 01:25:03.884678 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2440718-b2ad-4d13-a123-aa3f90357d80-whisker-ca-bundle\") pod \"whisker-84b945c4fd-89dj9\" (UID: \"a2440718-b2ad-4d13-a123-aa3f90357d80\") " pod="calico-system/whisker-84b945c4fd-89dj9" Jan 28 01:25:03.884825 kubelet[4040]: I0128 01:25:03.884813 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzcgw\" (UniqueName: \"kubernetes.io/projected/a2440718-b2ad-4d13-a123-aa3f90357d80-kube-api-access-jzcgw\") pod \"whisker-84b945c4fd-89dj9\" (UID: \"a2440718-b2ad-4d13-a123-aa3f90357d80\") " pod="calico-system/whisker-84b945c4fd-89dj9" Jan 28 01:25:03.884991 kubelet[4040]: I0128 01:25:03.884979 4040 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2440718-b2ad-4d13-a123-aa3f90357d80-whisker-backend-key-pair\") pod \"whisker-84b945c4fd-89dj9\" (UID: \"a2440718-b2ad-4d13-a123-aa3f90357d80\") " pod="calico-system/whisker-84b945c4fd-89dj9" Jan 28 01:25:03.995028 systemd[1]: var-lib-kubelet-pods-0c67e031\x2db534\x2d4000\x2daced\x2da57efd446db7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df4tk2.mount: Deactivated successfully. 
Jan 28 01:25:03.995284 systemd[1]: var-lib-kubelet-pods-0c67e031\x2db534\x2d4000\x2daced\x2da57efd446db7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 01:25:04.170136 containerd[2559]: time="2026-01-28T01:25:04.170055436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84b945c4fd-89dj9,Uid:a2440718-b2ad-4d13-a123-aa3f90357d80,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:04.303698 systemd-networkd[2200]: calibc57d233c4d: Link UP Jan 28 01:25:04.304348 systemd-networkd[2200]: calibc57d233c4d: Gained carrier Jan 28 01:25:04.316074 containerd[2559]: 2026-01-28 01:25:04.204 [INFO][5125] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:25:04.316074 containerd[2559]: 2026-01-28 01:25:04.213 [INFO][5125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0 whisker-84b945c4fd- calico-system a2440718-b2ad-4d13-a123-aa3f90357d80 873 0 2026-01-28 01:25:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84b945c4fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4593.0.0-n-2270f1152e whisker-84b945c4fd-89dj9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibc57d233c4d [] [] }} ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Namespace="calico-system" Pod="whisker-84b945c4fd-89dj9" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-" Jan 28 01:25:04.316074 containerd[2559]: 2026-01-28 01:25:04.213 [INFO][5125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Namespace="calico-system" Pod="whisker-84b945c4fd-89dj9" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" Jan 28 01:25:04.316074 containerd[2559]: 2026-01-28 01:25:04.232 [INFO][5137] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" HandleID="k8s-pod-network.aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Workload="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" Jan 28 01:25:04.316271 containerd[2559]: 2026-01-28 01:25:04.232 [INFO][5137] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" HandleID="k8s-pod-network.aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Workload="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593.0.0-n-2270f1152e", "pod":"whisker-84b945c4fd-89dj9", "timestamp":"2026-01-28 01:25:04.232800072 +0000 UTC"}, Hostname:"ci-4593.0.0-n-2270f1152e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:04.316271 containerd[2559]: 2026-01-28 01:25:04.232 [INFO][5137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:04.316271 containerd[2559]: 2026-01-28 01:25:04.233 [INFO][5137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:25:04.316271 containerd[2559]: 2026-01-28 01:25:04.233 [INFO][5137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593.0.0-n-2270f1152e' Jan 28 01:25:04.316271 containerd[2559]: 2026-01-28 01:25:04.237 [INFO][5137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:04.316271 containerd[2559]: 2026-01-28 01:25:04.239 [INFO][5137] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:04.316271 containerd[2559]: 2026-01-28 01:25:04.241 [INFO][5137] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:04.316271 containerd[2559]: 2026-01-28 01:25:04.242 [INFO][5137] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:04.316271 containerd[2559]: 2026-01-28 01:25:04.244 [INFO][5137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:04.316522 containerd[2559]: 2026-01-28 01:25:04.244 [INFO][5137] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:04.316522 containerd[2559]: 2026-01-28 01:25:04.245 [INFO][5137] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62 Jan 28 01:25:04.316522 containerd[2559]: 2026-01-28 01:25:04.249 [INFO][5137] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:04.316522 containerd[2559]: 2026-01-28 01:25:04.256 [INFO][5137] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.129/26] block=192.168.125.128/26 handle="k8s-pod-network.aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:04.316522 containerd[2559]: 2026-01-28 01:25:04.257 [INFO][5137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.129/26] handle="k8s-pod-network.aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:04.316522 containerd[2559]: 2026-01-28 01:25:04.257 [INFO][5137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:25:04.316522 containerd[2559]: 2026-01-28 01:25:04.257 [INFO][5137] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.129/26] IPv6=[] ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" HandleID="k8s-pod-network.aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Workload="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" Jan 28 01:25:04.316688 containerd[2559]: 2026-01-28 01:25:04.259 [INFO][5125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Namespace="calico-system" Pod="whisker-84b945c4fd-89dj9" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0", GenerateName:"whisker-84b945c4fd-", Namespace:"calico-system", SelfLink:"", UID:"a2440718-b2ad-4d13-a123-aa3f90357d80", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84b945c4fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"", Pod:"whisker-84b945c4fd-89dj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc57d233c4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:04.316688 containerd[2559]: 2026-01-28 01:25:04.259 [INFO][5125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.129/32] ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Namespace="calico-system" Pod="whisker-84b945c4fd-89dj9" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" Jan 28 01:25:04.316811 containerd[2559]: 2026-01-28 01:25:04.259 [INFO][5125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc57d233c4d ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Namespace="calico-system" Pod="whisker-84b945c4fd-89dj9" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" Jan 28 01:25:04.316811 containerd[2559]: 2026-01-28 01:25:04.304 [INFO][5125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Namespace="calico-system" Pod="whisker-84b945c4fd-89dj9" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" Jan 28 01:25:04.316889 containerd[2559]: 2026-01-28 01:25:04.304 [INFO][5125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Namespace="calico-system" 
Pod="whisker-84b945c4fd-89dj9" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0", GenerateName:"whisker-84b945c4fd-", Namespace:"calico-system", SelfLink:"", UID:"a2440718-b2ad-4d13-a123-aa3f90357d80", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 25, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84b945c4fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62", Pod:"whisker-84b945c4fd-89dj9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc57d233c4d", MAC:"e6:0e:55:73:da:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:04.316961 containerd[2559]: 2026-01-28 01:25:04.313 [INFO][5125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" Namespace="calico-system" Pod="whisker-84b945c4fd-89dj9" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-whisker--84b945c4fd--89dj9-eth0" Jan 28 01:25:04.351328 containerd[2559]: time="2026-01-28T01:25:04.351225349Z" level=info msg="connecting to shim aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62" address="unix:///run/containerd/s/5b208ab992ccfbb37c3028c780f9100f4b75eadef3843ce42f4e05a261b9dfd8" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:04.378026 systemd[1]: Started cri-containerd-aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62.scope - libcontainer container aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62. 
Jan 28 01:25:04.384000 audit: BPF prog-id=199 op=LOAD Jan 28 01:25:04.385000 audit: BPF prog-id=200 op=LOAD Jan 28 01:25:04.385000 audit[5170]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5159 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:04.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161656230313963356464376430346132643730643037666231306630 Jan 28 01:25:04.385000 audit: BPF prog-id=200 op=UNLOAD Jan 28 01:25:04.385000 audit[5170]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:04.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161656230313963356464376430346132643730643037666231306630 Jan 28 01:25:04.385000 audit: BPF prog-id=201 op=LOAD Jan 28 01:25:04.385000 audit[5170]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5159 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:04.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161656230313963356464376430346132643730643037666231306630 Jan 28 01:25:04.385000 audit: BPF prog-id=202 op=LOAD Jan 28 01:25:04.385000 audit[5170]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5159 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:04.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161656230313963356464376430346132643730643037666231306630 Jan 28 01:25:04.385000 audit: BPF prog-id=202 op=UNLOAD Jan 28 01:25:04.385000 audit[5170]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:04.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161656230313963356464376430346132643730643037666231306630 Jan 28 01:25:04.385000 audit: BPF prog-id=201 op=UNLOAD Jan 28 01:25:04.385000 audit[5170]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5159 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:04.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161656230313963356464376430346132643730643037666231306630 Jan 28 01:25:04.385000 audit: BPF prog-id=203 op=LOAD Jan 28 01:25:04.385000 audit[5170]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5159 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:04.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161656230313963356464376430346132643730643037666231306630 Jan 28 01:25:04.414729 containerd[2559]: time="2026-01-28T01:25:04.414683694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84b945c4fd-89dj9,Uid:a2440718-b2ad-4d13-a123-aa3f90357d80,Namespace:calico-system,Attempt:0,} returns sandbox id \"aaeb019c5dd7d04a2d70d07fb10f0940bd3842e68eaa2a83f05c3e7c1368ca62\"" Jan 28 01:25:04.416075 containerd[2559]: time="2026-01-28T01:25:04.416058717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:25:04.659279 kubelet[4040]: I0128 01:25:04.659256 4040 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c67e031-b534-4000-aced-a57efd446db7" path="/var/lib/kubelet/pods/0c67e031-b534-4000-aced-a57efd446db7/volumes" Jan 28 01:25:04.669484 containerd[2559]: time="2026-01-28T01:25:04.669451482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:04.672247 containerd[2559]: time="2026-01-28T01:25:04.672187877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:25:04.672247 containerd[2559]: time="2026-01-28T01:25:04.672232186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:04.672359 kubelet[4040]: E0128 01:25:04.672329 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:25:04.672412 kubelet[4040]: E0128 01:25:04.672368 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:25:04.672476 kubelet[4040]: E0128 01:25:04.672438 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-84b945c4fd-89dj9_calico-system(a2440718-b2ad-4d13-a123-aa3f90357d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:04.673235 containerd[2559]: time="2026-01-28T01:25:04.673182493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:25:04.919341 containerd[2559]: time="2026-01-28T01:25:04.918993091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:04.921286 containerd[2559]: time="2026-01-28T01:25:04.921245367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:25:04.921369 containerd[2559]: time="2026-01-28T01:25:04.921334662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:04.921579 kubelet[4040]: E0128 01:25:04.921549 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:25:04.921625 kubelet[4040]: E0128 01:25:04.921588 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:25:04.921692 kubelet[4040]: E0128 01:25:04.921677 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84b945c4fd-89dj9_calico-system(a2440718-b2ad-4d13-a123-aa3f90357d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:04.921968 kubelet[4040]: E0128 01:25:04.921936 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:25:05.062000 audit: BPF prog-id=204 op=LOAD Jan 28 01:25:05.062000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbf77e220 a2=98 a3=1fffffffffffffff items=0 ppid=5220 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 01:25:05.062000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:25:05.062000 audit: BPF prog-id=204 op=UNLOAD Jan 28 01:25:05.062000 audit[5345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdbf77e1f0 a3=0 items=0 ppid=5220 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.062000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:25:05.062000 audit: BPF prog-id=205 op=LOAD Jan 28 01:25:05.062000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbf77e100 a2=94 a3=3 items=0 ppid=5220 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.062000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:25:05.062000 audit: BPF prog-id=205 op=UNLOAD Jan 28 01:25:05.062000 audit[5345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdbf77e100 a2=94 a3=3 items=0 ppid=5220 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.062000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:25:05.062000 audit: BPF prog-id=206 op=LOAD Jan 28 01:25:05.062000 audit[5345]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdbf77e140 a2=94 a3=7ffdbf77e320 items=0 ppid=5220 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.062000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:25:05.062000 audit: BPF prog-id=206 op=UNLOAD Jan 28 01:25:05.062000 audit[5345]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdbf77e140 a2=94 a3=7ffdbf77e320 items=0 ppid=5220 pid=5345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.062000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 01:25:05.063000 audit: BPF prog-id=207 op=LOAD Jan 28 01:25:05.063000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd30c06e80 a2=98 a3=3 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.063000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.063000 audit: BPF prog-id=207 op=UNLOAD Jan 28 01:25:05.063000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd30c06e50 a3=0 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.063000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.063000 audit: BPF prog-id=208 op=LOAD Jan 28 01:25:05.063000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd30c06c70 a2=94 a3=54428f items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.063000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.063000 audit: BPF prog-id=208 op=UNLOAD Jan 28 01:25:05.063000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd30c06c70 a2=94 a3=54428f items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.063000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.063000 audit: BPF prog-id=209 op=LOAD Jan 28 01:25:05.063000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd30c06ca0 a2=94 a3=2 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.063000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.063000 audit: BPF prog-id=209 op=UNLOAD Jan 28 01:25:05.063000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd30c06ca0 a2=0 a3=2 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.063000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.174000 audit: BPF prog-id=210 op=LOAD Jan 28 01:25:05.174000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd30c06b60 a2=94 a3=1 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:25:05.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.174000 audit: BPF prog-id=210 op=UNLOAD Jan 28 01:25:05.174000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd30c06b60 a2=94 a3=1 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.183000 audit: BPF prog-id=211 op=LOAD Jan 28 01:25:05.183000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd30c06b50 a2=94 a3=4 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.183000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.183000 audit: BPF prog-id=211 op=UNLOAD Jan 28 01:25:05.183000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd30c06b50 a2=0 a3=4 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.183000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.183000 audit: BPF prog-id=212 op=LOAD Jan 28 01:25:05.183000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd30c069b0 a2=94 a3=5 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.183000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.183000 audit: BPF prog-id=212 op=UNLOAD Jan 28 01:25:05.183000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd30c069b0 a2=0 a3=5 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.183000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.183000 audit: BPF prog-id=213 op=LOAD Jan 28 01:25:05.183000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd30c06bd0 a2=94 a3=6 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.183000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.183000 audit: BPF prog-id=213 op=UNLOAD Jan 28 01:25:05.183000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd30c06bd0 a2=0 a3=6 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.183000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.184000 audit: BPF prog-id=214 op=LOAD Jan 28 01:25:05.184000 audit[5346]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd30c06380 a2=94 a3=88 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.184000 audit: BPF prog-id=215 op=LOAD Jan 28 01:25:05.184000 audit[5346]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd30c06200 a2=94 a3=2 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.184000 audit: BPF prog-id=215 op=UNLOAD Jan 28 01:25:05.184000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd30c06230 a2=0 a3=7ffd30c06330 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.184000 audit: BPF prog-id=214 op=UNLOAD Jan 28 01:25:05.184000 audit[5346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2ae50d10 a2=0 a3=6867e350c1993fb2 items=0 ppid=5220 pid=5346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 01:25:05.190000 audit: BPF prog-id=216 op=LOAD Jan 28 01:25:05.190000 audit[5349]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe33be06a0 a2=98 a3=1999999999999999 items=0 ppid=5220 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.190000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:25:05.190000 audit: BPF prog-id=216 op=UNLOAD Jan 28 01:25:05.190000 audit[5349]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe33be0670 a3=0 items=0 ppid=5220 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.190000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:25:05.190000 audit: BPF prog-id=217 op=LOAD Jan 28 01:25:05.190000 audit[5349]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe33be0580 a2=94 a3=ffff items=0 ppid=5220 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.190000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:25:05.190000 audit: BPF prog-id=217 op=UNLOAD Jan 28 01:25:05.190000 audit[5349]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe33be0580 a2=94 a3=ffff items=0 ppid=5220 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.190000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:25:05.190000 audit: BPF prog-id=218 op=LOAD Jan 28 01:25:05.190000 audit[5349]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe33be05c0 a2=94 a3=7ffe33be07a0 items=0 ppid=5220 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.190000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:25:05.190000 audit: BPF prog-id=218 op=UNLOAD Jan 28 01:25:05.190000 audit[5349]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe33be05c0 a2=94 a3=7ffe33be07a0 items=0 ppid=5220 pid=5349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.190000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 01:25:05.274096 systemd-networkd[2200]: vxlan.calico: Link UP Jan 28 01:25:05.274103 systemd-networkd[2200]: vxlan.calico: Gained carrier Jan 28 01:25:05.294000 audit: BPF prog-id=219 op=LOAD Jan 28 01:25:05.294000 audit[5374]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3f7733a0 a2=98 a3=0 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.294000 audit: BPF prog-id=219 op=UNLOAD Jan 28 01:25:05.294000 audit[5374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe3f773370 a3=0 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.294000 audit: BPF prog-id=220 op=LOAD Jan 28 01:25:05.294000 audit[5374]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3f7731b0 a2=94 a3=54428f items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.294000 audit: BPF prog-id=220 op=UNLOAD Jan 28 01:25:05.294000 audit[5374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe3f7731b0 a2=94 a3=54428f items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.294000 audit: BPF prog-id=221 op=LOAD Jan 28 01:25:05.294000 audit[5374]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3f7731e0 a2=94 a3=2 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.294000 audit: BPF prog-id=221 op=UNLOAD Jan 28 01:25:05.294000 audit[5374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe3f7731e0 a2=0 a3=2 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.295000 audit: BPF prog-id=222 op=LOAD Jan 28 01:25:05.295000 audit[5374]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe3f772f90 a2=94 a3=4 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.295000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.295000 audit: BPF prog-id=222 op=UNLOAD Jan 28 01:25:05.295000 audit[5374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe3f772f90 a2=94 a3=4 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.295000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.295000 audit: BPF prog-id=223 op=LOAD Jan 28 01:25:05.295000 audit[5374]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe3f773090 a2=94 a3=7ffe3f773210 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.295000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.296000 audit: BPF prog-id=223 op=UNLOAD Jan 28 01:25:05.296000 audit[5374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe3f773090 a2=0 a3=7ffe3f773210 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.296000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.296000 audit: BPF prog-id=224 op=LOAD Jan 28 01:25:05.296000 audit[5374]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe3f7727c0 a2=94 a3=2 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.296000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.296000 audit: BPF prog-id=224 op=UNLOAD Jan 28 01:25:05.296000 audit[5374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe3f7727c0 a2=0 a3=2 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.296000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.296000 audit: BPF prog-id=225 op=LOAD Jan 28 01:25:05.296000 audit[5374]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe3f7728c0 a2=94 a3=30 items=0 ppid=5220 pid=5374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.296000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 01:25:05.304000 audit: BPF prog-id=226 op=LOAD Jan 28 01:25:05.304000 audit[5378]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1a4e0ea0 a2=98 a3=0 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.304000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.304000 audit: BPF prog-id=226 op=UNLOAD Jan 28 01:25:05.304000 audit[5378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff1a4e0e70 a3=0 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.304000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.304000 audit: BPF prog-id=227 op=LOAD Jan 28 01:25:05.304000 audit[5378]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1a4e0c90 a2=94 a3=54428f items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.304000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.304000 audit: BPF prog-id=227 op=UNLOAD Jan 28 01:25:05.304000 audit[5378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff1a4e0c90 a2=94 a3=54428f items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.304000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.304000 audit: BPF prog-id=228 op=LOAD Jan 28 01:25:05.304000 audit[5378]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1a4e0cc0 a2=94 a3=2 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.304000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.304000 audit: BPF prog-id=228 op=UNLOAD Jan 28 01:25:05.304000 audit[5378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff1a4e0cc0 a2=0 a3=2 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.304000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.414000 audit: BPF prog-id=229 op=LOAD Jan 28 01:25:05.414000 audit[5378]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1a4e0b80 a2=94 a3=1 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.414000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.414000 audit: BPF prog-id=229 op=UNLOAD Jan 28 01:25:05.414000 audit[5378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff1a4e0b80 a2=94 a3=1 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.414000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.422000 audit: BPF prog-id=230 op=LOAD Jan 28 01:25:05.422000 audit[5378]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1a4e0b70 a2=94 a3=4 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.422000 audit: BPF prog-id=230 op=UNLOAD Jan 28 01:25:05.422000 audit[5378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff1a4e0b70 a2=0 a3=4 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.422000 audit: BPF prog-id=231 op=LOAD Jan 28 01:25:05.422000 audit[5378]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff1a4e09d0 a2=94 a3=5 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.422000 audit: BPF prog-id=231 op=UNLOAD Jan 28 01:25:05.422000 audit[5378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff1a4e09d0 a2=0 a3=5 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.422000 audit: BPF prog-id=232 op=LOAD Jan 28 01:25:05.422000 audit[5378]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1a4e0bf0 a2=94 a3=6 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.422000 audit: BPF prog-id=232 op=UNLOAD Jan 28 01:25:05.422000 audit[5378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff1a4e0bf0 a2=0 a3=6 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.422000 audit: BPF prog-id=233 op=LOAD Jan 28 01:25:05.422000 audit[5378]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff1a4e03a0 a2=94 a3=88 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.422000 audit: BPF prog-id=234 op=LOAD Jan 28 01:25:05.422000 audit[5378]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff1a4e0220 a2=94 a3=2 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.422000 audit: BPF prog-id=234 op=UNLOAD Jan 28 01:25:05.422000 audit[5378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff1a4e0250 a2=0 
a3=7fff1a4e0350 items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.422000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.423000 audit: BPF prog-id=233 op=UNLOAD Jan 28 01:25:05.423000 audit[5378]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=39694d10 a2=0 a3=43f9a59b08fdbcad items=0 ppid=5220 pid=5378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 01:25:05.429000 audit: BPF prog-id=225 op=UNLOAD Jan 28 01:25:05.429000 audit[5220]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0007ca3c0 a2=0 a3=0 items=0 ppid=5203 pid=5220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.429000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 01:25:05.445989 systemd-networkd[2200]: calibc57d233c4d: Gained IPv6LL Jan 28 01:25:05.518000 audit[5403]: NETFILTER_CFG table=nat:120 family=2 entries=15 op=nft_register_chain pid=5403 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:05.518000 audit[5403]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff627693b0 a2=0 a3=7fff6276939c items=0 ppid=5220 pid=5403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.518000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:05.519000 audit[5404]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=5404 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:05.519000 audit[5404]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe9a74e900 a2=0 a3=7ffe9a74e8ec items=0 ppid=5220 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.519000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:05.540000 audit[5402]: NETFILTER_CFG table=raw:122 family=2 entries=21 op=nft_register_chain pid=5402 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:05.540000 audit[5402]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffaa855b50 a2=0 a3=7fffaa855b3c items=0 ppid=5220 pid=5402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.540000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:05.542000 audit[5406]: NETFILTER_CFG table=filter:123 family=2 entries=94 op=nft_register_chain pid=5406 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:05.542000 audit[5406]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7fff6efe25e0 a2=0 a3=7fff6efe25cc items=0 ppid=5220 pid=5406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.542000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:05.779024 kubelet[4040]: E0128 01:25:05.778980 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:25:05.800000 audit[5419]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:05.800000 audit[5419]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffde6a98f60 a2=0 a3=7ffde6a98f4c items=0 ppid=4194 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:05.805000 audit[5419]: NETFILTER_CFG table=nat:125 family=2 entries=14 op=nft_register_rule pid=5419 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:05.805000 audit[5419]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffde6a98f60 a2=0 a3=0 items=0 ppid=4194 pid=5419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:05.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:06.342096 systemd-networkd[2200]: vxlan.calico: Gained IPv6LL Jan 28 01:25:09.663506 containerd[2559]: time="2026-01-28T01:25:09.663449514Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-848997c984-zqhg5,Uid:d99b8c9d-ad76-485a-94c4-e2c93263797f,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:25:09.667078 containerd[2559]: time="2026-01-28T01:25:09.667048879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wlbng,Uid:d9af8dd1-e2bd-462a-8a21-d0c27cf0950b,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:09.783022 systemd-networkd[2200]: cali1677e3c1515: Link UP Jan 28 01:25:09.784164 systemd-networkd[2200]: cali1677e3c1515: Gained carrier Jan 28 01:25:09.796818 containerd[2559]: 2026-01-28 01:25:09.715 [INFO][5424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0 calico-apiserver-848997c984- calico-apiserver d99b8c9d-ad76-485a-94c4-e2c93263797f 803 0 2026-01-28 01:24:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:848997c984 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4593.0.0-n-2270f1152e calico-apiserver-848997c984-zqhg5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1677e3c1515 [] [] }} ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-zqhg5" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-" Jan 28 01:25:09.796818 containerd[2559]: 2026-01-28 01:25:09.715 [INFO][5424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-zqhg5" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" Jan 28 01:25:09.796818 containerd[2559]: 2026-01-28 01:25:09.745 [INFO][5450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" HandleID="k8s-pod-network.8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Workload="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" Jan 28 01:25:09.797225 containerd[2559]: 2026-01-28 01:25:09.745 [INFO][5450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" HandleID="k8s-pod-network.8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Workload="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4593.0.0-n-2270f1152e", "pod":"calico-apiserver-848997c984-zqhg5", "timestamp":"2026-01-28 01:25:09.745126117 +0000 UTC"}, Hostname:"ci-4593.0.0-n-2270f1152e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:09.797225 containerd[2559]: 2026-01-28 01:25:09.745 [INFO][5450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:09.797225 containerd[2559]: 2026-01-28 01:25:09.745 [INFO][5450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:25:09.797225 containerd[2559]: 2026-01-28 01:25:09.745 [INFO][5450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593.0.0-n-2270f1152e' Jan 28 01:25:09.797225 containerd[2559]: 2026-01-28 01:25:09.752 [INFO][5450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.797225 containerd[2559]: 2026-01-28 01:25:09.755 [INFO][5450] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.797225 containerd[2559]: 2026-01-28 01:25:09.758 [INFO][5450] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.797225 containerd[2559]: 2026-01-28 01:25:09.759 [INFO][5450] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.797225 containerd[2559]: 2026-01-28 01:25:09.762 [INFO][5450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.797576 containerd[2559]: 2026-01-28 01:25:09.762 [INFO][5450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.797576 containerd[2559]: 2026-01-28 01:25:09.763 [INFO][5450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb Jan 28 01:25:09.797576 containerd[2559]: 2026-01-28 01:25:09.767 [INFO][5450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.797576 containerd[2559]: 2026-01-28 01:25:09.773 [INFO][5450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.130/26] block=192.168.125.128/26 handle="k8s-pod-network.8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.797576 containerd[2559]: 2026-01-28 01:25:09.773 [INFO][5450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.130/26] handle="k8s-pod-network.8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.797576 containerd[2559]: 2026-01-28 01:25:09.773 [INFO][5450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
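Calico's IPAM confirms an affinity for the 192.168.125.128/26 block on this node and claims 192.168.125.130 here; .131 and .132 are claimed further down for the csi-node-driver and coredns pods. A quick sanity check of that block arithmetic with the standard library (a sketch, using only addresses taken from this excerpt):

```python
import ipaddress

# The IPAM block this node has an affinity for, per the entries above.
block = ipaddress.ip_network("192.168.125.128/26")

# Addresses claimed in this excerpt (.130 here; .131 and .132 further down).
claimed = [ipaddress.ip_address(a) for a in
           ("192.168.125.130", "192.168.125.131", "192.168.125.132")]

print(block.num_addresses)               # 64 addresses in a /26
print(all(a in block for a in claimed))  # True -- every claim falls inside the block
```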
Jan 28 01:25:09.797576 containerd[2559]: 2026-01-28 01:25:09.774 [INFO][5450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.130/26] IPv6=[] ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" HandleID="k8s-pod-network.8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Workload="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" Jan 28 01:25:09.797952 containerd[2559]: 2026-01-28 01:25:09.776 [INFO][5424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-zqhg5" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0", GenerateName:"calico-apiserver-848997c984-", Namespace:"calico-apiserver", SelfLink:"", UID:"d99b8c9d-ad76-485a-94c4-e2c93263797f", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848997c984", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"", Pod:"calico-apiserver-848997c984-zqhg5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1677e3c1515", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:09.798812 containerd[2559]: 2026-01-28 01:25:09.776 [INFO][5424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.130/32] ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-zqhg5" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" Jan 28 01:25:09.798812 containerd[2559]: 2026-01-28 01:25:09.776 [INFO][5424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1677e3c1515 ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-zqhg5" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" Jan 28 01:25:09.798812 containerd[2559]: 2026-01-28 01:25:09.784 [INFO][5424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-zqhg5" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" Jan 28 01:25:09.798913 containerd[2559]: 2026-01-28 01:25:09.785 
[INFO][5424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-zqhg5" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0", GenerateName:"calico-apiserver-848997c984-", Namespace:"calico-apiserver", SelfLink:"", UID:"d99b8c9d-ad76-485a-94c4-e2c93263797f", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848997c984", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb", Pod:"calico-apiserver-848997c984-zqhg5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1677e3c1515", MAC:"86:5d:48:1d:b6:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:09.799010 containerd[2559]: 2026-01-28 01:25:09.794 [INFO][5424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-zqhg5" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--zqhg5-eth0" Jan 28 01:25:09.806000 audit[5474]: NETFILTER_CFG table=filter:126 family=2 entries=50 op=nft_register_chain pid=5474 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:09.809206 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 28 01:25:09.809278 kernel: audit: type=1325 audit(1769563509.806:662): table=filter:126 family=2 entries=50 op=nft_register_chain pid=5474 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:09.806000 audit[5474]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7fff43ca03a0 a2=0 a3=7fff43ca038c items=0 ppid=5220 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.813988 kernel: audit: type=1300 audit(1769563509.806:662): arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7fff43ca03a0 a2=0 a3=7fff43ca038c items=0 ppid=5220 pid=5474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
01:25:09.806000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:09.816793 kernel: audit: type=1327 audit(1769563509.806:662): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:09.833894 containerd[2559]: time="2026-01-28T01:25:09.833830373Z" level=info msg="connecting to shim 8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb" address="unix:///run/containerd/s/ad7f55b4f3903876750726554d48597c25001743592d7cc6f8b6240c271bedd9" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:09.855293 systemd[1]: Started cri-containerd-8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb.scope - libcontainer container 8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb. Jan 28 01:25:09.870650 kernel: audit: type=1334 audit(1769563509.867:663): prog-id=235 op=LOAD Jan 28 01:25:09.870705 kernel: audit: type=1334 audit(1769563509.868:664): prog-id=236 op=LOAD Jan 28 01:25:09.867000 audit: BPF prog-id=235 op=LOAD Jan 28 01:25:09.868000 audit: BPF prog-id=236 op=LOAD Jan 28 01:25:09.868000 audit[5495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5484 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.874326 kernel: audit: type=1300 audit(1769563509.868:664): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5484 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865326363386338653136366664316330366265643865303931326430 Jan 28 01:25:09.883213 kernel: audit: type=1327 audit(1769563509.868:664): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865326363386338653136366664316330366265643865303931326430 Jan 28 01:25:09.868000 audit: BPF prog-id=236 op=UNLOAD Jan 28 01:25:09.885886 kernel: audit: type=1334 audit(1769563509.868:665): prog-id=236 op=UNLOAD Jan 28 01:25:09.868000 audit[5495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5484 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.899730 kernel: audit: type=1300 audit(1769563509.868:665): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5484 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.899781 kernel: audit: type=1327 audit(1769563509.868:665): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865326363386338653136366664316330366265643865303931326430 Jan 28 01:25:09.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865326363386338653136366664316330366265643865303931326430 Jan 28 01:25:09.898415 systemd-networkd[2200]: calib75e40a1906: Link UP Jan 28 01:25:09.899736 systemd-networkd[2200]: calib75e40a1906: Gained carrier Jan 28 01:25:09.868000 audit: BPF prog-id=237 op=LOAD Jan 28 01:25:09.868000 audit[5495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5484 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865326363386338653136366664316330366265643865303931326430 Jan 28 01:25:09.868000 audit: BPF prog-id=238 op=LOAD Jan 28 01:25:09.868000 audit[5495]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5484 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865326363386338653136366664316330366265643865303931326430 Jan 28 01:25:09.868000 audit: BPF prog-id=238 op=UNLOAD Jan 28 01:25:09.868000 audit[5495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5484 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865326363386338653136366664316330366265643865303931326430 Jan 28 01:25:09.868000 audit: BPF prog-id=237 op=UNLOAD Jan 28 01:25:09.868000 audit[5495]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5484 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865326363386338653136366664316330366265643865303931326430 Jan 28 01:25:09.868000 audit: BPF prog-id=239 op=LOAD Jan 28 01:25:09.868000 audit[5495]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5484 pid=5495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865326363386338653136366664316330366265643865303931326430 Jan 28 01:25:09.918087 containerd[2559]: 2026-01-28 01:25:09.724 [INFO][5435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0 csi-node-driver- calico-system d9af8dd1-e2bd-462a-8a21-d0c27cf0950b 698 0 2026-01-28 01:24:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4593.0.0-n-2270f1152e csi-node-driver-wlbng eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib75e40a1906 [] [] }} ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Namespace="calico-system" Pod="csi-node-driver-wlbng" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-" Jan 28 01:25:09.918087 containerd[2559]: 2026-01-28 01:25:09.724 [INFO][5435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Namespace="calico-system" Pod="csi-node-driver-wlbng" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" Jan 28 01:25:09.918087 containerd[2559]: 2026-01-28 01:25:09.753 [INFO][5456] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" HandleID="k8s-pod-network.e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Workload="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" Jan 28 01:25:09.918242 containerd[2559]: 2026-01-28 01:25:09.754 [INFO][5456] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" HandleID="k8s-pod-network.e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Workload="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593.0.0-n-2270f1152e", "pod":"csi-node-driver-wlbng", "timestamp":"2026-01-28 01:25:09.753813945 +0000 UTC"}, Hostname:"ci-4593.0.0-n-2270f1152e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:09.918242 containerd[2559]: 2026-01-28 01:25:09.754 [INFO][5456] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:09.918242 containerd[2559]: 2026-01-28 01:25:09.773 [INFO][5456] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
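The SYSCALL audit records interleaved above come from runc setting up the container and from the nft-based iptables tools talking to the kernel over netlink: arch=c000003e is x86_64, where syscall 321 is bpf, 46 is sendmsg, and 3 is close. A small sketch for annotating such lines while reading the log; the mapping covers only the numbers seen here and `annotate` is an illustrative helper:

```python
import re

# Syscall numbers appearing in the audit records above; arch=c000003e is x86_64.
# Only the handful seen in this log, not a full table.
X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

def annotate(line: str) -> str:
    # append a human-readable syscall name to an x86_64 SYSCALL audit line
    m = re.search(r"arch=c000003e syscall=(\d+)", line)
    if not m:
        return line
    nr = int(m.group(1))
    return f"{line}  <- syscall {nr} = {X86_64_SYSCALLS.get(nr, 'unknown')}"

print(annotate("audit[5495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21"))
# audit[5495]: SYSCALL arch=c000003e syscall=321 success=yes exit=21  <- syscall 321 = bpf
```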
Jan 28 01:25:09.918242 containerd[2559]: 2026-01-28 01:25:09.774 [INFO][5456] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593.0.0-n-2270f1152e' Jan 28 01:25:09.918242 containerd[2559]: 2026-01-28 01:25:09.853 [INFO][5456] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.918242 containerd[2559]: 2026-01-28 01:25:09.856 [INFO][5456] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.918242 containerd[2559]: 2026-01-28 01:25:09.862 [INFO][5456] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.918242 containerd[2559]: 2026-01-28 01:25:09.863 [INFO][5456] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.918242 containerd[2559]: 2026-01-28 01:25:09.865 [INFO][5456] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.918438 containerd[2559]: 2026-01-28 01:25:09.865 [INFO][5456] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.918438 containerd[2559]: 2026-01-28 01:25:09.866 [INFO][5456] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a Jan 28 01:25:09.918438 containerd[2559]: 2026-01-28 01:25:09.874 [INFO][5456] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.918438 containerd[2559]: 2026-01-28 01:25:09.884 [INFO][5456] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.131/26] block=192.168.125.128/26 handle="k8s-pod-network.e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.918438 containerd[2559]: 2026-01-28 01:25:09.884 [INFO][5456] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.131/26] handle="k8s-pod-network.e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:09.918438 containerd[2559]: 2026-01-28 01:25:09.884 [INFO][5456] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
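The two CNI ADD requests above serialize on Calico's host-wide IPAM lock: the csi-node-driver request logs "About to acquire" at 01:25:09.754 but only acquires the lock at 01:25:09.773, right after the apiserver request releases it. The wait and hold times can be read straight off the logged timestamps; a minimal sketch, assuming the timestamps are parsed as shown:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps copied from the csi-node-driver IPAM entries above.
about_to_acquire = datetime.strptime("2026-01-28 01:25:09.754", fmt)
acquired         = datetime.strptime("2026-01-28 01:25:09.773", fmt)
released         = datetime.strptime("2026-01-28 01:25:09.884", fmt)

print((acquired - about_to_acquire).total_seconds())  # ~0.019 s spent waiting for the lock
print((released - acquired).total_seconds())          # ~0.111 s holding it
```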
Jan 28 01:25:09.918438 containerd[2559]: 2026-01-28 01:25:09.884 [INFO][5456] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.131/26] IPv6=[] ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" HandleID="k8s-pod-network.e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Workload="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" Jan 28 01:25:09.918579 containerd[2559]: 2026-01-28 01:25:09.893 [INFO][5435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Namespace="calico-system" Pod="csi-node-driver-wlbng" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9af8dd1-e2bd-462a-8a21-d0c27cf0950b", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"", Pod:"csi-node-driver-wlbng", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib75e40a1906", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:09.918632 containerd[2559]: 2026-01-28 01:25:09.893 [INFO][5435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.131/32] ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Namespace="calico-system" Pod="csi-node-driver-wlbng" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" Jan 28 01:25:09.918632 containerd[2559]: 2026-01-28 01:25:09.893 [INFO][5435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib75e40a1906 ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Namespace="calico-system" Pod="csi-node-driver-wlbng" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" Jan 28 01:25:09.918632 containerd[2559]: 2026-01-28 01:25:09.901 [INFO][5435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Namespace="calico-system" Pod="csi-node-driver-wlbng" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" Jan 28 01:25:09.918698 containerd[2559]: 2026-01-28 01:25:09.901 [INFO][5435] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Namespace="calico-system" Pod="csi-node-driver-wlbng" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9af8dd1-e2bd-462a-8a21-d0c27cf0950b", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a", Pod:"csi-node-driver-wlbng", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib75e40a1906", MAC:"22:d7:f4:21:9d:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:09.918750 containerd[2559]: 2026-01-28 01:25:09.914 [INFO][5435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" Namespace="calico-system" Pod="csi-node-driver-wlbng" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-csi--node--driver--wlbng-eth0" Jan 28 01:25:09.932000 audit[5526]: NETFILTER_CFG table=filter:127 family=2 entries=46 op=nft_register_chain pid=5526 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:09.932000 audit[5526]: SYSCALL arch=c000003e syscall=46 success=yes exit=23616 a0=3 a1=7fff33a57330 a2=0 a3=7fff33a5731c items=0 ppid=5220 pid=5526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.932000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:09.933448 containerd[2559]: time="2026-01-28T01:25:09.933398669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848997c984-zqhg5,Uid:d99b8c9d-ad76-485a-94c4-e2c93263797f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8e2cc8c8e166fd1c06bed8e0912d051ac25a4a3b0524dcb994dc9fcd00c545bb\"" Jan 28 01:25:09.934748 containerd[2559]: time="2026-01-28T01:25:09.934732201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:25:09.958693 containerd[2559]: time="2026-01-28T01:25:09.958198970Z" level=info msg="connecting to shim 
e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a" address="unix:///run/containerd/s/6da5755fa770748bbf8c95109ed48679c3173dbca8c0d476b5af785f0d858803" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:09.975009 systemd[1]: Started cri-containerd-e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a.scope - libcontainer container e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a. Jan 28 01:25:09.980000 audit: BPF prog-id=240 op=LOAD Jan 28 01:25:09.980000 audit: BPF prog-id=241 op=LOAD Jan 28 01:25:09.980000 audit[5547]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5534 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535336531656135616238643736303365323832326262333462643563 Jan 28 01:25:09.980000 audit: BPF prog-id=241 op=UNLOAD Jan 28 01:25:09.980000 audit[5547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5534 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535336531656135616238643736303365323832326262333462643563 Jan 28 01:25:09.981000 audit: BPF prog-id=242 op=LOAD Jan 28 01:25:09.981000 audit[5547]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5534 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535336531656135616238643736303365323832326262333462643563 Jan 28 01:25:09.981000 audit: BPF prog-id=243 op=LOAD Jan 28 01:25:09.981000 audit[5547]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5534 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535336531656135616238643736303365323832326262333462643563 Jan 28 01:25:09.981000 audit: BPF prog-id=243 op=UNLOAD Jan 28 01:25:09.981000 audit[5547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5534 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535336531656135616238643736303365323832326262333462643563 Jan 28 01:25:09.981000 audit: BPF prog-id=242 op=UNLOAD Jan 28 01:25:09.981000 audit[5547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5534 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535336531656135616238643736303365323832326262333462643563 Jan 28 01:25:09.981000 audit: BPF prog-id=244 op=LOAD Jan 28 01:25:09.981000 audit[5547]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5534 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:09.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535336531656135616238643736303365323832326262333462643563 Jan 28 01:25:09.995251 containerd[2559]: time="2026-01-28T01:25:09.995198232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wlbng,Uid:d9af8dd1-e2bd-462a-8a21-d0c27cf0950b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e53e1ea5ab8d7603e2822bb34bd5c3c5a00bfa68750f11c44c5924ae3dd3690a\"" Jan 28 01:25:10.180309 containerd[2559]: time="2026-01-28T01:25:10.180153608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:10.182647 containerd[2559]: time="2026-01-28T01:25:10.182607585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:25:10.182711 containerd[2559]: time="2026-01-28T01:25:10.182671640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:10.182839 kubelet[4040]: E0128 01:25:10.182806 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:10.183134 kubelet[4040]: E0128 01:25:10.182842 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:10.183134 kubelet[4040]: E0128 01:25:10.183010 4040 kuberuntime_manager.go:1449] "Unhandled 
Error" err="container calico-apiserver start failed in pod calico-apiserver-848997c984-zqhg5_calico-apiserver(d99b8c9d-ad76-485a-94c4-e2c93263797f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:10.183134 kubelet[4040]: E0128 01:25:10.183057 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:25:10.183576 containerd[2559]: time="2026-01-28T01:25:10.183546400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:25:10.432872 containerd[2559]: time="2026-01-28T01:25:10.432776377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:10.439197 containerd[2559]: time="2026-01-28T01:25:10.439170245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:25:10.439255 containerd[2559]: time="2026-01-28T01:25:10.439230684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:10.439425 kubelet[4040]: E0128 01:25:10.439388 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:25:10.439465 kubelet[4040]: E0128 01:25:10.439423 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:25:10.439495 kubelet[4040]: E0128 01:25:10.439485 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:10.440335 containerd[2559]: time="2026-01-28T01:25:10.440303368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:25:10.687308 containerd[2559]: time="2026-01-28T01:25:10.687239744Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:10.689717 containerd[2559]: time="2026-01-28T01:25:10.689693862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:25:10.689770 
containerd[2559]: time="2026-01-28T01:25:10.689756924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:10.689917 kubelet[4040]: E0128 01:25:10.689889 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:25:10.689985 kubelet[4040]: E0128 01:25:10.689921 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:25:10.690010 kubelet[4040]: E0128 01:25:10.689997 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:10.690073 kubelet[4040]: E0128 01:25:10.690032 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:25:10.789333 kubelet[4040]: E0128 01:25:10.789286 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:25:10.790584 kubelet[4040]: E0128 01:25:10.790531 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:25:10.822187 systemd-networkd[2200]: cali1677e3c1515: Gained IPv6LL Jan 28 01:25:10.825000 audit[5579]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:10.825000 audit[5579]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff07df8170 a2=0 a3=7fff07df815c items=0 ppid=4194 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:10.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:10.828000 audit[5579]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:10.828000 audit[5579]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff07df8170 a2=0 a3=0 items=0 ppid=4194 pid=5579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:10.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:11.526098 systemd-networkd[2200]: calib75e40a1906: Gained IPv6LL Jan 28 01:25:11.662251 containerd[2559]: time="2026-01-28T01:25:11.662063632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7z2xz,Uid:24495090-d436-41d1-b444-e32503155f3d,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:11.747506 systemd-networkd[2200]: cali1c20b6849ca: Link UP Jan 28 01:25:11.748329 systemd-networkd[2200]: cali1c20b6849ca: Gained carrier Jan 28 01:25:11.760886 containerd[2559]: 2026-01-28 01:25:11.694 [INFO][5581] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0 coredns-66bc5c9577- kube-system 24495090-d436-41d1-b444-e32503155f3d 799 0 2026-01-28 01:24:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4593.0.0-n-2270f1152e coredns-66bc5c9577-7z2xz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1c20b6849ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Namespace="kube-system" Pod="coredns-66bc5c9577-7z2xz" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-" Jan 28 01:25:11.760886 containerd[2559]: 2026-01-28 01:25:11.694 [INFO][5581] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Namespace="kube-system" Pod="coredns-66bc5c9577-7z2xz" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" Jan 28 01:25:11.760886 containerd[2559]: 2026-01-28 01:25:11.714 [INFO][5593] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" HandleID="k8s-pod-network.dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Workload="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" Jan 28 01:25:11.761247 containerd[2559]: 2026-01-28 01:25:11.714 [INFO][5593] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" HandleID="k8s-pod-network.dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Workload="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4593.0.0-n-2270f1152e", "pod":"coredns-66bc5c9577-7z2xz", "timestamp":"2026-01-28 01:25:11.714420897 +0000 UTC"}, Hostname:"ci-4593.0.0-n-2270f1152e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:11.761247 containerd[2559]: 2026-01-28 01:25:11.714 [INFO][5593] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:11.761247 containerd[2559]: 2026-01-28 01:25:11.714 [INFO][5593] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:25:11.761247 containerd[2559]: 2026-01-28 01:25:11.714 [INFO][5593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593.0.0-n-2270f1152e' Jan 28 01:25:11.761247 containerd[2559]: 2026-01-28 01:25:11.720 [INFO][5593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:11.761247 containerd[2559]: 2026-01-28 01:25:11.723 [INFO][5593] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:11.761247 containerd[2559]: 2026-01-28 01:25:11.727 [INFO][5593] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:11.761247 containerd[2559]: 2026-01-28 01:25:11.729 [INFO][5593] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:11.761247 containerd[2559]: 2026-01-28 01:25:11.732 [INFO][5593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:11.761443 containerd[2559]: 2026-01-28 01:25:11.732 [INFO][5593] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:11.761443 containerd[2559]: 2026-01-28 01:25:11.733 [INFO][5593] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c Jan 28 01:25:11.761443 containerd[2559]: 2026-01-28 01:25:11.737 [INFO][5593] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:11.761443 containerd[2559]: 2026-01-28 01:25:11.743 [INFO][5593] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.132/26] block=192.168.125.128/26 
handle="k8s-pod-network.dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:11.761443 containerd[2559]: 2026-01-28 01:25:11.743 [INFO][5593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.132/26] handle="k8s-pod-network.dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:11.761443 containerd[2559]: 2026-01-28 01:25:11.743 [INFO][5593] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:25:11.761443 containerd[2559]: 2026-01-28 01:25:11.744 [INFO][5593] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.132/26] IPv6=[] ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" HandleID="k8s-pod-network.dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Workload="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" Jan 28 01:25:11.761583 containerd[2559]: 2026-01-28 01:25:11.745 [INFO][5581] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Namespace="kube-system" Pod="coredns-66bc5c9577-7z2xz" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"24495090-d436-41d1-b444-e32503155f3d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"", Pod:"coredns-66bc5c9577-7z2xz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c20b6849ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:11.761583 containerd[2559]: 2026-01-28 01:25:11.745 [INFO][5581] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.132/32] 
ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Namespace="kube-system" Pod="coredns-66bc5c9577-7z2xz" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" Jan 28 01:25:11.761583 containerd[2559]: 2026-01-28 01:25:11.745 [INFO][5581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c20b6849ca ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Namespace="kube-system" Pod="coredns-66bc5c9577-7z2xz" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" Jan 28 01:25:11.761583 containerd[2559]: 2026-01-28 01:25:11.748 [INFO][5581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Namespace="kube-system" Pod="coredns-66bc5c9577-7z2xz" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" Jan 28 01:25:11.761583 containerd[2559]: 2026-01-28 01:25:11.749 [INFO][5581] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Namespace="kube-system" Pod="coredns-66bc5c9577-7z2xz" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"24495090-d436-41d1-b444-e32503155f3d", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c", Pod:"coredns-66bc5c9577-7z2xz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c20b6849ca", MAC:"f6:58:a3:e2:8d:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:11.761781 
containerd[2559]: 2026-01-28 01:25:11.758 [INFO][5581] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" Namespace="kube-system" Pod="coredns-66bc5c9577-7z2xz" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--7z2xz-eth0" Jan 28 01:25:11.775000 audit[5608]: NETFILTER_CFG table=filter:130 family=2 entries=52 op=nft_register_chain pid=5608 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:11.775000 audit[5608]: SYSCALL arch=c000003e syscall=46 success=yes exit=26576 a0=3 a1=7ffcdec61af0 a2=0 a3=7ffcdec61adc items=0 ppid=5220 pid=5608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.775000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:11.796686 kubelet[4040]: E0128 01:25:11.796227 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:25:11.798061 kubelet[4040]: E0128 01:25:11.798008 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:25:11.802545 containerd[2559]: time="2026-01-28T01:25:11.802508979Z" level=info msg="connecting to shim dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c" address="unix:///run/containerd/s/f3085c3e14191566e04dd0e11ad36e0c0b9eb9ce9f0fabda0c76a7be5e6218c6" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:11.838074 systemd[1]: Started cri-containerd-dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c.scope - libcontainer container dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c. 
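The audit PROCTITLE fields above encode the audited process's command line as hex, with NUL bytes separating the argv entries (here an iptables-nft-restore invocation). A minimal decoding sketch, Python standard library only; the function name is just illustrative:

    def decode_proctitle(hex_proctitle: str) -> str:
        """Turn an audit PROCTITLE hex payload back into a readable command line."""
        raw = bytes.fromhex(hex_proctitle)
        # argv entries are NUL-separated; join the non-empty parts with spaces.
        return " ".join(p.decode("utf-8", errors="replace")
                        for p in raw.split(b"\x00") if p)

    # The iptables record above decodes to:
    print(decode_proctitle(
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130"
        "002D2D776169742D696E74657276616C003530303030"))
    # iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000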
Jan 28 01:25:11.846000 audit: BPF prog-id=245 op=LOAD Jan 28 01:25:11.846000 audit: BPF prog-id=246 op=LOAD Jan 28 01:25:11.846000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5618 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616336393164636261393563633635333338373035613966626437 Jan 28 01:25:11.847000 audit: BPF prog-id=246 op=UNLOAD Jan 28 01:25:11.847000 audit[5629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5618 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616336393164636261393563633635333338373035613966626437 Jan 28 01:25:11.847000 audit: BPF prog-id=247 op=LOAD Jan 28 01:25:11.847000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5618 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616336393164636261393563633635333338373035613966626437 Jan 28 01:25:11.847000 audit: BPF prog-id=248 op=LOAD Jan 28 01:25:11.847000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5618 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616336393164636261393563633635333338373035613966626437 Jan 28 01:25:11.847000 audit: BPF prog-id=248 op=UNLOAD Jan 28 01:25:11.847000 audit[5629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5618 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616336393164636261393563633635333338373035613966626437 Jan 28 01:25:11.847000 audit: BPF prog-id=247 op=UNLOAD Jan 28 01:25:11.847000 audit[5629]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5618 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616336393164636261393563633635333338373035613966626437 Jan 28 01:25:11.847000 audit: BPF prog-id=249 op=LOAD Jan 28 01:25:11.847000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5618 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462616336393164636261393563633635333338373035613966626437 Jan 28 01:25:11.876491 containerd[2559]: time="2026-01-28T01:25:11.876467720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7z2xz,Uid:24495090-d436-41d1-b444-e32503155f3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c\"" Jan 28 01:25:11.885760 containerd[2559]: time="2026-01-28T01:25:11.885736828Z" level=info msg="CreateContainer within sandbox \"dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:25:11.903369 containerd[2559]: time="2026-01-28T01:25:11.902992368Z" level=info msg="Container 713f28285be87927723dfad060010d572f3fe15d771113081645969769b45228: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:25:11.914598 containerd[2559]: time="2026-01-28T01:25:11.914574567Z" level=info msg="CreateContainer within sandbox \"dbac691dcba95cc65338705a9fbd7b931c4ef4f50b86918fd3b3ab7d69d81c2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"713f28285be87927723dfad060010d572f3fe15d771113081645969769b45228\"" Jan 28 01:25:11.915094 containerd[2559]: time="2026-01-28T01:25:11.915072818Z" level=info msg="StartContainer for \"713f28285be87927723dfad060010d572f3fe15d771113081645969769b45228\"" Jan 28 01:25:11.916542 containerd[2559]: time="2026-01-28T01:25:11.916495218Z" level=info msg="connecting to shim 713f28285be87927723dfad060010d572f3fe15d771113081645969769b45228" address="unix:///run/containerd/s/f3085c3e14191566e04dd0e11ad36e0c0b9eb9ce9f0fabda0c76a7be5e6218c6" protocol=ttrpc version=3 Jan 28 01:25:11.934004 systemd[1]: Started cri-containerd-713f28285be87927723dfad060010d572f3fe15d771113081645969769b45228.scope - libcontainer container 713f28285be87927723dfad060010d572f3fe15d771113081645969769b45228. 
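Decoded the same way, the runc PROCTITLE records above read runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/..., cut off mid-path because audit caps the proctitle length. The SYSCALL numbers they carry map to a small set of calls on this x86_64 guest (arch=c000003e): 321 is bpf(2), which pairs with the BPF prog-id LOAD/UNLOAD events, 3 is close(2), and the NETFILTER_CFG records use 46, sendmsg(2), for the netlink transaction. A lookup sketch limited to the numbers that actually appear in this log:

    # x86_64 syscall numbers (audit arch 0xc000003e) seen in these records.
    # A complete mapping would come from the kernel's syscall_64.tbl.
    X86_64_SYSCALLS = {
        3: "close",
        46: "sendmsg",   # netlink socket writes behind the NETFILTER_CFG records
        321: "bpf",      # pairs with the BPF prog-id LOAD/UNLOAD events
    }

    def syscall_name(arch: str, nr: int) -> str:
        if arch.lower() != "c000003e":          # only x86_64 appears in this log
            return f"arch {arch} syscall {nr}"
        return X86_64_SYSCALLS.get(nr, f"syscall {nr}")

    print(syscall_name("c000003e", 321))        # bpf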
Jan 28 01:25:11.940000 audit: BPF prog-id=250 op=LOAD Jan 28 01:25:11.940000 audit: BPF prog-id=251 op=LOAD Jan 28 01:25:11.940000 audit[5655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5618 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731336632383238356265383739323737323364666164303630303130 Jan 28 01:25:11.940000 audit: BPF prog-id=251 op=UNLOAD Jan 28 01:25:11.940000 audit[5655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5618 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731336632383238356265383739323737323364666164303630303130 Jan 28 01:25:11.940000 audit: BPF prog-id=252 op=LOAD Jan 28 01:25:11.940000 audit[5655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5618 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731336632383238356265383739323737323364666164303630303130 Jan 28 01:25:11.940000 audit: BPF prog-id=253 op=LOAD Jan 28 01:25:11.940000 audit[5655]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5618 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731336632383238356265383739323737323364666164303630303130 Jan 28 01:25:11.940000 audit: BPF prog-id=253 op=UNLOAD Jan 28 01:25:11.940000 audit[5655]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5618 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731336632383238356265383739323737323364666164303630303130 Jan 28 01:25:11.940000 audit: BPF prog-id=252 op=UNLOAD Jan 28 01:25:11.940000 audit[5655]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5618 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731336632383238356265383739323737323364666164303630303130 Jan 28 01:25:11.940000 audit: BPF prog-id=254 op=LOAD Jan 28 01:25:11.940000 audit[5655]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5618 pid=5655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:11.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731336632383238356265383739323737323364666164303630303130 Jan 28 01:25:11.956102 containerd[2559]: time="2026-01-28T01:25:11.956052842Z" level=info msg="StartContainer for \"713f28285be87927723dfad060010d572f3fe15d771113081645969769b45228\" returns successfully" Jan 28 01:25:12.660992 containerd[2559]: time="2026-01-28T01:25:12.660938354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zl6z4,Uid:82792313-f307-46ab-a25e-04cde981d984,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:12.741800 systemd-networkd[2200]: cali4eca33595c8: Link UP Jan 28 01:25:12.742564 systemd-networkd[2200]: cali4eca33595c8: Gained carrier Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.693 [INFO][5688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0 goldmane-7c778bb748- calico-system 82792313-f307-46ab-a25e-04cde981d984 805 0 2026-01-28 01:24:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4593.0.0-n-2270f1152e goldmane-7c778bb748-zl6z4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4eca33595c8 [] [] }} ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Namespace="calico-system" Pod="goldmane-7c778bb748-zl6z4" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.693 [INFO][5688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Namespace="calico-system" Pod="goldmane-7c778bb748-zl6z4" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.713 [INFO][5700] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" HandleID="k8s-pod-network.dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Workload="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" Jan 28 
01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.713 [INFO][5700] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" HandleID="k8s-pod-network.dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Workload="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593.0.0-n-2270f1152e", "pod":"goldmane-7c778bb748-zl6z4", "timestamp":"2026-01-28 01:25:12.713233492 +0000 UTC"}, Hostname:"ci-4593.0.0-n-2270f1152e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.713 [INFO][5700] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.713 [INFO][5700] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.713 [INFO][5700] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593.0.0-n-2270f1152e' Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.717 [INFO][5700] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.720 [INFO][5700] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.723 [INFO][5700] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.724 [INFO][5700] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.726 [INFO][5700] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.726 [INFO][5700] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.728 [INFO][5700] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4 Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.734 [INFO][5700] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.738 [INFO][5700] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.133/26] block=192.168.125.128/26 handle="k8s-pod-network.dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.738 [INFO][5700] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.133/26] 
handle="k8s-pod-network.dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.738 [INFO][5700] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:25:12.758039 containerd[2559]: 2026-01-28 01:25:12.738 [INFO][5700] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.133/26] IPv6=[] ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" HandleID="k8s-pod-network.dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Workload="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" Jan 28 01:25:12.758613 containerd[2559]: 2026-01-28 01:25:12.739 [INFO][5688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Namespace="calico-system" Pod="goldmane-7c778bb748-zl6z4" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"82792313-f307-46ab-a25e-04cde981d984", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"", Pod:"goldmane-7c778bb748-zl6z4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4eca33595c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:12.758613 containerd[2559]: 2026-01-28 01:25:12.739 [INFO][5688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.133/32] ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Namespace="calico-system" Pod="goldmane-7c778bb748-zl6z4" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" Jan 28 01:25:12.758613 containerd[2559]: 2026-01-28 01:25:12.739 [INFO][5688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4eca33595c8 ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Namespace="calico-system" Pod="goldmane-7c778bb748-zl6z4" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" Jan 28 01:25:12.758613 containerd[2559]: 2026-01-28 01:25:12.742 [INFO][5688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Namespace="calico-system" Pod="goldmane-7c778bb748-zl6z4" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" 
Jan 28 01:25:12.758613 containerd[2559]: 2026-01-28 01:25:12.743 [INFO][5688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Namespace="calico-system" Pod="goldmane-7c778bb748-zl6z4" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"82792313-f307-46ab-a25e-04cde981d984", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4", Pod:"goldmane-7c778bb748-zl6z4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4eca33595c8", MAC:"ea:97:3e:66:5c:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:12.758613 containerd[2559]: 2026-01-28 01:25:12.752 [INFO][5688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" Namespace="calico-system" Pod="goldmane-7c778bb748-zl6z4" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-goldmane--7c778bb748--zl6z4-eth0" Jan 28 01:25:12.774000 audit[5716]: NETFILTER_CFG table=filter:131 family=2 entries=48 op=nft_register_chain pid=5716 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:12.774000 audit[5716]: SYSCALL arch=c000003e syscall=46 success=yes exit=26336 a0=3 a1=7ffc82cb2fc0 a2=0 a3=7ffc82cb2fac items=0 ppid=5220 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.774000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:12.794063 containerd[2559]: time="2026-01-28T01:25:12.793842758Z" level=info msg="connecting to shim dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4" address="unix:///run/containerd/s/b4823eab5e12afda3011c0fe9c262bd4e5f2c60b0e6a3ad3ab5c7700cd9261a0" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:12.830201 systemd[1]: Started cri-containerd-dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4.scope - libcontainer container dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4. 
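Every IPAM decision in this section draws on the same node affinity block, 192.168.125.128/26, which is why coredns-66bc5c9577-7z2xz and goldmane-7c778bb748-zl6z4 receive adjacent addresses (.132 and .133). The block arithmetic, checked with the standard ipaddress module:

    import ipaddress

    # The affinity block this node holds, per the ipam.go messages above.
    block = ipaddress.ip_network("192.168.125.128/26")

    print(block.num_addresses)       # 64, i.e. 192.168.125.128 through .191
    print(block.broadcast_address)   # 192.168.125.191

    # The pod addresses assigned in this log all sit inside that block.
    for ip in ("192.168.125.132", "192.168.125.133"):
        print(ip, ipaddress.ip_address(ip) in block)   # True, True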
Jan 28 01:25:12.835188 kubelet[4040]: I0128 01:25:12.835135 4040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7z2xz" podStartSLOduration=37.835119384 podStartE2EDuration="37.835119384s" podCreationTimestamp="2026-01-28 01:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:25:12.817828487 +0000 UTC m=+44.260920370" watchObservedRunningTime="2026-01-28 01:25:12.835119384 +0000 UTC m=+44.278211266" Jan 28 01:25:12.847000 audit[5750]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:12.847000 audit[5750]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffde3ecd7b0 a2=0 a3=7ffde3ecd79c items=0 ppid=4194 pid=5750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:12.851000 audit[5750]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5750 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:12.851000 audit[5750]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffde3ecd7b0 a2=0 a3=0 items=0 ppid=4194 pid=5750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:12.854000 audit: BPF prog-id=255 op=LOAD Jan 28 01:25:12.854000 audit: BPF prog-id=256 op=LOAD Jan 28 01:25:12.854000 audit[5736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5724 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623837646366633665356465666630376135323935366661326134 Jan 28 01:25:12.854000 audit: BPF prog-id=256 op=UNLOAD Jan 28 01:25:12.854000 audit[5736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5724 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623837646366633665356465666630376135323935366661326134 Jan 28 01:25:12.855000 audit: BPF prog-id=257 op=LOAD Jan 28 01:25:12.855000 audit[5736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5724 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623837646366633665356465666630376135323935366661326134 Jan 28 01:25:12.855000 audit: BPF prog-id=258 op=LOAD Jan 28 01:25:12.855000 audit[5736]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5724 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623837646366633665356465666630376135323935366661326134 Jan 28 01:25:12.855000 audit: BPF prog-id=258 op=UNLOAD Jan 28 01:25:12.855000 audit[5736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5724 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623837646366633665356465666630376135323935366661326134 Jan 28 01:25:12.855000 audit: BPF prog-id=257 op=UNLOAD Jan 28 01:25:12.855000 audit[5736]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5724 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623837646366633665356465666630376135323935366661326134 Jan 28 01:25:12.855000 audit: BPF prog-id=259 op=LOAD Jan 28 01:25:12.855000 audit[5736]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5724 pid=5736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:12.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462623837646366633665356465666630376135323935366661326134 Jan 28 01:25:12.887493 containerd[2559]: time="2026-01-28T01:25:12.887459310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zl6z4,Uid:82792313-f307-46ab-a25e-04cde981d984,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbb87dcfc6e5deff07a52956fa2a4e93bd0d514d6300c0a5b9f25a804564bca4\"" Jan 28 01:25:12.888597 containerd[2559]: 
time="2026-01-28T01:25:12.888486683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:25:12.934515 systemd-networkd[2200]: cali1c20b6849ca: Gained IPv6LL Jan 28 01:25:13.168254 containerd[2559]: time="2026-01-28T01:25:13.168217291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:13.170581 containerd[2559]: time="2026-01-28T01:25:13.170558777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:25:13.170632 containerd[2559]: time="2026-01-28T01:25:13.170574573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:13.170800 kubelet[4040]: E0128 01:25:13.170772 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:25:13.170874 kubelet[4040]: E0128 01:25:13.170806 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:25:13.170905 kubelet[4040]: E0128 01:25:13.170891 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zl6z4_calico-system(82792313-f307-46ab-a25e-04cde981d984): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:13.170980 kubelet[4040]: E0128 01:25:13.170944 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:25:13.663196 containerd[2559]: time="2026-01-28T01:25:13.663166098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dj8xj,Uid:abcd1edf-2334-4947-82f0-ae69c3925ca7,Namespace:kube-system,Attempt:0,}" Jan 28 01:25:13.668261 containerd[2559]: time="2026-01-28T01:25:13.668238281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9c776fd4-lg5zf,Uid:846cab91-e0a1-4344-ab2b-9358f550d758,Namespace:calico-system,Attempt:0,}" Jan 28 01:25:13.672666 containerd[2559]: time="2026-01-28T01:25:13.672612317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848997c984-fdfk6,Uid:f2bad97b-be33-4a5d-908e-d2048d5b9f4f,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:25:13.804281 systemd-networkd[2200]: cali5912826e880: Link UP Jan 28 01:25:13.804957 systemd-networkd[2200]: cali5912826e880: Gained carrier Jan 28 01:25:13.806043 kubelet[4040]: E0128 01:25:13.805231 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.728 [INFO][5766] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0 coredns-66bc5c9577- kube-system abcd1edf-2334-4947-82f0-ae69c3925ca7 800 0 2026-01-28 01:24:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4593.0.0-n-2270f1152e coredns-66bc5c9577-dj8xj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5912826e880 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Namespace="kube-system" Pod="coredns-66bc5c9577-dj8xj" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.728 [INFO][5766] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Namespace="kube-system" Pod="coredns-66bc5c9577-dj8xj" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.763 [INFO][5805] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" HandleID="k8s-pod-network.447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Workload="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.763 [INFO][5805] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" HandleID="k8s-pod-network.447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Workload="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4593.0.0-n-2270f1152e", "pod":"coredns-66bc5c9577-dj8xj", "timestamp":"2026-01-28 01:25:13.763493826 +0000 UTC"}, Hostname:"ci-4593.0.0-n-2270f1152e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.764 [INFO][5805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.764 [INFO][5805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.764 [INFO][5805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593.0.0-n-2270f1152e' Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.770 [INFO][5805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.775 [INFO][5805] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.781 [INFO][5805] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.783 [INFO][5805] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.785 [INFO][5805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.785 [INFO][5805] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.786 [INFO][5805] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9 Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.790 [INFO][5805] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.796 [INFO][5805] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.134/26] block=192.168.125.128/26 handle="k8s-pod-network.447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.797 [INFO][5805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.134/26] handle="k8s-pod-network.447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.797 [INFO][5805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:25:13.820129 containerd[2559]: 2026-01-28 01:25:13.797 [INFO][5805] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.134/26] IPv6=[] ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" HandleID="k8s-pod-network.447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Workload="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" Jan 28 01:25:13.822010 containerd[2559]: 2026-01-28 01:25:13.799 [INFO][5766] cni-plugin/k8s.go 418: Populated endpoint ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Namespace="kube-system" Pod="coredns-66bc5c9577-dj8xj" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"abcd1edf-2334-4947-82f0-ae69c3925ca7", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"", Pod:"coredns-66bc5c9577-dj8xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5912826e880", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:13.822010 containerd[2559]: 2026-01-28 01:25:13.799 [INFO][5766] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.134/32] ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Namespace="kube-system" Pod="coredns-66bc5c9577-dj8xj" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" Jan 28 01:25:13.822010 containerd[2559]: 2026-01-28 01:25:13.800 [INFO][5766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5912826e880 ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Namespace="kube-system" Pod="coredns-66bc5c9577-dj8xj" 
WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" Jan 28 01:25:13.822010 containerd[2559]: 2026-01-28 01:25:13.801 [INFO][5766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Namespace="kube-system" Pod="coredns-66bc5c9577-dj8xj" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" Jan 28 01:25:13.822010 containerd[2559]: 2026-01-28 01:25:13.802 [INFO][5766] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Namespace="kube-system" Pod="coredns-66bc5c9577-dj8xj" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"abcd1edf-2334-4947-82f0-ae69c3925ca7", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9", Pod:"coredns-66bc5c9577-dj8xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5912826e880", MAC:"1a:59:94:02:16:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:13.822204 containerd[2559]: 2026-01-28 01:25:13.817 [INFO][5766] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" Namespace="kube-system" Pod="coredns-66bc5c9577-dj8xj" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-coredns--66bc5c9577--dj8xj-eth0" Jan 28 01:25:13.836000 audit[5834]: NETFILTER_CFG table=filter:134 family=2 entries=40 op=nft_register_chain pid=5834 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:13.836000 
audit[5834]: SYSCALL arch=c000003e syscall=46 success=yes exit=20312 a0=3 a1=7ffda9789a40 a2=0 a3=7ffda9789a2c items=0 ppid=5220 pid=5834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.836000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:13.865923 containerd[2559]: time="2026-01-28T01:25:13.865833567Z" level=info msg="connecting to shim 447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9" address="unix:///run/containerd/s/93c660ccbdb807b420b4e37503d3258ffa8cf09592078da1062a047c125790bd" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:13.876000 audit[5856]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:13.876000 audit[5856]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffe0a9aa30 a2=0 a3=7fffe0a9aa1c items=0 ppid=4194 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:13.881000 audit[5856]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:13.881000 audit[5856]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fffe0a9aa30 a2=0 a3=7fffe0a9aa1c items=0 ppid=4194 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:13.896082 systemd[1]: Started cri-containerd-447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9.scope - libcontainer container 447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9. 
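The pod_startup_latency_tracker entry a few lines above (podStartSLOduration=37.835119384s for coredns-66bc5c9577-7z2xz) lines up with the distance from the pod's creation timestamp to the watch-observed running time; the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image pull latency was subtracted. A quick check of that arithmetic, using the timestamps from that line (datetime keeps only microseconds of the nanosecond stamps):

    from datetime import datetime, timezone

    created = datetime(2026, 1, 28, 1, 24, 35, tzinfo=timezone.utc)
    running = datetime(2026, 1, 28, 1, 25, 12, 835119, tzinfo=timezone.utc)

    # Matches podStartSLOduration=37.835119384 to microsecond precision.
    print((running - created).total_seconds())   # 37.835119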
Jan 28 01:25:13.907000 audit: BPF prog-id=260 op=LOAD Jan 28 01:25:13.908000 audit: BPF prog-id=261 op=LOAD Jan 28 01:25:13.908000 audit[5857]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5844 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376334653336613334383933316564343864613339386634313334 Jan 28 01:25:13.908000 audit: BPF prog-id=261 op=UNLOAD Jan 28 01:25:13.908000 audit[5857]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5844 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376334653336613334383933316564343864613339386634313334 Jan 28 01:25:13.908000 audit: BPF prog-id=262 op=LOAD Jan 28 01:25:13.908000 audit[5857]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5844 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376334653336613334383933316564343864613339386634313334 Jan 28 01:25:13.908000 audit: BPF prog-id=263 op=LOAD Jan 28 01:25:13.908000 audit[5857]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5844 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376334653336613334383933316564343864613339386634313334 Jan 28 01:25:13.908000 audit: BPF prog-id=263 op=UNLOAD Jan 28 01:25:13.908000 audit[5857]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5844 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376334653336613334383933316564343864613339386634313334 Jan 28 01:25:13.908000 audit: BPF prog-id=262 op=UNLOAD Jan 28 01:25:13.908000 audit[5857]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5844 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376334653336613334383933316564343864613339386634313334 Jan 28 01:25:13.908000 audit: BPF prog-id=264 op=LOAD Jan 28 01:25:13.908000 audit[5857]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5844 pid=5857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3434376334653336613334383933316564343864613339386634313334 Jan 28 01:25:13.926190 systemd-networkd[2200]: cali98d912e8bba: Link UP Jan 28 01:25:13.927528 systemd-networkd[2200]: cali98d912e8bba: Gained carrier Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.737 [INFO][5778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0 calico-kube-controllers-d9c776fd4- calico-system 846cab91-e0a1-4344-ab2b-9358f550d758 801 0 2026-01-28 01:24:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d9c776fd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4593.0.0-n-2270f1152e calico-kube-controllers-d9c776fd4-lg5zf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali98d912e8bba [] [] }} ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Namespace="calico-system" Pod="calico-kube-controllers-d9c776fd4-lg5zf" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.737 [INFO][5778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Namespace="calico-system" Pod="calico-kube-controllers-d9c776fd4-lg5zf" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.777 [INFO][5811] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" HandleID="k8s-pod-network.8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Workload="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.778 [INFO][5811] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" 
HandleID="k8s-pod-network.8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Workload="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593.0.0-n-2270f1152e", "pod":"calico-kube-controllers-d9c776fd4-lg5zf", "timestamp":"2026-01-28 01:25:13.777803465 +0000 UTC"}, Hostname:"ci-4593.0.0-n-2270f1152e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.778 [INFO][5811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.797 [INFO][5811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.797 [INFO][5811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593.0.0-n-2270f1152e' Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.871 [INFO][5811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.879 [INFO][5811] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.884 [INFO][5811] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.887 [INFO][5811] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.895 [INFO][5811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.895 [INFO][5811] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.898 [INFO][5811] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992 Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.906 [INFO][5811] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.917 [INFO][5811] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.135/26] block=192.168.125.128/26 handle="k8s-pod-network.8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.917 [INFO][5811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.135/26] handle="k8s-pod-network.8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.917 [INFO][5811] ipam/ipam_plugin.go 398: Released host-wide 
IPAM lock. Jan 28 01:25:13.948798 containerd[2559]: 2026-01-28 01:25:13.917 [INFO][5811] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.135/26] IPv6=[] ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" HandleID="k8s-pod-network.8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Workload="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" Jan 28 01:25:13.949336 containerd[2559]: 2026-01-28 01:25:13.921 [INFO][5778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Namespace="calico-system" Pod="calico-kube-controllers-d9c776fd4-lg5zf" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0", GenerateName:"calico-kube-controllers-d9c776fd4-", Namespace:"calico-system", SelfLink:"", UID:"846cab91-e0a1-4344-ab2b-9358f550d758", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9c776fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"", Pod:"calico-kube-controllers-d9c776fd4-lg5zf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98d912e8bba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:13.949336 containerd[2559]: 2026-01-28 01:25:13.921 [INFO][5778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.135/32] ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Namespace="calico-system" Pod="calico-kube-controllers-d9c776fd4-lg5zf" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" Jan 28 01:25:13.949336 containerd[2559]: 2026-01-28 01:25:13.921 [INFO][5778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98d912e8bba ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Namespace="calico-system" Pod="calico-kube-controllers-d9c776fd4-lg5zf" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" Jan 28 01:25:13.949336 containerd[2559]: 2026-01-28 01:25:13.926 [INFO][5778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Namespace="calico-system" Pod="calico-kube-controllers-d9c776fd4-lg5zf" 
WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" Jan 28 01:25:13.949336 containerd[2559]: 2026-01-28 01:25:13.928 [INFO][5778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Namespace="calico-system" Pod="calico-kube-controllers-d9c776fd4-lg5zf" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0", GenerateName:"calico-kube-controllers-d9c776fd4-", Namespace:"calico-system", SelfLink:"", UID:"846cab91-e0a1-4344-ab2b-9358f550d758", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9c776fd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992", Pod:"calico-kube-controllers-d9c776fd4-lg5zf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98d912e8bba", MAC:"f6:63:84:ab:c0:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:13.949336 containerd[2559]: 2026-01-28 01:25:13.944 [INFO][5778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" Namespace="calico-system" Pod="calico-kube-controllers-d9c776fd4-lg5zf" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--kube--controllers--d9c776fd4--lg5zf-eth0" Jan 28 01:25:13.955375 containerd[2559]: time="2026-01-28T01:25:13.955339943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dj8xj,Uid:abcd1edf-2334-4947-82f0-ae69c3925ca7,Namespace:kube-system,Attempt:0,} returns sandbox id \"447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9\"" Jan 28 01:25:13.962000 audit[5894]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=5894 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:13.962000 audit[5894]: SYSCALL arch=c000003e syscall=46 success=yes exit=23108 a0=3 a1=7ffc965a7c90 a2=0 a3=7ffc965a7c7c items=0 ppid=5220 pid=5894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:13.962000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:13.966523 containerd[2559]: time="2026-01-28T01:25:13.966495061Z" level=info msg="CreateContainer within sandbox \"447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:25:13.984712 containerd[2559]: time="2026-01-28T01:25:13.984669015Z" level=info msg="Container e25898aee4f7b52587b20f07538b9352105b3190957d4c88c1467d1829775234: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:25:13.990367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852547033.mount: Deactivated successfully. Jan 28 01:25:14.002333 containerd[2559]: time="2026-01-28T01:25:14.002304331Z" level=info msg="CreateContainer within sandbox \"447c4e36a348931ed48da398f413466613c116dc6b0307c4107a6b93482141f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e25898aee4f7b52587b20f07538b9352105b3190957d4c88c1467d1829775234\"" Jan 28 01:25:14.002872 containerd[2559]: time="2026-01-28T01:25:14.002791444Z" level=info msg="StartContainer for \"e25898aee4f7b52587b20f07538b9352105b3190957d4c88c1467d1829775234\"" Jan 28 01:25:14.006588 containerd[2559]: time="2026-01-28T01:25:14.006543017Z" level=info msg="connecting to shim e25898aee4f7b52587b20f07538b9352105b3190957d4c88c1467d1829775234" address="unix:///run/containerd/s/93c660ccbdb807b420b4e37503d3258ffa8cf09592078da1062a047c125790bd" protocol=ttrpc version=3 Jan 28 01:25:14.020706 systemd-networkd[2200]: califfcc29c6e5d: Link UP Jan 28 01:25:14.022323 systemd-networkd[2200]: califfcc29c6e5d: Gained carrier Jan 28 01:25:14.033650 containerd[2559]: time="2026-01-28T01:25:14.033626915Z" level=info msg="connecting to shim 8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992" address="unix:///run/containerd/s/f888b8e734263704eda92f3261ad5b96f209f2b03e0fc88e6429a3c9c09f90c5" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.750 [INFO][5791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0 calico-apiserver-848997c984- calico-apiserver f2bad97b-be33-4a5d-908e-d2048d5b9f4f 802 0 2026-01-28 01:24:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:848997c984 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4593.0.0-n-2270f1152e calico-apiserver-848997c984-fdfk6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califfcc29c6e5d [] [] }} ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-fdfk6" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.752 [INFO][5791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-fdfk6" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.786 [INFO][5816] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" HandleID="k8s-pod-network.4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Workload="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.786 [INFO][5816] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" HandleID="k8s-pod-network.4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Workload="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4593.0.0-n-2270f1152e", "pod":"calico-apiserver-848997c984-fdfk6", "timestamp":"2026-01-28 01:25:13.786741239 +0000 UTC"}, Hostname:"ci-4593.0.0-n-2270f1152e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.786 [INFO][5816] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.918 [INFO][5816] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.918 [INFO][5816] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593.0.0-n-2270f1152e' Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.971 [INFO][5816] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.979 [INFO][5816] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.986 [INFO][5816] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.989 [INFO][5816] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.992 [INFO][5816] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.993 [INFO][5816] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:13.996 [INFO][5816] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:14.000 [INFO][5816] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:14.014 [INFO][5816] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.136/26] block=192.168.125.128/26 
handle="k8s-pod-network.4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:14.014 [INFO][5816] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.136/26] handle="k8s-pod-network.4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" host="ci-4593.0.0-n-2270f1152e" Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:14.014 [INFO][5816] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:25:14.048950 containerd[2559]: 2026-01-28 01:25:14.014 [INFO][5816] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.136/26] IPv6=[] ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" HandleID="k8s-pod-network.4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Workload="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" Jan 28 01:25:14.050252 containerd[2559]: 2026-01-28 01:25:14.016 [INFO][5791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-fdfk6" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0", GenerateName:"calico-apiserver-848997c984-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2bad97b-be33-4a5d-908e-d2048d5b9f4f", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848997c984", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"", Pod:"calico-apiserver-848997c984-fdfk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califfcc29c6e5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:14.050252 containerd[2559]: 2026-01-28 01:25:14.016 [INFO][5791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.136/32] ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-fdfk6" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" Jan 28 01:25:14.050252 containerd[2559]: 2026-01-28 01:25:14.016 [INFO][5791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfcc29c6e5d ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-fdfk6" 
WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" Jan 28 01:25:14.050252 containerd[2559]: 2026-01-28 01:25:14.024 [INFO][5791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-fdfk6" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" Jan 28 01:25:14.050252 containerd[2559]: 2026-01-28 01:25:14.024 [INFO][5791] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-fdfk6" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0", GenerateName:"calico-apiserver-848997c984-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2bad97b-be33-4a5d-908e-d2048d5b9f4f", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 24, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"848997c984", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593.0.0-n-2270f1152e", ContainerID:"4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac", Pod:"calico-apiserver-848997c984-fdfk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califfcc29c6e5d", MAC:"4e:ca:7a:1f:ae:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:25:14.050252 containerd[2559]: 2026-01-28 01:25:14.042 [INFO][5791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" Namespace="calico-apiserver" Pod="calico-apiserver-848997c984-fdfk6" WorkloadEndpoint="ci--4593.0.0--n--2270f1152e-k8s-calico--apiserver--848997c984--fdfk6-eth0" Jan 28 01:25:14.053053 systemd[1]: Started cri-containerd-e25898aee4f7b52587b20f07538b9352105b3190957d4c88c1467d1829775234.scope - libcontainer container e25898aee4f7b52587b20f07538b9352105b3190957d4c88c1467d1829775234. 
Jan 28 01:25:14.068000 audit: BPF prog-id=265 op=LOAD Jan 28 01:25:14.068000 audit: BPF prog-id=266 op=LOAD Jan 28 01:25:14.068000 audit[5898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5844 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.068000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532353839386165653466376235323538376232306630373533386239 Jan 28 01:25:14.069000 audit: BPF prog-id=266 op=UNLOAD Jan 28 01:25:14.069000 audit[5898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5844 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532353839386165653466376235323538376232306630373533386239 Jan 28 01:25:14.069000 audit: BPF prog-id=267 op=LOAD Jan 28 01:25:14.069000 audit[5898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5844 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532353839386165653466376235323538376232306630373533386239 Jan 28 01:25:14.070000 audit: BPF prog-id=268 op=LOAD Jan 28 01:25:14.070000 audit[5898]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5844 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532353839386165653466376235323538376232306630373533386239 Jan 28 01:25:14.070000 audit: BPF prog-id=268 op=UNLOAD Jan 28 01:25:14.070000 audit[5898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5844 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532353839386165653466376235323538376232306630373533386239 Jan 28 01:25:14.070000 audit: BPF prog-id=267 op=UNLOAD Jan 28 01:25:14.070000 audit[5898]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5844 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532353839386165653466376235323538376232306630373533386239 Jan 28 01:25:14.070000 audit: BPF prog-id=269 op=LOAD Jan 28 01:25:14.070000 audit[5898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5844 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532353839386165653466376235323538376232306630373533386239 Jan 28 01:25:14.084101 systemd[1]: Started cri-containerd-8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992.scope - libcontainer container 8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992. Jan 28 01:25:14.085930 systemd-networkd[2200]: cali4eca33595c8: Gained IPv6LL Jan 28 01:25:14.090000 audit[5961]: NETFILTER_CFG table=filter:138 family=2 entries=53 op=nft_register_chain pid=5961 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 01:25:14.090000 audit[5961]: SYSCALL arch=c000003e syscall=46 success=yes exit=26608 a0=3 a1=7fffadad1170 a2=0 a3=7fffadad115c items=0 ppid=5220 pid=5961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.090000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 01:25:14.098336 containerd[2559]: time="2026-01-28T01:25:14.098299260Z" level=info msg="connecting to shim 4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac" address="unix:///run/containerd/s/618bf466ffecde47ee4c59472eff54719dca98c922a7b894081db55d57d1c13c" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:25:14.102004 containerd[2559]: time="2026-01-28T01:25:14.101984575Z" level=info msg="StartContainer for \"e25898aee4f7b52587b20f07538b9352105b3190957d4c88c1467d1829775234\" returns successfully" Jan 28 01:25:14.116000 audit: BPF prog-id=270 op=LOAD Jan 28 01:25:14.116000 audit: BPF prog-id=271 op=LOAD Jan 28 01:25:14.116000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5912 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861323337346532616635303538643661643631626438636134373461 
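The audit PROCTITLE values in these records are hex-encoded because the captured command lines contain NUL separators between argv entries; decoded, they are the runc and iptables-nft-restore invocations behind the BPF and NETFILTER_CFG events. A minimal decoding sketch in Python, using the iptables-nft-restore proctitle from the entry above as the sample value:

    # Decode an audit PROCTITLE field: hex string -> NUL-separated argv.
    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

    sample = ("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
              "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E"
              "74657276616C003530303030")
    print(decode_proctitle(sample))
    # -> ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10',
    #     '--wait-interval', '50000']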
Jan 28 01:25:14.117000 audit: BPF prog-id=271 op=UNLOAD Jan 28 01:25:14.117000 audit[5934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5912 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861323337346532616635303538643661643631626438636134373461 Jan 28 01:25:14.117000 audit: BPF prog-id=272 op=LOAD Jan 28 01:25:14.117000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5912 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861323337346532616635303538643661643631626438636134373461 Jan 28 01:25:14.117000 audit: BPF prog-id=273 op=LOAD Jan 28 01:25:14.117000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5912 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861323337346532616635303538643661643631626438636134373461 Jan 28 01:25:14.117000 audit: BPF prog-id=273 op=UNLOAD Jan 28 01:25:14.117000 audit[5934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5912 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861323337346532616635303538643661643631626438636134373461 Jan 28 01:25:14.117000 audit: BPF prog-id=272 op=UNLOAD Jan 28 01:25:14.117000 audit[5934]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5912 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861323337346532616635303538643661643631626438636134373461 Jan 28 01:25:14.117000 audit: BPF prog-id=274 op=LOAD Jan 28 01:25:14.117000 audit[5934]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 
a2=98 a3=0 items=0 ppid=5912 pid=5934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861323337346532616635303538643661643631626438636134373461 Jan 28 01:25:14.129156 systemd[1]: Started cri-containerd-4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac.scope - libcontainer container 4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac. Jan 28 01:25:14.143000 audit: BPF prog-id=275 op=LOAD Jan 28 01:25:14.144000 audit: BPF prog-id=276 op=LOAD Jan 28 01:25:14.144000 audit[5992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5981 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437393362323765653630653933633436643234393534373032316564 Jan 28 01:25:14.144000 audit: BPF prog-id=276 op=UNLOAD Jan 28 01:25:14.144000 audit[5992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5981 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437393362323765653630653933633436643234393534373032316564 Jan 28 01:25:14.144000 audit: BPF prog-id=277 op=LOAD Jan 28 01:25:14.144000 audit[5992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5981 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437393362323765653630653933633436643234393534373032316564 Jan 28 01:25:14.144000 audit: BPF prog-id=278 op=LOAD Jan 28 01:25:14.144000 audit[5992]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5981 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437393362323765653630653933633436643234393534373032316564 Jan 28 01:25:14.144000 audit: BPF 
prog-id=278 op=UNLOAD Jan 28 01:25:14.144000 audit[5992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5981 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437393362323765653630653933633436643234393534373032316564 Jan 28 01:25:14.144000 audit: BPF prog-id=277 op=UNLOAD Jan 28 01:25:14.144000 audit[5992]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5981 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437393362323765653630653933633436643234393534373032316564 Jan 28 01:25:14.144000 audit: BPF prog-id=279 op=LOAD Jan 28 01:25:14.144000 audit[5992]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5981 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437393362323765653630653933633436643234393534373032316564 Jan 28 01:25:14.179475 containerd[2559]: time="2026-01-28T01:25:14.179351073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9c776fd4-lg5zf,Uid:846cab91-e0a1-4344-ab2b-9358f550d758,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a2374e2af5058d6ad61bd8ca474ab7a9d442de494974525a43bcae2d7c16992\"" Jan 28 01:25:14.186225 containerd[2559]: time="2026-01-28T01:25:14.186192715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:25:14.189139 containerd[2559]: time="2026-01-28T01:25:14.189102796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-848997c984-fdfk6,Uid:f2bad97b-be33-4a5d-908e-d2048d5b9f4f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4793b27ee60e93c46d249547021ed6086b5b5272651ea4e247516a250135aaac\"" Jan 28 01:25:14.430289 containerd[2559]: time="2026-01-28T01:25:14.430181855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:14.432461 containerd[2559]: time="2026-01-28T01:25:14.432421204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:25:14.432536 containerd[2559]: time="2026-01-28T01:25:14.432491071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, 
bytes read=0" Jan 28 01:25:14.432657 kubelet[4040]: E0128 01:25:14.432592 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:25:14.432657 kubelet[4040]: E0128 01:25:14.432630 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:25:14.432949 kubelet[4040]: E0128 01:25:14.432816 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d9c776fd4-lg5zf_calico-system(846cab91-e0a1-4344-ab2b-9358f550d758): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:14.432949 kubelet[4040]: E0128 01:25:14.432849 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:25:14.433312 containerd[2559]: time="2026-01-28T01:25:14.433283223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:25:14.676403 containerd[2559]: time="2026-01-28T01:25:14.676377556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:14.679146 containerd[2559]: time="2026-01-28T01:25:14.679108679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:25:14.679207 containerd[2559]: time="2026-01-28T01:25:14.679171396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:14.679294 kubelet[4040]: E0128 01:25:14.679259 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:14.679343 kubelet[4040]: E0128 01:25:14.679300 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:14.679372 kubelet[4040]: E0128 01:25:14.679359 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-848997c984-fdfk6_calico-apiserver(f2bad97b-be33-4a5d-908e-d2048d5b9f4f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:14.679404 kubelet[4040]: E0128 01:25:14.679391 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:25:14.807872 kubelet[4040]: E0128 01:25:14.807556 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:25:14.812646 kubelet[4040]: E0128 01:25:14.812607 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:25:14.812970 kubelet[4040]: E0128 01:25:14.812936 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:25:14.911000 audit[6028]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=6028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:14.913936 kernel: kauditd_printk_skb: 227 callbacks suppressed Jan 28 01:25:14.913997 kernel: audit: type=1325 audit(1769563514.911:747): table=filter:139 family=2 entries=14 op=nft_register_rule pid=6028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:14.911000 audit[6028]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4e68fe00 a2=0 a3=7ffc4e68fdec items=0 ppid=4194 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:14.921313 
kernel: audit: type=1300 audit(1769563514.911:747): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4e68fe00 a2=0 a3=7ffc4e68fdec items=0 ppid=4194 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.921417 kernel: audit: type=1327 audit(1769563514.911:747): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:14.942000 audit[6028]: NETFILTER_CFG table=nat:140 family=2 entries=56 op=nft_register_chain pid=6028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:14.942000 audit[6028]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc4e68fe00 a2=0 a3=7ffc4e68fdec items=0 ppid=4194 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.947879 kernel: audit: type=1325 audit(1769563514.942:748): table=nat:140 family=2 entries=56 op=nft_register_chain pid=6028 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:25:14.947933 kernel: audit: type=1300 audit(1769563514.942:748): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc4e68fe00 a2=0 a3=7ffc4e68fdec items=0 ppid=4194 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:25:14.942000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:14.950703 kernel: audit: type=1327 audit(1769563514.942:748): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:25:15.174017 systemd-networkd[2200]: cali5912826e880: Gained IPv6LL Jan 28 01:25:15.366006 systemd-networkd[2200]: califfcc29c6e5d: Gained IPv6LL Jan 28 01:25:15.366234 systemd-networkd[2200]: cali98d912e8bba: Gained IPv6LL Jan 28 01:25:15.814083 kubelet[4040]: E0128 01:25:15.813968 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:25:15.815581 kubelet[4040]: E0128 01:25:15.814171 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:25:15.824765 kubelet[4040]: I0128 01:25:15.824713 4040 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-66bc5c9577-dj8xj" podStartSLOduration=40.82470149 podStartE2EDuration="40.82470149s" podCreationTimestamp="2026-01-28 01:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:25:14.875751565 +0000 UTC m=+46.318843446" watchObservedRunningTime="2026-01-28 01:25:15.82470149 +0000 UTC m=+47.267793371" Jan 28 01:25:17.658073 containerd[2559]: time="2026-01-28T01:25:17.658028782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:25:17.907849 containerd[2559]: time="2026-01-28T01:25:17.907817055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:17.910198 containerd[2559]: time="2026-01-28T01:25:17.910114591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:25:17.910198 containerd[2559]: time="2026-01-28T01:25:17.910177377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:17.910422 kubelet[4040]: E0128 01:25:17.910277 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:25:17.910422 kubelet[4040]: E0128 01:25:17.910311 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:25:17.910422 kubelet[4040]: E0128 01:25:17.910394 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-84b945c4fd-89dj9_calico-system(a2440718-b2ad-4d13-a123-aa3f90357d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:17.911869 containerd[2559]: time="2026-01-28T01:25:17.911829965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:25:18.155758 containerd[2559]: time="2026-01-28T01:25:18.155648694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:18.158959 containerd[2559]: time="2026-01-28T01:25:18.158927860Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:25:18.159039 containerd[2559]: time="2026-01-28T01:25:18.158939034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:18.159166 kubelet[4040]: E0128 01:25:18.159124 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:25:18.159221 kubelet[4040]: E0128 01:25:18.159172 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:25:18.159247 kubelet[4040]: E0128 01:25:18.159233 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84b945c4fd-89dj9_calico-system(a2440718-b2ad-4d13-a123-aa3f90357d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:18.159302 kubelet[4040]: E0128 01:25:18.159274 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:25:22.658455 containerd[2559]: time="2026-01-28T01:25:22.658329564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:25:22.899045 containerd[2559]: time="2026-01-28T01:25:22.899009014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:22.901337 containerd[2559]: time="2026-01-28T01:25:22.901315373Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:25:22.901379 containerd[2559]: time="2026-01-28T01:25:22.901326987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:22.901487 kubelet[4040]: E0128 01:25:22.901460 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:25:22.901734 kubelet[4040]: E0128 01:25:22.901492 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:25:22.901734 kubelet[4040]: E0128 01:25:22.901582 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:22.902692 containerd[2559]: time="2026-01-28T01:25:22.902654058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:25:23.151754 containerd[2559]: time="2026-01-28T01:25:23.151720648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:23.154265 containerd[2559]: time="2026-01-28T01:25:23.154177520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:25:23.154265 containerd[2559]: time="2026-01-28T01:25:23.154204118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:23.154371 kubelet[4040]: E0128 01:25:23.154344 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:25:23.154446 kubelet[4040]: E0128 01:25:23.154380 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:25:23.154492 kubelet[4040]: E0128 01:25:23.154449 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:23.154517 kubelet[4040]: E0128 01:25:23.154492 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:25:24.658670 containerd[2559]: time="2026-01-28T01:25:24.658624858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:25:24.916000 containerd[2559]: time="2026-01-28T01:25:24.915893379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:24.918819 containerd[2559]: time="2026-01-28T01:25:24.918782597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:25:24.918936 containerd[2559]: time="2026-01-28T01:25:24.918799504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:24.919009 kubelet[4040]: E0128 01:25:24.918962 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:24.919264 kubelet[4040]: E0128 01:25:24.919012 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:24.919264 kubelet[4040]: E0128 01:25:24.919079 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-848997c984-zqhg5_calico-apiserver(d99b8c9d-ad76-485a-94c4-e2c93263797f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:24.919264 kubelet[4040]: E0128 01:25:24.919110 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:25:28.658737 containerd[2559]: time="2026-01-28T01:25:28.658594769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:25:28.908210 containerd[2559]: time="2026-01-28T01:25:28.908170183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:28.910667 containerd[2559]: time="2026-01-28T01:25:28.910584626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:25:28.910667 containerd[2559]: time="2026-01-28T01:25:28.910608164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:28.910995 kubelet[4040]: E0128 01:25:28.910718 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:25:28.910995 kubelet[4040]: E0128 01:25:28.910757 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:25:28.910995 kubelet[4040]: E0128 01:25:28.910826 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zl6z4_calico-system(82792313-f307-46ab-a25e-04cde981d984): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:28.910995 kubelet[4040]: E0128 01:25:28.910870 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:25:29.658274 containerd[2559]: time="2026-01-28T01:25:29.658242071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:25:29.916243 containerd[2559]: time="2026-01-28T01:25:29.916145034Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:29.918434 containerd[2559]: time="2026-01-28T01:25:29.918404136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:25:29.918532 containerd[2559]: time="2026-01-28T01:25:29.918469076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:29.918612 kubelet[4040]: E0128 01:25:29.918576 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:29.918811 kubelet[4040]: E0128 01:25:29.918619 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:29.918811 kubelet[4040]: E0128 01:25:29.918697 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-848997c984-fdfk6_calico-apiserver(f2bad97b-be33-4a5d-908e-d2048d5b9f4f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:29.918811 kubelet[4040]: E0128 01:25:29.918730 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" 
podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:25:30.659100 containerd[2559]: time="2026-01-28T01:25:30.659066271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:25:30.660486 kubelet[4040]: E0128 01:25:30.660443 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:25:30.904268 containerd[2559]: time="2026-01-28T01:25:30.904231624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:30.906888 containerd[2559]: time="2026-01-28T01:25:30.906851442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:25:30.906970 containerd[2559]: time="2026-01-28T01:25:30.906875639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:30.907092 kubelet[4040]: E0128 01:25:30.907037 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:25:30.907144 kubelet[4040]: E0128 01:25:30.907091 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:25:30.907219 kubelet[4040]: E0128 01:25:30.907194 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d9c776fd4-lg5zf_calico-system(846cab91-e0a1-4344-ab2b-9358f550d758): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:30.907301 kubelet[4040]: E0128 01:25:30.907236 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:25:37.658581 kubelet[4040]: E0128 01:25:37.658526 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:25:38.659683 kubelet[4040]: E0128 01:25:38.659639 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:25:41.658883 kubelet[4040]: E0128 01:25:41.658663 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:25:42.658662 kubelet[4040]: E0128 01:25:42.658511 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:25:44.659953 kubelet[4040]: E0128 01:25:44.659704 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:25:45.658514 containerd[2559]: time="2026-01-28T01:25:45.658464593Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:25:45.910136 containerd[2559]: time="2026-01-28T01:25:45.910027648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:45.912789 containerd[2559]: time="2026-01-28T01:25:45.912704802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:25:45.912935 containerd[2559]: time="2026-01-28T01:25:45.912742901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:45.913084 kubelet[4040]: E0128 01:25:45.912965 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:25:45.913084 kubelet[4040]: E0128 01:25:45.913014 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:25:45.913444 kubelet[4040]: E0128 01:25:45.913121 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-84b945c4fd-89dj9_calico-system(a2440718-b2ad-4d13-a123-aa3f90357d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:45.914745 containerd[2559]: time="2026-01-28T01:25:45.914690006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:25:46.153390 containerd[2559]: time="2026-01-28T01:25:46.153245450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:46.155741 containerd[2559]: time="2026-01-28T01:25:46.155619219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:25:46.155741 containerd[2559]: time="2026-01-28T01:25:46.155716043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:46.156521 kubelet[4040]: E0128 01:25:46.156021 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:25:46.156521 kubelet[4040]: E0128 01:25:46.156067 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:25:46.156521 
kubelet[4040]: E0128 01:25:46.156177 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84b945c4fd-89dj9_calico-system(a2440718-b2ad-4d13-a123-aa3f90357d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:46.156664 kubelet[4040]: E0128 01:25:46.156573 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:25:50.659946 containerd[2559]: time="2026-01-28T01:25:50.659901238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:25:50.925661 containerd[2559]: time="2026-01-28T01:25:50.925428211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:50.928705 containerd[2559]: time="2026-01-28T01:25:50.928631142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:25:50.928829 containerd[2559]: time="2026-01-28T01:25:50.928729964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:50.928925 kubelet[4040]: E0128 01:25:50.928886 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:25:50.929199 kubelet[4040]: E0128 01:25:50.928927 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:25:50.929912 kubelet[4040]: E0128 01:25:50.929018 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:50.930609 containerd[2559]: time="2026-01-28T01:25:50.930575753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:25:51.176436 containerd[2559]: time="2026-01-28T01:25:51.176138706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:51.178581 containerd[2559]: time="2026-01-28T01:25:51.178540522Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:25:51.178651 containerd[2559]: time="2026-01-28T01:25:51.178623228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:51.178809 kubelet[4040]: E0128 01:25:51.178773 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:25:51.178883 kubelet[4040]: E0128 01:25:51.178819 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:25:51.178946 kubelet[4040]: E0128 01:25:51.178927 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:51.179086 kubelet[4040]: E0128 01:25:51.179045 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:25:52.658352 containerd[2559]: time="2026-01-28T01:25:52.657854586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:25:52.900142 containerd[2559]: time="2026-01-28T01:25:52.900104332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:52.906474 containerd[2559]: time="2026-01-28T01:25:52.906447030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:25:52.906557 containerd[2559]: time="2026-01-28T01:25:52.906505795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:52.906639 kubelet[4040]: E0128 01:25:52.906603 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:52.906922 kubelet[4040]: E0128 01:25:52.906644 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:52.906922 kubelet[4040]: E0128 01:25:52.906707 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-848997c984-zqhg5_calico-apiserver(d99b8c9d-ad76-485a-94c4-e2c93263797f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:52.906922 kubelet[4040]: E0128 01:25:52.906736 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:25:53.658495 containerd[2559]: time="2026-01-28T01:25:53.658435513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:25:53.975984 containerd[2559]: time="2026-01-28T01:25:53.975945178Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:53.978776 containerd[2559]: time="2026-01-28T01:25:53.978739601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:25:53.978876 containerd[2559]: time="2026-01-28T01:25:53.978818115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:53.979068 kubelet[4040]: E0128 01:25:53.978941 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:53.979068 kubelet[4040]: E0128 01:25:53.978979 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:25:53.981341 kubelet[4040]: E0128 01:25:53.981318 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-848997c984-fdfk6_calico-apiserver(f2bad97b-be33-4a5d-908e-d2048d5b9f4f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:53.981950 kubelet[4040]: E0128 01:25:53.981927 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:25:54.659775 containerd[2559]: time="2026-01-28T01:25:54.659643167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:25:54.906772 containerd[2559]: time="2026-01-28T01:25:54.906739265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:54.913069 containerd[2559]: time="2026-01-28T01:25:54.912997999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:25:54.913069 containerd[2559]: time="2026-01-28T01:25:54.913054151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:54.913298 kubelet[4040]: E0128 01:25:54.913157 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:25:54.913298 kubelet[4040]: E0128 01:25:54.913190 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:25:54.913298 kubelet[4040]: E0128 01:25:54.913249 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zl6z4_calico-system(82792313-f307-46ab-a25e-04cde981d984): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:54.913298 kubelet[4040]: E0128 01:25:54.913281 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:25:56.660050 containerd[2559]: time="2026-01-28T01:25:56.660010245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:25:56.913678 containerd[2559]: time="2026-01-28T01:25:56.913382691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:25:56.915612 containerd[2559]: time="2026-01-28T01:25:56.915574736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:25:56.915812 containerd[2559]: time="2026-01-28T01:25:56.915649055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:25:56.916148 kubelet[4040]: E0128 01:25:56.915993 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:25:56.916148 kubelet[4040]: E0128 01:25:56.916134 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:25:56.916839 kubelet[4040]: E0128 01:25:56.916407 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d9c776fd4-lg5zf_calico-system(846cab91-e0a1-4344-ab2b-9358f550d758): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:25:56.916839 kubelet[4040]: E0128 01:25:56.916535 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:26:00.662312 kubelet[4040]: E0128 01:26:00.662266 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:26:05.658373 kubelet[4040]: E0128 01:26:05.658270 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:26:05.659885 kubelet[4040]: E0128 01:26:05.659815 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:26:07.658141 kubelet[4040]: E0128 01:26:07.658013 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:26:07.658903 kubelet[4040]: E0128 01:26:07.658013 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:26:09.657644 kubelet[4040]: E0128 01:26:09.657506 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:26:12.506885 systemd[1]: Started sshd@7-10.200.8.14:22-10.200.16.10:50524.service - OpenSSH per-connection server daemon (10.200.16.10:50524). Jan 28 01:26:12.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.14:22-10.200.16.10:50524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:26:12.514895 kernel: audit: type=1130 audit(1769563572.506:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.14:22-10.200.16.10:50524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:13.051884 sshd[6114]: Accepted publickey for core from 10.200.16.10 port 50524 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:13.050000 audit[6114]: USER_ACCT pid=6114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.060405 kernel: audit: type=1101 audit(1769563573.050:750): pid=6114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.060420 sshd-session[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:13.058000 audit[6114]: CRED_ACQ pid=6114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.065908 systemd-logind[2535]: New session 11 of user core. Jan 28 01:26:13.075807 kernel: audit: type=1103 audit(1769563573.058:751): pid=6114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.075883 kernel: audit: type=1006 audit(1769563573.058:752): pid=6114 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 28 01:26:13.058000 audit[6114]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfdf50810 a2=3 a3=0 items=0 ppid=1 pid=6114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.082679 kernel: audit: type=1300 audit(1769563573.058:752): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfdf50810 a2=3 a3=0 items=0 ppid=1 pid=6114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:13.085037 systemd[1]: Started session-11.scope - Session 11 of User core. 
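
Editor's note on the SSH login trace above: each `kernel: audit: type=NNNN audit(EPOCH.MS:SERIAL)` line is kauditd echoing the same record that appears by name a few entries earlier (SERVICE_START, USER_ACCT, CRED_ACQ, SYSCALL, and so on), and the two copies can be matched by the trailing serial number, while the value inside the parentheses is an ordinary Unix epoch timestamp. The sketch below is a minimal, hypothetical helper (Python standard library only, not part of any tooling referenced in this log) for pulling those fields out of a captured line:

```python
import re
from datetime import datetime, timezone

# Matches the kauditd echo format seen in this log, e.g.
#   audit: type=1130 audit(1769563572.506:749): pid=1 uid=0 ...
AUDIT_RE = re.compile(r"type=(\d+) audit\((\d+)\.(\d+):(\d+)\)")

def parse_audit(line: str):
    """Return (record_type, utc_timestamp, serial) for one audit line, or None."""
    m = AUDIT_RE.search(line)
    if not m:
        return None
    rec_type, secs, millis, serial = m.groups()
    ts = datetime.fromtimestamp(int(secs), tz=timezone.utc).replace(
        microsecond=int(millis) * 1000
    )
    return int(rec_type), ts, int(serial)

sample = ("kernel: audit: type=1130 audit(1769563572.506:749): pid=1 uid=0 "
          "auid=4294967295 ses=4294967295")
print(parse_audit(sample))
# -> (1130, 2026-01-28 01:26:12.506 UTC, 749), i.e. the SERVICE_START record
#    for sshd@7-10.200.8.14:22-10.200.16.10:50524 seen just above.
```

Converting the epoch this way also confirms that the audit timestamps line up with the syslog prefixes in the surrounding entries.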
Jan 28 01:26:13.058000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:13.087000 audit[6114]: USER_START pid=6114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.096623 kernel: audit: type=1327 audit(1769563573.058:752): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:13.096687 kernel: audit: type=1105 audit(1769563573.087:753): pid=6114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.091000 audit[6118]: CRED_ACQ pid=6118 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.103287 kernel: audit: type=1103 audit(1769563573.091:754): pid=6118 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.457686 sshd[6118]: Connection closed by 10.200.16.10 port 50524 Jan 28 01:26:13.458725 sshd-session[6114]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:13.459000 audit[6114]: USER_END pid=6114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.465805 systemd[1]: sshd@7-10.200.8.14:22-10.200.16.10:50524.service: Deactivated successfully. Jan 28 01:26:13.468233 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:26:13.469883 kernel: audit: type=1106 audit(1769563573.459:755): pid=6114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.459000 audit[6114]: CRED_DISP pid=6114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.476651 systemd-logind[2535]: Session 11 logged out. Waiting for processes to exit. 
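
Editor's note on the pod errors that resume below: they have shifted from ErrImagePull to ImagePullBackOff, meaning the kubelet no longer retries the pull on every sync but waits an exponentially growing interval between attempts (commonly described as starting around 10 seconds and capped at 5 minutes; the exact constants are a kubelet implementation detail, so treat the numbers here as illustrative). This is a minimal sketch of that capped-exponential policy, not the kubelet's actual code:

```python
import itertools

def backoff_delays(base: float = 10.0, factor: float = 2.0, cap: float = 300.0):
    """Yield capped exponential back-off delays in seconds: 10, 20, 40, ..., 300, 300, ..."""
    delay = base
    while True:
        yield min(delay, cap)
        delay = min(delay * factor, cap)

# First few retry delays a repeatedly failing image pull would see under this policy:
print(list(itertools.islice(backoff_delays(), 8)))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
```

That shape is consistent with the log above: fresh `PullImage` attempts become progressively sparser, while the periodic pod sync keeps emitting "Back-off pulling image" messages in between.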
Jan 28 01:26:13.476873 kernel: audit: type=1104 audit(1769563573.459:756): pid=6114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:13.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.14:22-10.200.16.10:50524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:13.477709 systemd-logind[2535]: Removed session 11. Jan 28 01:26:13.658962 kubelet[4040]: E0128 01:26:13.658547 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:26:17.659613 kubelet[4040]: E0128 01:26:17.659547 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:26:18.568872 systemd[1]: Started sshd@8-10.200.8.14:22-10.200.16.10:50532.service - OpenSSH per-connection server daemon (10.200.16.10:50532). Jan 28 01:26:18.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.14:22-10.200.16.10:50532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:18.570290 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:26:18.570376 kernel: audit: type=1130 audit(1769563578.568:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.14:22-10.200.16.10:50532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:26:18.659871 kubelet[4040]: E0128 01:26:18.659804 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:26:19.109000 audit[6131]: USER_ACCT pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.110487 sshd[6131]: Accepted publickey for core from 10.200.16.10 port 50532 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:19.113767 sshd-session[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:19.112000 audit[6131]: CRED_ACQ pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.117142 kernel: audit: type=1101 audit(1769563579.109:759): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.117186 kernel: audit: type=1103 audit(1769563579.112:760): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.112000 audit[6131]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6d3a7770 a2=3 a3=0 items=0 ppid=1 pid=6131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:19.124508 kernel: audit: type=1006 audit(1769563579.112:761): pid=6131 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 28 01:26:19.124558 kernel: audit: type=1300 audit(1769563579.112:761): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6d3a7770 a2=3 a3=0 items=0 ppid=1 pid=6131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:19.112000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:19.128052 kernel: audit: type=1327 audit(1769563579.112:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:19.128002 systemd-logind[2535]: New session 12 of user core. Jan 28 01:26:19.135048 systemd[1]: Started session-12.scope - Session 12 of User core. 
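
Editor's note on the recurring pull failures: every attempt in this section dies the same way, with containerd reporting `fetch failed after status: 404 Not Found` from ghcr.io, which simply means the `v3.30.4` tag is not published under the `ghcr.io/flatcar/calico/...` repositories being requested. One way to confirm that independently of the kubelet is to ask the registry's OCI distribution endpoint for the manifest. The sketch below is a quick probe under stated assumptions (standard library only; anonymous pull access to a public repository; token endpoint and Accept media types as commonly used in ghcr.io's Docker-v2 auth flow), not a definitive diagnostic:

```python
import json
import urllib.error
import urllib.request

def manifest_status(repo: str, tag: str) -> int:
    """Return the HTTP status ghcr.io gives for <repo>:<tag>'s manifest (200 = exists, 404 = missing)."""
    # Anonymous pull token; endpoint/parameters assumed from ghcr.io's usual Docker-v2 auth flow.
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull&service=ghcr.io"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

# Should print 404 for the tag the kubelet keeps failing on, matching the log;
# substitute a tag known to be published to see 200 instead.
print(manifest_status("flatcar/calico/whisker", "v3.30.4"))
```

If the repository itself is private or absent, the token request can fail as well, so a non-404 error here points at access rather than a missing tag.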
Jan 28 01:26:19.136000 audit[6131]: USER_START pid=6131 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.136000 audit[6135]: CRED_ACQ pid=6135 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.146625 kernel: audit: type=1105 audit(1769563579.136:762): pid=6131 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.146673 kernel: audit: type=1103 audit(1769563579.136:763): pid=6135 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.485549 sshd[6135]: Connection closed by 10.200.16.10 port 50532 Jan 28 01:26:19.487005 sshd-session[6131]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:19.487000 audit[6131]: USER_END pid=6131 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.491687 systemd[1]: sshd@8-10.200.8.14:22-10.200.16.10:50532.service: Deactivated successfully. Jan 28 01:26:19.494940 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:26:19.487000 audit[6131]: CRED_DISP pid=6131 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.498912 systemd-logind[2535]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:26:19.499737 systemd-logind[2535]: Removed session 12. Jan 28 01:26:19.501497 kernel: audit: type=1106 audit(1769563579.487:764): pid=6131 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.501608 kernel: audit: type=1104 audit(1769563579.487:765): pid=6131 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:19.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.14:22-10.200.16.10:50532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:26:19.658467 kubelet[4040]: E0128 01:26:19.658429 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:26:20.658284 kubelet[4040]: E0128 01:26:20.657621 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:26:22.659011 kubelet[4040]: E0128 01:26:22.658813 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:26:24.597030 systemd[1]: Started sshd@9-10.200.8.14:22-10.200.16.10:47076.service - OpenSSH per-connection server daemon (10.200.16.10:47076). Jan 28 01:26:24.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.14:22-10.200.16.10:47076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:24.598982 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:26:24.600084 kernel: audit: type=1130 audit(1769563584.596:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.14:22-10.200.16.10:47076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:26:25.147000 audit[6148]: USER_ACCT pid=6148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.149203 sshd[6148]: Accepted publickey for core from 10.200.16.10 port 47076 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:25.152123 sshd-session[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:25.154879 kernel: audit: type=1101 audit(1769563585.147:768): pid=6148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.147000 audit[6148]: CRED_ACQ pid=6148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.165873 kernel: audit: type=1103 audit(1769563585.147:769): pid=6148 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.172299 kernel: audit: type=1006 audit(1769563585.147:770): pid=6148 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 28 01:26:25.171945 systemd-logind[2535]: New session 13 of user core. Jan 28 01:26:25.147000 audit[6148]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe415eaa60 a2=3 a3=0 items=0 ppid=1 pid=6148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:25.178256 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 01:26:25.180006 kernel: audit: type=1300 audit(1769563585.147:770): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe415eaa60 a2=3 a3=0 items=0 ppid=1 pid=6148 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:25.147000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:25.186890 kernel: audit: type=1327 audit(1769563585.147:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:25.183000 audit[6148]: USER_START pid=6148 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.195895 kernel: audit: type=1105 audit(1769563585.183:771): pid=6148 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.184000 audit[6152]: CRED_ACQ pid=6152 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.202893 kernel: audit: type=1103 audit(1769563585.184:772): pid=6152 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.500750 sshd[6152]: Connection closed by 10.200.16.10 port 47076 Jan 28 01:26:25.501989 sshd-session[6148]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:25.502000 audit[6148]: USER_END pid=6148 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.505449 systemd[1]: sshd@9-10.200.8.14:22-10.200.16.10:47076.service: Deactivated successfully. Jan 28 01:26:25.507546 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:26:25.510441 systemd-logind[2535]: Session 13 logged out. Waiting for processes to exit. 
Jan 28 01:26:25.510939 kernel: audit: type=1106 audit(1769563585.502:773): pid=6148 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.511022 kernel: audit: type=1104 audit(1769563585.502:774): pid=6148 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.502000 audit[6148]: CRED_DISP pid=6148 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:25.514227 systemd-logind[2535]: Removed session 13. Jan 28 01:26:25.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.14:22-10.200.16.10:47076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:25.612324 systemd[1]: Started sshd@10-10.200.8.14:22-10.200.16.10:47084.service - OpenSSH per-connection server daemon (10.200.16.10:47084). Jan 28 01:26:25.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.14:22-10.200.16.10:47084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:25.659424 kubelet[4040]: E0128 01:26:25.659137 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:26:26.157000 audit[6171]: USER_ACCT pid=6171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:26.158179 sshd[6171]: Accepted publickey for core from 10.200.16.10 port 47084 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:26.158000 audit[6171]: CRED_ACQ pid=6171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:26.158000 audit[6171]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee628da30 a2=3 a3=0 items=0 
ppid=1 pid=6171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:26.158000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:26.159935 sshd-session[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:26.165530 systemd-logind[2535]: New session 14 of user core. Jan 28 01:26:26.173044 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:26:26.175000 audit[6171]: USER_START pid=6171 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:26.177000 audit[6175]: CRED_ACQ pid=6175 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:26.529864 sshd[6175]: Connection closed by 10.200.16.10 port 47084 Jan 28 01:26:26.530261 sshd-session[6171]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:26.530000 audit[6171]: USER_END pid=6171 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:26.530000 audit[6171]: CRED_DISP pid=6171 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:26.533686 systemd[1]: sshd@10-10.200.8.14:22-10.200.16.10:47084.service: Deactivated successfully. Jan 28 01:26:26.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.14:22-10.200.16.10:47084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:26.535587 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:26:26.536381 systemd-logind[2535]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:26:26.537642 systemd-logind[2535]: Removed session 14. Jan 28 01:26:26.638440 systemd[1]: Started sshd@11-10.200.8.14:22-10.200.16.10:47100.service - OpenSSH per-connection server daemon (10.200.16.10:47100). Jan 28 01:26:26.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.14:22-10.200.16.10:47100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:26:27.174000 audit[6185]: USER_ACCT pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:27.175438 sshd[6185]: Accepted publickey for core from 10.200.16.10 port 47100 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:27.175000 audit[6185]: CRED_ACQ pid=6185 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:27.175000 audit[6185]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd31c4d360 a2=3 a3=0 items=0 ppid=1 pid=6185 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:27.175000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:27.177456 sshd-session[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:27.184016 systemd-logind[2535]: New session 15 of user core. Jan 28 01:26:27.188117 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:26:27.189000 audit[6185]: USER_START pid=6185 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:27.190000 audit[6189]: CRED_ACQ pid=6189 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:27.553786 sshd[6189]: Connection closed by 10.200.16.10 port 47100 Jan 28 01:26:27.554999 sshd-session[6185]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:27.555000 audit[6185]: USER_END pid=6185 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:27.555000 audit[6185]: CRED_DISP pid=6185 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:27.560188 systemd[1]: sshd@11-10.200.8.14:22-10.200.16.10:47100.service: Deactivated successfully. Jan 28 01:26:27.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.14:22-10.200.16.10:47100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:27.563034 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:26:27.565053 systemd-logind[2535]: Session 15 logged out. Waiting for processes to exit. 
Jan 28 01:26:27.568139 systemd-logind[2535]: Removed session 15. Jan 28 01:26:30.658256 kubelet[4040]: E0128 01:26:30.657604 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:26:31.657552 kubelet[4040]: E0128 01:26:31.657515 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:26:32.659674 kubelet[4040]: E0128 01:26:32.659630 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:26:32.661524 containerd[2559]: time="2026-01-28T01:26:32.661472121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:26:32.696315 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 28 01:26:32.696386 kernel: audit: type=1130 audit(1769563592.692:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.14:22-10.200.16.10:45588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:32.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.14:22-10.200.16.10:45588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:32.693418 systemd[1]: Started sshd@12-10.200.8.14:22-10.200.16.10:45588.service - OpenSSH per-connection server daemon (10.200.16.10:45588). 
Jan 28 01:26:32.902931 containerd[2559]: time="2026-01-28T01:26:32.902895615Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:32.905386 containerd[2559]: time="2026-01-28T01:26:32.905360253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:26:32.905439 containerd[2559]: time="2026-01-28T01:26:32.905419541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:32.905555 kubelet[4040]: E0128 01:26:32.905525 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:32.905599 kubelet[4040]: E0128 01:26:32.905560 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:26:32.905673 kubelet[4040]: E0128 01:26:32.905633 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:32.906408 containerd[2559]: time="2026-01-28T01:26:32.906246991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:26:33.162582 containerd[2559]: time="2026-01-28T01:26:33.162476828Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:33.164804 containerd[2559]: time="2026-01-28T01:26:33.164767266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:26:33.165318 containerd[2559]: time="2026-01-28T01:26:33.164841087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:33.165370 kubelet[4040]: E0128 01:26:33.165013 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:33.165370 kubelet[4040]: E0128 01:26:33.165071 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:26:33.165370 kubelet[4040]: E0128 01:26:33.165158 
4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-wlbng_calico-system(d9af8dd1-e2bd-462a-8a21-d0c27cf0950b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:33.165370 kubelet[4040]: E0128 01:26:33.165300 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:26:33.235000 audit[6206]: USER_ACCT pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.239823 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:33.240336 sshd[6206]: Accepted publickey for core from 10.200.16.10 port 45588 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:33.244993 kernel: audit: type=1101 audit(1769563593.235:795): pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.245057 kernel: audit: type=1103 audit(1769563593.238:796): pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.238000 audit[6206]: CRED_ACQ pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.247642 kernel: audit: type=1006 audit(1769563593.238:797): pid=6206 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 28 01:26:33.238000 audit[6206]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd0173470 a2=3 a3=0 items=0 ppid=1 pid=6206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:33.251235 kernel: audit: type=1300 audit(1769563593.238:797): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd0173470 a2=3 a3=0 items=0 ppid=1 pid=6206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:33.251710 systemd-logind[2535]: New session 16 of user core. Jan 28 01:26:33.238000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:33.256113 kernel: audit: type=1327 audit(1769563593.238:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:33.263966 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:26:33.274017 kernel: audit: type=1105 audit(1769563593.267:798): pid=6206 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.267000 audit[6206]: USER_START pid=6206 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.277000 audit[6210]: CRED_ACQ pid=6210 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.282962 kernel: audit: type=1103 audit(1769563593.277:799): pid=6210 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.579902 sshd[6210]: Connection closed by 10.200.16.10 port 45588 Jan 28 01:26:33.580531 sshd-session[6206]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:33.580000 audit[6206]: USER_END pid=6206 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.583631 systemd-logind[2535]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:26:33.585477 systemd[1]: sshd@12-10.200.8.14:22-10.200.16.10:45588.service: Deactivated successfully. Jan 28 01:26:33.587651 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:26:33.580000 audit[6206]: CRED_DISP pid=6206 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.589775 systemd-logind[2535]: Removed session 16. 
Jan 28 01:26:33.593031 kernel: audit: type=1106 audit(1769563593.580:800): pid=6206 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.593077 kernel: audit: type=1104 audit(1769563593.580:801): pid=6206 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:33.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.14:22-10.200.16.10:45588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:33.657770 kubelet[4040]: E0128 01:26:33.657747 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:26:36.660707 containerd[2559]: time="2026-01-28T01:26:36.660587669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:26:36.919532 containerd[2559]: time="2026-01-28T01:26:36.919311278Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:36.921770 containerd[2559]: time="2026-01-28T01:26:36.921667064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:26:36.921770 containerd[2559]: time="2026-01-28T01:26:36.921749763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:36.922059 kubelet[4040]: E0128 01:26:36.922031 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:36.922675 kubelet[4040]: E0128 01:26:36.922346 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:26:36.922675 kubelet[4040]: E0128 01:26:36.922429 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-84b945c4fd-89dj9_calico-system(a2440718-b2ad-4d13-a123-aa3f90357d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 
01:26:36.923917 containerd[2559]: time="2026-01-28T01:26:36.923412811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:26:37.177948 containerd[2559]: time="2026-01-28T01:26:37.176637467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:37.180731 containerd[2559]: time="2026-01-28T01:26:37.180617184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:26:37.180731 containerd[2559]: time="2026-01-28T01:26:37.180710107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:37.181028 kubelet[4040]: E0128 01:26:37.180998 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:37.181106 kubelet[4040]: E0128 01:26:37.181096 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:26:37.181277 kubelet[4040]: E0128 01:26:37.181204 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-84b945c4fd-89dj9_calico-system(a2440718-b2ad-4d13-a123-aa3f90357d80): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:37.181403 kubelet[4040]: E0128 01:26:37.181258 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:26:38.700294 systemd[1]: Started sshd@13-10.200.8.14:22-10.200.16.10:45590.service - OpenSSH per-connection server daemon (10.200.16.10:45590). Jan 28 01:26:38.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.14:22-10.200.16.10:45590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:26:38.702520 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:26:38.702594 kernel: audit: type=1130 audit(1769563598.699:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.14:22-10.200.16.10:45590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:39.283000 audit[6251]: USER_ACCT pid=6251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.285111 sshd[6251]: Accepted publickey for core from 10.200.16.10 port 45590 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:39.288424 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:39.286000 audit[6251]: CRED_ACQ pid=6251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.291625 kernel: audit: type=1101 audit(1769563599.283:804): pid=6251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.291680 kernel: audit: type=1103 audit(1769563599.286:805): pid=6251 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.295298 kernel: audit: type=1006 audit(1769563599.286:806): pid=6251 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 28 01:26:39.286000 audit[6251]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2b015cd0 a2=3 a3=0 items=0 ppid=1 pid=6251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:39.298949 kernel: audit: type=1300 audit(1769563599.286:806): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2b015cd0 a2=3 a3=0 items=0 ppid=1 pid=6251 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:39.286000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:39.300742 systemd-logind[2535]: New session 17 of user core. Jan 28 01:26:39.302408 kernel: audit: type=1327 audit(1769563599.286:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:39.310024 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 28 01:26:39.311000 audit[6251]: USER_START pid=6251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.317000 audit[6269]: CRED_ACQ pid=6269 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.321184 kernel: audit: type=1105 audit(1769563599.311:807): pid=6251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.321240 kernel: audit: type=1103 audit(1769563599.317:808): pid=6269 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.641796 sshd[6269]: Connection closed by 10.200.16.10 port 45590 Jan 28 01:26:39.642992 sshd-session[6251]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:39.643000 audit[6251]: USER_END pid=6251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.649156 systemd[1]: sshd@13-10.200.8.14:22-10.200.16.10:45590.service: Deactivated successfully. Jan 28 01:26:39.643000 audit[6251]: CRED_DISP pid=6251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.652628 kernel: audit: type=1106 audit(1769563599.643:809): pid=6251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.652682 kernel: audit: type=1104 audit(1769563599.643:810): pid=6251 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:39.653582 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:26:39.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.14:22-10.200.16.10:45590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:39.655667 systemd-logind[2535]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:26:39.656433 systemd-logind[2535]: Removed session 17. 
Jan 28 01:26:41.657509 containerd[2559]: time="2026-01-28T01:26:41.657404565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:41.904610 containerd[2559]: time="2026-01-28T01:26:41.904563479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:41.907100 containerd[2559]: time="2026-01-28T01:26:41.907059275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:41.907100 containerd[2559]: time="2026-01-28T01:26:41.907084655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:41.907269 kubelet[4040]: E0128 01:26:41.907232 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:41.907641 kubelet[4040]: E0128 01:26:41.907278 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:41.907641 kubelet[4040]: E0128 01:26:41.907342 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-848997c984-fdfk6_calico-apiserver(f2bad97b-be33-4a5d-908e-d2048d5b9f4f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:41.907641 kubelet[4040]: E0128 01:26:41.907372 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:26:43.658075 containerd[2559]: time="2026-01-28T01:26:43.658021459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:26:43.901096 containerd[2559]: time="2026-01-28T01:26:43.900958223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:43.903622 containerd[2559]: time="2026-01-28T01:26:43.903520922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:26:43.903622 containerd[2559]: time="2026-01-28T01:26:43.903599820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:43.903910 kubelet[4040]: E0128 01:26:43.903883 4040 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:43.904638 kubelet[4040]: E0128 01:26:43.904192 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:26:43.904638 kubelet[4040]: E0128 01:26:43.904282 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d9c776fd4-lg5zf_calico-system(846cab91-e0a1-4344-ab2b-9358f550d758): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:43.904750 kubelet[4040]: E0128 01:26:43.904315 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:26:44.658515 containerd[2559]: time="2026-01-28T01:26:44.658475970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:26:44.756695 systemd[1]: Started sshd@14-10.200.8.14:22-10.200.16.10:39944.service - OpenSSH per-connection server daemon (10.200.16.10:39944). Jan 28 01:26:44.758103 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:26:44.758137 kernel: audit: type=1130 audit(1769563604.755:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.14:22-10.200.16.10:39944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:44.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.14:22-10.200.16.10:39944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:26:44.903681 containerd[2559]: time="2026-01-28T01:26:44.902639902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:44.905502 containerd[2559]: time="2026-01-28T01:26:44.905468979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:26:44.906401 containerd[2559]: time="2026-01-28T01:26:44.905514735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:44.906569 kubelet[4040]: E0128 01:26:44.906534 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:44.906818 kubelet[4040]: E0128 01:26:44.906592 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:26:44.906881 containerd[2559]: time="2026-01-28T01:26:44.906848746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:26:44.907173 kubelet[4040]: E0128 01:26:44.907146 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-848997c984-zqhg5_calico-apiserver(d99b8c9d-ad76-485a-94c4-e2c93263797f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:44.907390 kubelet[4040]: E0128 01:26:44.907369 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:26:45.156268 containerd[2559]: time="2026-01-28T01:26:45.156236294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:26:45.160875 containerd[2559]: time="2026-01-28T01:26:45.159438506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:26:45.160875 containerd[2559]: time="2026-01-28T01:26:45.159504135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 01:26:45.160975 kubelet[4040]: E0128 01:26:45.159613 4040 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:45.160975 kubelet[4040]: E0128 01:26:45.159664 4040 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:26:45.160975 kubelet[4040]: E0128 01:26:45.159727 4040 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zl6z4_calico-system(82792313-f307-46ab-a25e-04cde981d984): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:26:45.160975 kubelet[4040]: E0128 01:26:45.159760 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:26:45.322000 audit[6288]: USER_ACCT pid=6288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.328913 kernel: audit: type=1101 audit(1769563605.322:813): pid=6288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.329747 sshd[6288]: Accepted publickey for core from 10.200.16.10 port 39944 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:45.331061 sshd-session[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:45.329000 audit[6288]: CRED_ACQ pid=6288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.340894 kernel: audit: type=1103 audit(1769563605.329:814): pid=6288 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.344437 systemd-logind[2535]: New session 18 of user core. Jan 28 01:26:45.350873 kernel: audit: type=1006 audit(1769563605.329:815): pid=6288 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 28 01:26:45.351209 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 28 01:26:45.329000 audit[6288]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecb1b46d0 a2=3 a3=0 items=0 ppid=1 pid=6288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:45.359599 kernel: audit: type=1300 audit(1769563605.329:815): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecb1b46d0 a2=3 a3=0 items=0 ppid=1 pid=6288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:45.329000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:45.367772 kernel: audit: type=1327 audit(1769563605.329:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:45.367816 kernel: audit: type=1105 audit(1769563605.357:816): pid=6288 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.357000 audit[6288]: USER_START pid=6288 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.359000 audit[6292]: CRED_ACQ pid=6292 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.372881 kernel: audit: type=1103 audit(1769563605.359:817): pid=6292 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.658051 kubelet[4040]: E0128 01:26:45.658015 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:26:45.703005 sshd[6292]: Connection closed by 10.200.16.10 port 39944 Jan 28 01:26:45.703397 sshd-session[6288]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:45.704000 audit[6288]: USER_END pid=6288 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.707532 systemd[1]: sshd@14-10.200.8.14:22-10.200.16.10:39944.service: Deactivated successfully. Jan 28 01:26:45.709307 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:26:45.712444 systemd-logind[2535]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:26:45.704000 audit[6288]: CRED_DISP pid=6288 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.716011 systemd-logind[2535]: Removed session 18. Jan 28 01:26:45.718381 kernel: audit: type=1106 audit(1769563605.704:818): pid=6288 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.718455 kernel: audit: type=1104 audit(1769563605.704:819): pid=6288 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:45.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.14:22-10.200.16.10:39944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:45.818100 systemd[1]: Started sshd@15-10.200.8.14:22-10.200.16.10:39958.service - OpenSSH per-connection server daemon (10.200.16.10:39958). Jan 28 01:26:45.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.14:22-10.200.16.10:39958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:26:46.351000 audit[6304]: USER_ACCT pid=6304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:46.352523 sshd[6304]: Accepted publickey for core from 10.200.16.10 port 39958 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:46.352000 audit[6304]: CRED_ACQ pid=6304 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:46.352000 audit[6304]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6b428df0 a2=3 a3=0 items=0 ppid=1 pid=6304 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:46.352000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:46.353979 sshd-session[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:46.357756 systemd-logind[2535]: New session 19 of user core. Jan 28 01:26:46.364759 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:26:46.369000 audit[6304]: USER_START pid=6304 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:46.371000 audit[6308]: CRED_ACQ pid=6308 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:46.800338 sshd[6308]: Connection closed by 10.200.16.10 port 39958 Jan 28 01:26:46.800807 sshd-session[6304]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:46.801000 audit[6304]: USER_END pid=6304 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:46.801000 audit[6304]: CRED_DISP pid=6304 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:46.805200 systemd[1]: sshd@15-10.200.8.14:22-10.200.16.10:39958.service: Deactivated successfully. Jan 28 01:26:46.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.14:22-10.200.16.10:39958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:46.808099 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:26:46.809599 systemd-logind[2535]: Session 19 logged out. Waiting for processes to exit. 
Jan 28 01:26:46.811064 systemd-logind[2535]: Removed session 19. Jan 28 01:26:46.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.14:22-10.200.16.10:39962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:46.914107 systemd[1]: Started sshd@16-10.200.8.14:22-10.200.16.10:39962.service - OpenSSH per-connection server daemon (10.200.16.10:39962). Jan 28 01:26:47.472000 audit[6320]: USER_ACCT pid=6320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:47.473345 sshd[6320]: Accepted publickey for core from 10.200.16.10 port 39962 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:47.473000 audit[6320]: CRED_ACQ pid=6320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:47.473000 audit[6320]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd16830470 a2=3 a3=0 items=0 ppid=1 pid=6320 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:47.473000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:47.476037 sshd-session[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:47.481457 systemd-logind[2535]: New session 20 of user core. Jan 28 01:26:47.489031 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 28 01:26:47.490000 audit[6320]: USER_START pid=6320 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:47.491000 audit[6324]: CRED_ACQ pid=6324 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:48.060000 audit[6334]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=6334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:48.060000 audit[6334]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc64163580 a2=0 a3=7ffc6416356c items=0 ppid=4194 pid=6334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:48.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:48.065000 audit[6334]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=6334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:48.065000 audit[6334]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc64163580 a2=0 a3=0 items=0 ppid=4194 pid=6334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:48.065000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:48.167324 sshd[6324]: Connection closed by 10.200.16.10 port 39962 Jan 28 01:26:48.167620 sshd-session[6320]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:48.168000 audit[6320]: USER_END pid=6320 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:48.168000 audit[6320]: CRED_DISP pid=6320 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:48.172690 systemd[1]: sshd@16-10.200.8.14:22-10.200.16.10:39962.service: Deactivated successfully. Jan 28 01:26:48.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.14:22-10.200.16.10:39962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:48.173356 systemd-logind[2535]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:26:48.177938 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:26:48.181578 systemd-logind[2535]: Removed session 20. 
Jan 28 01:26:48.282264 systemd[1]: Started sshd@17-10.200.8.14:22-10.200.16.10:39974.service - OpenSSH per-connection server daemon (10.200.16.10:39974). Jan 28 01:26:48.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.14:22-10.200.16.10:39974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:48.740983 update_engine[2538]: I20260128 01:26:48.740937 2538 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 01:26:48.740983 update_engine[2538]: I20260128 01:26:48.740980 2538 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 01:26:48.741409 update_engine[2538]: I20260128 01:26:48.741103 2538 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 01:26:48.741439 update_engine[2538]: I20260128 01:26:48.741422 2538 omaha_request_params.cc:62] Current group set to alpha Jan 28 01:26:48.742039 update_engine[2538]: I20260128 01:26:48.741529 2538 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 01:26:48.742039 update_engine[2538]: I20260128 01:26:48.741540 2538 update_attempter.cc:643] Scheduling an action processor start. Jan 28 01:26:48.742039 update_engine[2538]: I20260128 01:26:48.741559 2538 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:26:48.742039 update_engine[2538]: I20260128 01:26:48.741591 2538 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 01:26:48.742039 update_engine[2538]: I20260128 01:26:48.741638 2538 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:26:48.742039 update_engine[2538]: I20260128 01:26:48.741643 2538 omaha_request_action.cc:272] Request: Jan 28 01:26:48.742039 update_engine[2538]: Jan 28 01:26:48.742039 update_engine[2538]: Jan 28 01:26:48.742039 update_engine[2538]: Jan 28 01:26:48.742039 update_engine[2538]: Jan 28 01:26:48.742039 update_engine[2538]: Jan 28 01:26:48.742039 update_engine[2538]: Jan 28 01:26:48.742039 update_engine[2538]: Jan 28 01:26:48.742039 update_engine[2538]: Jan 28 01:26:48.742039 update_engine[2538]: I20260128 01:26:48.741649 2538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:26:48.743874 locksmithd[2631]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 01:26:48.744531 update_engine[2538]: I20260128 01:26:48.744503 2538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:26:48.745115 update_engine[2538]: I20260128 01:26:48.745095 2538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:26:48.772824 update_engine[2538]: E20260128 01:26:48.772791 2538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:26:48.773044 update_engine[2538]: I20260128 01:26:48.772986 2538 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 01:26:48.844000 audit[6339]: USER_ACCT pid=6339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:48.845603 sshd[6339]: Accepted publickey for core from 10.200.16.10 port 39974 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:48.845000 audit[6339]: CRED_ACQ pid=6339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:48.845000 audit[6339]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd73c4aa0 a2=3 a3=0 items=0 ppid=1 pid=6339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:48.845000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:48.846990 sshd-session[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:48.850755 systemd-logind[2535]: New session 21 of user core. Jan 28 01:26:48.855020 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 28 01:26:48.856000 audit[6339]: USER_START pid=6339 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:48.857000 audit[6343]: CRED_ACQ pid=6343 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:49.081000 audit[6351]: NETFILTER_CFG table=filter:143 family=2 entries=38 op=nft_register_rule pid=6351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:49.081000 audit[6351]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffceb6ae1d0 a2=0 a3=7ffceb6ae1bc items=0 ppid=4194 pid=6351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:49.081000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:49.087000 audit[6351]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=6351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:49.087000 audit[6351]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffceb6ae1d0 a2=0 a3=0 items=0 ppid=4194 pid=6351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:49.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:49.288770 sshd[6343]: Connection closed by 10.200.16.10 port 39974 Jan 28 01:26:49.289742 sshd-session[6339]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:49.289000 audit[6339]: USER_END pid=6339 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:49.290000 audit[6339]: CRED_DISP pid=6339 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:49.292360 systemd[1]: sshd@17-10.200.8.14:22-10.200.16.10:39974.service: Deactivated successfully. Jan 28 01:26:49.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.14:22-10.200.16.10:39974 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:49.294030 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:26:49.295762 systemd-logind[2535]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:26:49.296584 systemd-logind[2535]: Removed session 21. 
Jan 28 01:26:49.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.14:22-10.200.16.10:39976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:49.400640 systemd[1]: Started sshd@18-10.200.8.14:22-10.200.16.10:39976.service - OpenSSH per-connection server daemon (10.200.16.10:39976). Jan 28 01:26:49.952903 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 28 01:26:49.953008 kernel: audit: type=1101 audit(1769563609.947:853): pid=6356 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:49.947000 audit[6356]: USER_ACCT pid=6356 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:49.952666 sshd-session[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:49.955157 kernel: audit: type=1103 audit(1769563609.948:854): pid=6356 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:49.948000 audit[6356]: CRED_ACQ pid=6356 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:49.955587 sshd[6356]: Accepted publickey for core from 10.200.16.10 port 39976 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:49.963717 kernel: audit: type=1006 audit(1769563609.948:855): pid=6356 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 28 01:26:49.969065 kernel: audit: type=1300 audit(1769563609.948:855): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc881d18d0 a2=3 a3=0 items=0 ppid=1 pid=6356 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:49.948000 audit[6356]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc881d18d0 a2=3 a3=0 items=0 ppid=1 pid=6356 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:49.948000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:49.971895 kernel: audit: type=1327 audit(1769563609.948:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:49.973579 systemd-logind[2535]: New session 22 of user core. Jan 28 01:26:49.981060 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 28 01:26:49.984000 audit[6356]: USER_START pid=6356 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:49.999774 kernel: audit: type=1105 audit(1769563609.984:856): pid=6356 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:49.998000 audit[6360]: CRED_ACQ pid=6360 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:50.004875 kernel: audit: type=1103 audit(1769563609.998:857): pid=6360 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:50.358918 sshd[6360]: Connection closed by 10.200.16.10 port 39976 Jan 28 01:26:50.360905 sshd-session[6356]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:50.369977 kernel: audit: type=1106 audit(1769563610.361:858): pid=6356 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:50.361000 audit[6356]: USER_END pid=6356 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:50.361000 audit[6356]: CRED_DISP pid=6356 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:50.371408 systemd[1]: sshd@18-10.200.8.14:22-10.200.16.10:39976.service: Deactivated successfully. Jan 28 01:26:50.375908 kernel: audit: type=1104 audit(1769563610.361:859): pid=6356 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:50.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.14:22-10.200.16.10:39976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:50.378437 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 28 01:26:50.379898 kernel: audit: type=1131 audit(1769563610.372:860): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.14:22-10.200.16.10:39976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:50.380792 systemd-logind[2535]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:26:50.383645 systemd-logind[2535]: Removed session 22. Jan 28 01:26:51.659386 kubelet[4040]: E0128 01:26:51.659331 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:26:52.150000 audit[6372]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=6372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:52.150000 audit[6372]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7873fbb0 a2=0 a3=7ffc7873fb9c items=0 ppid=4194 pid=6372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:52.150000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:52.157000 audit[6372]: NETFILTER_CFG table=nat:146 family=2 entries=104 op=nft_register_chain pid=6372 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 01:26:52.157000 audit[6372]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc7873fbb0 a2=0 a3=7ffc7873fb9c items=0 ppid=4194 pid=6372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:52.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 01:26:55.480505 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 28 01:26:55.480601 kernel: audit: type=1130 audit(1769563615.471:863): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.14:22-10.200.16.10:48038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:55.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.14:22-10.200.16.10:48038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:26:55.472093 systemd[1]: Started sshd@19-10.200.8.14:22-10.200.16.10:48038.service - OpenSSH per-connection server daemon (10.200.16.10:48038). 
Jan 28 01:26:56.009000 audit[6374]: USER_ACCT pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.010715 sshd[6374]: Accepted publickey for core from 10.200.16.10 port 48038 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:26:56.014045 sshd-session[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:26:56.017898 kernel: audit: type=1101 audit(1769563616.009:864): pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.017969 kernel: audit: type=1103 audit(1769563616.012:865): pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.012000 audit[6374]: CRED_ACQ pid=6374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.023602 kernel: audit: type=1006 audit(1769563616.012:866): pid=6374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 28 01:26:56.012000 audit[6374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe06d9a570 a2=3 a3=0 items=0 ppid=1 pid=6374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:56.027316 kernel: audit: type=1300 audit(1769563616.012:866): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe06d9a570 a2=3 a3=0 items=0 ppid=1 pid=6374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:26:56.012000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:56.030568 kernel: audit: type=1327 audit(1769563616.012:866): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:26:56.030812 systemd-logind[2535]: New session 23 of user core. Jan 28 01:26:56.037044 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 28 01:26:56.038000 audit[6374]: USER_START pid=6374 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.038000 audit[6378]: CRED_ACQ pid=6378 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.047990 kernel: audit: type=1105 audit(1769563616.038:867): pid=6374 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.048027 kernel: audit: type=1103 audit(1769563616.038:868): pid=6378 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.431940 sshd[6378]: Connection closed by 10.200.16.10 port 48038 Jan 28 01:26:56.434144 sshd-session[6374]: pam_unix(sshd:session): session closed for user core Jan 28 01:26:56.443909 kernel: audit: type=1106 audit(1769563616.434:869): pid=6374 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.443988 kernel: audit: type=1104 audit(1769563616.434:870): pid=6374 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.434000 audit[6374]: USER_END pid=6374 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.434000 audit[6374]: CRED_DISP pid=6374 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:26:56.438148 systemd[1]: sshd@19-10.200.8.14:22-10.200.16.10:48038.service: Deactivated successfully. Jan 28 01:26:56.440642 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:26:56.445882 systemd-logind[2535]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:26:56.447293 systemd-logind[2535]: Removed session 23. Jan 28 01:26:56.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.14:22-10.200.16.10:48038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 01:26:56.667074 kubelet[4040]: E0128 01:26:56.665242 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f" Jan 28 01:26:56.667696 kubelet[4040]: E0128 01:26:56.667673 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984" Jan 28 01:26:56.668440 kubelet[4040]: E0128 01:26:56.668404 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b" Jan 28 01:26:57.657806 kubelet[4040]: E0128 01:26:57.657748 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758" Jan 28 01:26:58.658990 kubelet[4040]: E0128 01:26:58.658779 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f" Jan 28 01:26:58.743036 update_engine[2538]: I20260128 01:26:58.742992 2538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:26:58.743313 update_engine[2538]: I20260128 01:26:58.743065 2538 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:26:58.743386 update_engine[2538]: I20260128 01:26:58.743366 2538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 28 01:26:58.779833 update_engine[2538]: E20260128 01:26:58.779798 2538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 28 01:26:58.779961 update_engine[2538]: I20260128 01:26:58.779893 2538 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 01:27:01.546677 systemd[1]: Started sshd@20-10.200.8.14:22-10.200.16.10:45986.service - OpenSSH per-connection server daemon (10.200.16.10:45986). Jan 28 01:27:01.554302 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:27:01.554377 kernel: audit: type=1130 audit(1769563621.546:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.14:22-10.200.16.10:45986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:27:01.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.14:22-10.200.16.10:45986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:27:02.095000 audit[6389]: USER_ACCT pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.100882 sshd-session[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:27:02.101693 sshd[6389]: Accepted publickey for core from 10.200.16.10 port 45986 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY Jan 28 01:27:02.098000 audit[6389]: CRED_ACQ pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.104268 kernel: audit: type=1101 audit(1769563622.095:873): pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.104331 kernel: audit: type=1103 audit(1769563622.098:874): pid=6389 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.107920 kernel: audit: type=1006 audit(1769563622.098:875): pid=6389 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 28 01:27:02.098000 audit[6389]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd7a7cc30 a2=3 a3=0 items=0 ppid=1 pid=6389 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:27:02.112058 kernel: audit: type=1300 audit(1769563622.098:875): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd7a7cc30 a2=3 a3=0 items=0 ppid=1 pid=6389 
auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 01:27:02.113332 systemd-logind[2535]: New session 24 of user core. Jan 28 01:27:02.116006 kernel: audit: type=1327 audit(1769563622.098:875): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:27:02.098000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 01:27:02.122997 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 01:27:02.134176 kernel: audit: type=1105 audit(1769563622.124:876): pid=6389 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.124000 audit[6389]: USER_START pid=6389 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.133000 audit[6393]: CRED_ACQ pid=6393 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.141890 kernel: audit: type=1103 audit(1769563622.133:877): pid=6393 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.445135 sshd[6393]: Connection closed by 10.200.16.10 port 45986 Jan 28 01:27:02.445976 sshd-session[6389]: pam_unix(sshd:session): session closed for user core Jan 28 01:27:02.445000 audit[6389]: USER_END pid=6389 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.449781 systemd[1]: sshd@20-10.200.8.14:22-10.200.16.10:45986.service: Deactivated successfully. Jan 28 01:27:02.452287 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:27:02.455146 systemd-logind[2535]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:27:02.445000 audit[6389]: CRED_DISP pid=6389 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.456386 systemd-logind[2535]: Removed session 24. 
Jan 28 01:27:02.459533 kernel: audit: type=1106 audit(1769563622.445:878): pid=6389 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.459581 kernel: audit: type=1104 audit(1769563622.445:879): pid=6389 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 28 01:27:02.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.14:22-10.200.16.10:45986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:27:02.660241 kubelet[4040]: E0128 01:27:02.660055 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84b945c4fd-89dj9" podUID="a2440718-b2ad-4d13-a123-aa3f90357d80" Jan 28 01:27:07.565691 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 01:27:07.565800 kernel: audit: type=1130 audit(1769563627.557:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.14:22-10.200.16.10:46002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:27:07.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.14:22-10.200.16.10:46002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 01:27:07.558328 systemd[1]: Started sshd@21-10.200.8.14:22-10.200.16.10:46002.service - OpenSSH per-connection server daemon (10.200.16.10:46002). 
Jan 28 01:27:07.658054 kubelet[4040]: E0128 01:27:07.658018 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zl6z4" podUID="82792313-f307-46ab-a25e-04cde981d984"
Jan 28 01:27:07.660147 kubelet[4040]: E0128 01:27:07.660112 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-fdfk6" podUID="f2bad97b-be33-4a5d-908e-d2048d5b9f4f"
Jan 28 01:27:08.105000 audit[6433]: USER_ACCT pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.107993 sshd-session[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:27:08.109226 sshd[6433]: Accepted publickey for core from 10.200.16.10 port 46002 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY
Jan 28 01:27:08.116173 kernel: audit: type=1101 audit(1769563628.105:882): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.116254 kernel: audit: type=1103 audit(1769563628.106:883): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.116276 kernel: audit: type=1006 audit(1769563628.106:884): pid=6433 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Jan 28 01:27:08.106000 audit[6433]: CRED_ACQ pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.106000 audit[6433]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde86df7e0 a2=3 a3=0 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:27:08.124823 kernel: audit: type=1300 audit(1769563628.106:884): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde86df7e0 a2=3 a3=0 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:27:08.124939 systemd-logind[2535]: New session 25 of user core.
Jan 28 01:27:08.106000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:27:08.126933 kernel: audit: type=1327 audit(1769563628.106:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:27:08.129101 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 28 01:27:08.130000 audit[6433]: USER_START pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.131000 audit[6437]: CRED_ACQ pid=6437 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.139917 kernel: audit: type=1105 audit(1769563628.130:885): pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.139952 kernel: audit: type=1103 audit(1769563628.131:886): pid=6437 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.454138 sshd[6437]: Connection closed by 10.200.16.10 port 46002
Jan 28 01:27:08.455119 sshd-session[6433]: pam_unix(sshd:session): session closed for user core
Jan 28 01:27:08.455000 audit[6433]: USER_END pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.458777 systemd[1]: sshd@21-10.200.8.14:22-10.200.16.10:46002.service: Deactivated successfully.
Jan 28 01:27:08.461623 systemd[1]: session-25.scope: Deactivated successfully.
Jan 28 01:27:08.464877 kernel: audit: type=1106 audit(1769563628.455:887): pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.455000 audit[6433]: CRED_DISP pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.465248 systemd-logind[2535]: Session 25 logged out. Waiting for processes to exit.
Jan 28 01:27:08.466896 systemd-logind[2535]: Removed session 25.
Jan 28 01:27:08.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.14:22-10.200.16.10:46002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:27:08.472971 kernel: audit: type=1104 audit(1769563628.455:888): pid=6433 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:08.658996 kubelet[4040]: E0128 01:27:08.658933 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wlbng" podUID="d9af8dd1-e2bd-462a-8a21-d0c27cf0950b"
Jan 28 01:27:08.741223 update_engine[2538]: I20260128 01:27:08.741181 2538 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 28 01:27:08.741453 update_engine[2538]: I20260128 01:27:08.741252 2538 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 28 01:27:08.741624 update_engine[2538]: I20260128 01:27:08.741595 2538 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 28 01:27:08.757156 update_engine[2538]: E20260128 01:27:08.757125 2538 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Jan 28 01:27:08.757221 update_engine[2538]: I20260128 01:27:08.757195 2538 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 28 01:27:09.658140 kubelet[4040]: E0128 01:27:09.658069 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9c776fd4-lg5zf" podUID="846cab91-e0a1-4344-ab2b-9358f550d758"
Jan 28 01:27:09.658140 kubelet[4040]: E0128 01:27:09.658085 4040 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-848997c984-zqhg5" podUID="d99b8c9d-ad76-485a-94c4-e2c93263797f"
Jan 28 01:27:13.585992 systemd[1]: Started sshd@22-10.200.8.14:22-10.200.16.10:53872.service - OpenSSH per-connection server daemon (10.200.16.10:53872).
Jan 28 01:27:13.589892 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 28 01:27:13.589963 kernel: audit: type=1130 audit(1769563633.585:890): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.14:22-10.200.16.10:53872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:27:13.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.14:22-10.200.16.10:53872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 28 01:27:14.128000 audit[6448]: USER_ACCT pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.137990 kernel: audit: type=1101 audit(1769563634.128:891): pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.138061 sshd[6448]: Accepted publickey for core from 10.200.16.10 port 53872 ssh2: RSA SHA256:VtHR68MlTOJLpw2VkW0PUocZAiO696J+LJJgc3ffFuY
Jan 28 01:27:14.139335 sshd-session[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 01:27:14.137000 audit[6448]: CRED_ACQ pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.151034 kernel: audit: type=1103 audit(1769563634.137:892): pid=6448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.155255 systemd-logind[2535]: New session 26 of user core.
Jan 28 01:27:14.161878 kernel: audit: type=1006 audit(1769563634.137:893): pid=6448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jan 28 01:27:14.163056 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 28 01:27:14.137000 audit[6448]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc253608e0 a2=3 a3=0 items=0 ppid=1 pid=6448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:27:14.174873 kernel: audit: type=1300 audit(1769563634.137:893): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc253608e0 a2=3 a3=0 items=0 ppid=1 pid=6448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 28 01:27:14.137000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:27:14.179889 kernel: audit: type=1327 audit(1769563634.137:893): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 28 01:27:14.179942 kernel: audit: type=1105 audit(1769563634.174:894): pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.174000 audit[6448]: USER_START pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.178000 audit[6453]: CRED_ACQ pid=6453 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.198154 kernel: audit: type=1103 audit(1769563634.178:895): pid=6453 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.492324 sshd[6453]: Connection closed by 10.200.16.10 port 53872
Jan 28 01:27:14.493741 sshd-session[6448]: pam_unix(sshd:session): session closed for user core
Jan 28 01:27:14.493000 audit[6448]: USER_END pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.496609 systemd[1]: sshd@22-10.200.8.14:22-10.200.16.10:53872.service: Deactivated successfully.
Jan 28 01:27:14.498769 systemd[1]: session-26.scope: Deactivated successfully.
Jan 28 01:27:14.500811 systemd-logind[2535]: Session 26 logged out. Waiting for processes to exit.
Jan 28 01:27:14.493000 audit[6448]: CRED_DISP pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.503133 systemd-logind[2535]: Removed session 26.
Jan 28 01:27:14.505817 kernel: audit: type=1106 audit(1769563634.493:896): pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.505974 kernel: audit: type=1104 audit(1769563634.493:897): pid=6448 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 28 01:27:14.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.14:22-10.200.16.10:53872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'