Jan 20 15:09:05.010670 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 12:22:36 -00 2026
Jan 20 15:09:05.010715 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=12b88438810927d105cc313bb8ab13d0435c94d44cc3ab3377801865133595f9
Jan 20 15:09:05.010725 kernel: BIOS-provided physical RAM map:
Jan 20 15:09:05.010734 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 15:09:05.010740 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 15:09:05.010746 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 15:09:05.010753 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 15:09:05.010759 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 15:09:05.010765 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 15:09:05.010771 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 15:09:05.010777 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 20 15:09:05.010785 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 15:09:05.010791 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 15:09:05.010798 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 15:09:05.010805 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 15:09:05.010811 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 15:09:05.010820 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 15:09:05.010826 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 15:09:05.010832 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 15:09:05.010839 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 15:09:05.010845 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 15:09:05.010851 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 15:09:05.010880 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 15:09:05.010886 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 15:09:05.010893 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 15:09:05.010916 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 15:09:05.010926 kernel: NX (Execute Disable) protection: active
Jan 20 15:09:05.010960 kernel: APIC: Static calls initialized
Jan 20 15:09:05.010968 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 20 15:09:05.010974 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 20 15:09:05.010981 kernel: extended physical RAM map:
Jan 20 15:09:05.010987 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 15:09:05.010994 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 15:09:05.011000 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 15:09:05.011021 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 15:09:05.011028 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 15:09:05.011035 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 15:09:05.011059 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 15:09:05.011066 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 20 15:09:05.011072 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 20 15:09:05.011082 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 20 15:09:05.011091 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 20 15:09:05.011098 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 20 15:09:05.011105 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 15:09:05.011149 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 15:09:05.011157 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 15:09:05.011164 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 15:09:05.011171 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 15:09:05.011177 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 15:09:05.011184 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 15:09:05.011194 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 15:09:05.011200 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 15:09:05.011207 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 15:09:05.011214 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 15:09:05.011221 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 15:09:05.011227 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 15:09:05.011234 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 15:09:05.011241 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 15:09:05.011248 kernel: efi: EFI v2.7 by EDK II
Jan 20 15:09:05.011283 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 20 15:09:05.011290 kernel: random: crng init done
Jan 20 15:09:05.011299 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 20 15:09:05.011306 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 20 15:09:05.011313 kernel: secureboot: Secure boot disabled
Jan 20 15:09:05.011320 kernel: SMBIOS 2.8 present.
Jan 20 15:09:05.011326 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 20 15:09:05.011333 kernel: DMI: Memory slots populated: 1/1
Jan 20 15:09:05.011340 kernel: Hypervisor detected: KVM
Jan 20 15:09:05.011346 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 15:09:05.011353 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 15:09:05.011360 kernel: kvm-clock: using sched offset of 11234529139 cycles
Jan 20 15:09:05.011367 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 15:09:05.011376 kernel: tsc: Detected 2445.424 MHz processor
Jan 20 15:09:05.011384 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 15:09:05.011391 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 15:09:05.011398 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 15:09:05.011405 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 15:09:05.011412 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 15:09:05.011419 kernel: Using GB pages for direct mapping
Jan 20 15:09:05.011427 kernel: ACPI: Early table checksum verification disabled
Jan 20 15:09:05.011434 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 15:09:05.011442 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 15:09:05.011449 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 15:09:05.011456 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 15:09:05.011463 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 15:09:05.011470 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 15:09:05.011479 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 15:09:05.011486 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 15:09:05.011493 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 15:09:05.011500 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 15:09:05.011507 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 15:09:05.011514 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 15:09:05.011521 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 15:09:05.011530 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 15:09:05.011537 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 15:09:05.011544 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 15:09:05.011550 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 15:09:05.011557 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 15:09:05.011564 kernel: No NUMA configuration found
Jan 20 15:09:05.011571 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 20 15:09:05.011578 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 20 15:09:05.011588 kernel: Zone ranges:
Jan 20 15:09:05.011595 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 15:09:05.011602 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 20 15:09:05.011609 kernel: Normal empty
Jan 20 15:09:05.011616 kernel: Device empty
Jan 20 15:09:05.011622 kernel: Movable zone start for each node
Jan 20 15:09:05.011629 kernel: Early memory node ranges
Jan 20 15:09:05.011636 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 15:09:05.011645 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 15:09:05.011652 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 15:09:05.011659 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 20 15:09:05.011666 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 20 15:09:05.011673 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 20 15:09:05.011680 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 20 15:09:05.011686 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 20 15:09:05.011696 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 20 15:09:05.011703 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 15:09:05.011717 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 15:09:05.011726 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 15:09:05.011733 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 15:09:05.011740 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 20 15:09:05.011748 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 20 15:09:05.011755 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 20 15:09:05.011762 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 20 15:09:05.011769 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 20 15:09:05.011779 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 15:09:05.011786 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 15:09:05.011793 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 15:09:05.011801 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 15:09:05.011810 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 15:09:05.011817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 15:09:05.011825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 15:09:05.011832 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 15:09:05.011839 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 15:09:05.011846 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 15:09:05.011853 kernel: TSC deadline timer available
Jan 20 15:09:05.011863 kernel: CPU topo: Max. logical packages: 1
Jan 20 15:09:05.011870 kernel: CPU topo: Max. logical dies: 1
Jan 20 15:09:05.011877 kernel: CPU topo: Max. dies per package: 1
Jan 20 15:09:05.011884 kernel: CPU topo: Max. threads per core: 1
Jan 20 15:09:05.011891 kernel: CPU topo: Num. cores per package: 4
Jan 20 15:09:05.011898 kernel: CPU topo: Num. threads per package: 4
Jan 20 15:09:05.011905 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 15:09:05.011912 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 15:09:05.011922 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 15:09:05.011929 kernel: kvm-guest: setup PV sched yield
Jan 20 15:09:05.011936 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 20 15:09:05.011944 kernel: Booting paravirtualized kernel on KVM
Jan 20 15:09:05.011951 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 15:09:05.011959 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 15:09:05.011966 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 15:09:05.011975 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 15:09:05.011983 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 15:09:05.011990 kernel: kvm-guest: PV spinlocks enabled
Jan 20 15:09:05.011997 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 15:09:05.012005 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=12b88438810927d105cc313bb8ab13d0435c94d44cc3ab3377801865133595f9
Jan 20 15:09:05.012013 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 15:09:05.012020 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 15:09:05.012029 kernel: Fallback order for Node 0: 0
Jan 20 15:09:05.012037 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 20 15:09:05.012044 kernel: Policy zone: DMA32
Jan 20 15:09:05.012051 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 15:09:05.012058 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 15:09:05.012065 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 20 15:09:05.012073 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 15:09:05.012082 kernel: Dynamic Preempt: voluntary
Jan 20 15:09:05.012089 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 15:09:05.012097 kernel: rcu: RCU event tracing is enabled.
Jan 20 15:09:05.012105 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 15:09:05.012160 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 15:09:05.012168 kernel: Rude variant of Tasks RCU enabled.
Jan 20 15:09:05.012175 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 15:09:05.012183 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 15:09:05.012193 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 15:09:05.012200 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 15:09:05.012208 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 15:09:05.012215 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 15:09:05.012222 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 15:09:05.012230 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 15:09:05.012237 kernel: Console: colour dummy device 80x25
Jan 20 15:09:05.012247 kernel: printk: legacy console [ttyS0] enabled
Jan 20 15:09:05.012282 kernel: ACPI: Core revision 20240827
Jan 20 15:09:05.012290 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 15:09:05.012297 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 15:09:05.012305 kernel: x2apic enabled
Jan 20 15:09:05.012312 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 15:09:05.012319 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 15:09:05.012329 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 15:09:05.012336 kernel: kvm-guest: setup PV IPIs
Jan 20 15:09:05.012344 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 15:09:05.012351 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 20 15:09:05.012358 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 20 15:09:05.012366 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 15:09:05.012373 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 15:09:05.012382 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 15:09:05.012390 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 15:09:05.012397 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 15:09:05.012404 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 15:09:05.012412 kernel: Speculative Store Bypass: Vulnerable
Jan 20 15:09:05.012419 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 15:09:05.012427 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 15:09:05.012437 kernel: active return thunk: srso_alias_return_thunk
Jan 20 15:09:05.012445 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 15:09:05.012452 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 15:09:05.012459 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 15:09:05.012467 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 15:09:05.012474 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 15:09:05.012481 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 15:09:05.012490 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 15:09:05.012498 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 15:09:05.012505 kernel: Freeing SMP alternatives memory: 32K
Jan 20 15:09:05.012512 kernel: pid_max: default: 32768 minimum: 301
Jan 20 15:09:05.012519 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 15:09:05.012527 kernel: landlock: Up and running.
Jan 20 15:09:05.012534 kernel: SELinux: Initializing.
Jan 20 15:09:05.012543 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 15:09:05.012551 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 15:09:05.012558 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 15:09:05.012565 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 15:09:05.012572 kernel: signal: max sigframe size: 1776
Jan 20 15:09:05.012580 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 15:09:05.012587 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 15:09:05.012596 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 15:09:05.012604 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 15:09:05.012611 kernel: smp: Bringing up secondary CPUs ...
Jan 20 15:09:05.012618 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 15:09:05.012625 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 15:09:05.012633 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 15:09:05.012640 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 20 15:09:05.012649 kernel: Memory: 2439048K/2565800K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 120812K reserved, 0K cma-reserved)
Jan 20 15:09:05.012657 kernel: devtmpfs: initialized
Jan 20 15:09:05.012664 kernel: x86/mm: Memory block size: 128MB
Jan 20 15:09:05.012671 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 15:09:05.012679 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 15:09:05.012686 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 20 15:09:05.012693 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 15:09:05.012703 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 20 15:09:05.012710 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 15:09:05.012718 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 15:09:05.012725 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 15:09:05.012732 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 15:09:05.012739 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 15:09:05.012747 kernel: audit: initializing netlink subsys (disabled)
Jan 20 15:09:05.012756 kernel: audit: type=2000 audit(1768921738.331:1): state=initialized audit_enabled=0 res=1
Jan 20 15:09:05.012764 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 15:09:05.012771 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 15:09:05.012778 kernel: cpuidle: using governor menu
Jan 20 15:09:05.012785 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 15:09:05.012793 kernel: dca service started, version 1.12.1
Jan 20 15:09:05.012800 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 20 15:09:05.012809 kernel: PCI: Using configuration type 1 for base access
Jan 20 15:09:05.012816 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 15:09:05.012824 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 15:09:05.012831 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 15:09:05.012838 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 15:09:05.012845 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 15:09:05.012852 kernel: ACPI: Added _OSI(Module Device)
Jan 20 15:09:05.012862 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 15:09:05.012869 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 15:09:05.012876 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 15:09:05.012884 kernel: ACPI: Interpreter enabled
Jan 20 15:09:05.012891 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 15:09:05.012898 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 15:09:05.012905 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 15:09:05.012915 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 15:09:05.012922 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 15:09:05.012929 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 15:09:05.013220 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 15:09:05.013440 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 15:09:05.013617 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 15:09:05.013632 kernel: PCI host bridge to bus 0000:00
Jan 20 15:09:05.013804 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 15:09:05.013962 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 15:09:05.015225 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 15:09:05.015509 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 20 15:09:05.015725 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 20 15:09:05.015939 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 20 15:09:05.016221 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 15:09:05.016625 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 15:09:05.016864 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 15:09:05.017090 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 20 15:09:05.017493 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 20 15:09:05.017713 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 20 15:09:05.017934 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 15:09:05.018327 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 15:09:05.018552 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 20 15:09:05.018765 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 20 15:09:05.018985 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 20 15:09:05.019382 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 15:09:05.019604 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 20 15:09:05.019817 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 20 15:09:05.020038 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 20 15:09:05.020527 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 15:09:05.020753 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 20 15:09:05.021012 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 20 15:09:05.021348 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 20 15:09:05.021569 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 20 15:09:05.021790 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 15:09:05.022015 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 15:09:05.022343 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 15:09:05.022563 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 20 15:09:05.022775 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 20 15:09:05.022997 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 15:09:05.023331 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 20 15:09:05.023348 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 15:09:05.023360 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 15:09:05.023372 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 15:09:05.023383 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 15:09:05.023394 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 15:09:05.023405 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 15:09:05.023420 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 15:09:05.023431 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 15:09:05.023442 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 15:09:05.023454 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 15:09:05.023465 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 15:09:05.023477 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 15:09:05.023488 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 15:09:05.023501 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 15:09:05.023513 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 15:09:05.023524 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 15:09:05.023535 kernel: iommu: Default domain type: Translated
Jan 20 15:09:05.023546 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 15:09:05.023557 kernel: efivars: Registered efivars operations
Jan 20 15:09:05.023568 kernel: PCI: Using ACPI for IRQ routing
Jan 20 15:09:05.023579 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 15:09:05.023593 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 15:09:05.023604 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 20 15:09:05.023615 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 20 15:09:05.023626 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 20 15:09:05.023637 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 20 15:09:05.023648 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 20 15:09:05.023659 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 20 15:09:05.023673 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 20 15:09:05.023887 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 15:09:05.024357 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 15:09:05.024576 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 15:09:05.024590 kernel: vgaarb: loaded
Jan 20 15:09:05.024602 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 15:09:05.024618 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 15:09:05.024629 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 15:09:05.024640 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 15:09:05.024653 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 15:09:05.024666 kernel: pnp: PnP ACPI init
Jan 20 15:09:05.024924 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 20 15:09:05.024941 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 15:09:05.024957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 15:09:05.024968 kernel: NET: Registered PF_INET protocol family
Jan 20 15:09:05.024980 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 15:09:05.024991 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 15:09:05.025002 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 15:09:05.025014 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 15:09:05.025175 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 15:09:05.025216 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 15:09:05.025228 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 15:09:05.025240 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 15:09:05.025284 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 15:09:05.025297 kernel: NET: Registered PF_XDP protocol family
Jan 20 15:09:05.025522 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 20 15:09:05.025780 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 20 15:09:05.025980 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 15:09:05.026246 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 15:09:05.026491 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 15:09:05.026689 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 20 15:09:05.026885 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 20 15:09:05.027080 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 20 15:09:05.027187 kernel: PCI: CLS 0 bytes, default 64
Jan 20 15:09:05.027199 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 20 15:09:05.027212 kernel: Initialise system trusted keyrings
Jan 20 15:09:05.027223 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 15:09:05.027235 kernel: Key type asymmetric registered
Jan 20 15:09:05.027246 kernel: Asymmetric key parser 'x509' registered
Jan 20 15:09:05.027292 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 15:09:05.027333 kernel: io scheduler mq-deadline registered
Jan 20 15:09:05.027345 kernel: io scheduler kyber registered
Jan 20 15:09:05.027357 kernel: io scheduler bfq registered
Jan 20 15:09:05.027368 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 15:09:05.027380 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 15:09:05.027392 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 15:09:05.027404 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 15:09:05.027441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 15:09:05.027453 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 15:09:05.027465 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 15:09:05.027477 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 15:09:05.027512 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 15:09:05.027739 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 15:09:05.027755 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 15:09:05.027966 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 15:09:05.028237 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T15:09:02 UTC (1768921742)
Jan 20 15:09:05.028489 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 15:09:05.028543 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 15:09:05.028555 kernel: efifb: probing for efifb
Jan 20 15:09:05.028567 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 20 15:09:05.028579 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 20 15:09:05.028590 kernel: efifb: scrolling: redraw
Jan 20 15:09:05.028602 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 15:09:05.028613 kernel: Console: switching to colour frame buffer device 160x50
Jan 20 15:09:05.028625 kernel: fb0: EFI VGA frame buffer device
Jan 20 15:09:05.028663 kernel: pstore: Using crash dump compression: deflate
Jan 20 15:09:05.028675 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 15:09:05.028686 kernel: NET: Registered PF_INET6 protocol family
Jan 20 15:09:05.028698 kernel: Segment Routing with IPv6
Jan 20 15:09:05.028709 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 15:09:05.028746 kernel: NET: Registered PF_PACKET protocol family
Jan 20 15:09:05.028758 kernel: Key type dns_resolver registered
Jan 20 15:09:05.028794 kernel: IPI shorthand broadcast: enabled
Jan 20 15:09:05.028806 kernel: sched_clock: Marking stable (2867022983, 2575892407)->(6188517213,
-745601823) Jan 20 15:09:05.028817 kernel: registered taskstats version 1 Jan 20 15:09:05.028829 kernel: Loading compiled-in X.509 certificates Jan 20 15:09:05.028865 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 34a030021dd6c1575d5ad60346eaf4cdadaee6ef' Jan 20 15:09:05.028877 kernel: Demotion targets for Node 0: null Jan 20 15:09:05.028888 kernel: Key type .fscrypt registered Jan 20 15:09:05.028924 kernel: Key type fscrypt-provisioning registered Jan 20 15:09:05.028935 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 15:09:05.028947 kernel: ima: Allocated hash algorithm: sha1 Jan 20 15:09:05.028959 kernel: ima: No architecture policies found Jan 20 15:09:05.028970 kernel: clk: Disabling unused clocks Jan 20 15:09:05.028982 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 20 15:09:05.028993 kernel: Write protecting the kernel read-only data: 47104k Jan 20 15:09:05.029029 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 20 15:09:05.029041 kernel: Run /init as init process Jan 20 15:09:05.029052 kernel: with arguments: Jan 20 15:09:05.029064 kernel: /init Jan 20 15:09:05.029075 kernel: with environment: Jan 20 15:09:05.029089 kernel: HOME=/ Jan 20 15:09:05.029101 kernel: TERM=linux Jan 20 15:09:05.029177 kernel: SCSI subsystem initialized Jan 20 15:09:05.029189 kernel: libata version 3.00 loaded. 
Jan 20 15:09:05.029450 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 15:09:05.029467 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 15:09:05.029678 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 20 15:09:05.029892 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 20 15:09:05.030103 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 15:09:05.030466 kernel: scsi host0: ahci
Jan 20 15:09:05.030700 kernel: scsi host1: ahci
Jan 20 15:09:05.030927 kernel: scsi host2: ahci
Jan 20 15:09:05.031437 kernel: scsi host3: ahci
Jan 20 15:09:05.031675 kernel: scsi host4: ahci
Jan 20 15:09:05.031953 kernel: scsi host5: ahci
Jan 20 15:09:05.031970 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Jan 20 15:09:05.031982 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Jan 20 15:09:05.031994 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Jan 20 15:09:05.032005 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Jan 20 15:09:05.032017 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Jan 20 15:09:05.032029 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Jan 20 15:09:05.032074 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 20 15:09:05.032086 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 20 15:09:05.032097 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 20 15:09:05.032109 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 20 15:09:05.032178 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 20 15:09:05.032190 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 15:09:05.032202 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 15:09:05.032244 kernel: ata3.00: applying bridge limits
Jan 20 15:09:05.032289 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 20 15:09:05.032302 kernel: ata3.00: LPM support broken, forcing max_power
Jan 20 15:09:05.032313 kernel: ata3.00: configured for UDMA/100
Jan 20 15:09:05.032575 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 20 15:09:05.032807 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 20 15:09:05.033066 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Jan 20 15:09:05.033082 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 15:09:05.033094 kernel: GPT:16515071 != 27000831
Jan 20 15:09:05.033105 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 15:09:05.033176 kernel: GPT:16515071 != 27000831
Jan 20 15:09:05.033188 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 15:09:05.033199 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 15:09:05.033513 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 15:09:05.033530 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 15:09:05.033763 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 20 15:09:05.033778 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 15:09:05.033790 kernel: device-mapper: uevent: version 1.0.3
Jan 20 15:09:05.033802 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 20 15:09:05.033850 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 20 15:09:05.033863 kernel: raid6: avx2x4 gen() 29333 MB/s
Jan 20 15:09:05.033874 kernel: raid6: avx2x2 gen() 34984 MB/s
Jan 20 15:09:05.033886 kernel: raid6: avx2x1 gen() 27423 MB/s
Jan 20 15:09:05.033898 kernel: raid6: using algorithm avx2x2 gen() 34984 MB/s
Jan 20 15:09:05.033912 kernel: raid6: .... xor() 29749 MB/s, rmw enabled
Jan 20 15:09:05.033924 kernel: raid6: using avx2x2 recovery algorithm
Jan 20 15:09:05.033936 kernel: xor: automatically using best checksumming function avx
Jan 20 15:09:05.033975 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 15:09:05.033987 kernel: BTRFS: device fsid 17137bed-8163-406c-98f9-6d4bb6770bf0 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (181)
Jan 20 15:09:05.033999 kernel: BTRFS info (device dm-0): first mount of filesystem 17137bed-8163-406c-98f9-6d4bb6770bf0
Jan 20 15:09:05.034011 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 20 15:09:05.034023 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 15:09:05.034034 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 20 15:09:05.034047 kernel: loop: module loaded
Jan 20 15:09:05.034340 kernel: loop0: detected capacity change from 0 to 100552
Jan 20 15:09:05.034355 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 20 15:09:05.034368 systemd[1]: Successfully made /usr/ read-only.
Jan 20 15:09:05.034382 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 20 15:09:05.034396 systemd[1]: Detected virtualization kvm.
Jan 20 15:09:05.034440 systemd[1]: Detected architecture x86-64.
Jan 20 15:09:05.034452 systemd[1]: Running in initrd.
Jan 20 15:09:05.034464 systemd[1]: No hostname configured, using default hostname.
Jan 20 15:09:05.034476 systemd[1]: Hostname set to .
Jan 20 15:09:05.034488 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 20 15:09:05.034500 systemd[1]: Queued start job for default target initrd.target.
Jan 20 15:09:05.034511 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 15:09:05.034549 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 15:09:05.034561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 15:09:05.034574 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 15:09:05.034587 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 15:09:05.034600 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 15:09:05.034641 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 15:09:05.034654 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 15:09:05.034666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 15:09:05.034679 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 20 15:09:05.034690 systemd[1]: Reached target paths.target - Path Units.
Jan 20 15:09:05.034703 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 15:09:05.034715 systemd[1]: Reached target swap.target - Swaps.
Jan 20 15:09:05.034750 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 15:09:05.034763 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 15:09:05.034775 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 15:09:05.034787 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 20 15:09:05.034800 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 15:09:05.034812 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 20 15:09:05.034824 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 15:09:05.034860 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 15:09:05.034872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 15:09:05.034885 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 15:09:05.034897 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 15:09:05.034909 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 15:09:05.034921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 15:09:05.034934 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 15:09:05.034971 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 20 15:09:05.034984 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 15:09:05.034996 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 15:09:05.035007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 15:09:05.035044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 15:09:05.035056 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 15:09:05.035069 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 15:09:05.035081 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 15:09:05.035094 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 15:09:05.035195 systemd-journald[316]: Collecting audit messages is enabled.
Jan 20 15:09:05.035292 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 15:09:05.035334 kernel: audit: type=1130 audit(1768921745.023:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.035347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 15:09:05.035384 systemd-journald[316]: Journal started
Jan 20 15:09:05.035408 systemd-journald[316]: Runtime Journal (/run/log/journal/0dec5f1e29cd46a284e133387715869b) is 6M, max 48M, 42M free.
Jan 20 15:09:05.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.046186 kernel: audit: type=1130 audit(1768921745.037:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.046212 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 15:09:05.050758 kernel: audit: type=1130 audit(1768921745.050:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.054302 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 15:09:05.073221 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 15:09:05.077186 kernel: Bridge firewalling registered
Jan 20 15:09:05.077241 systemd-modules-load[319]: Inserted module 'br_netfilter'
Jan 20 15:09:05.082950 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 15:09:05.084939 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 15:09:05.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.086039 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 15:09:05.094892 kernel: audit: type=1130 audit(1768921745.086:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.095189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 15:09:05.111972 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 15:09:05.127454 kernel: audit: type=1130 audit(1768921745.112:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.123488 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 15:09:05.138333 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 20 15:09:05.144399 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 15:09:05.159061 kernel: audit: type=1130 audit(1768921745.145:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.159307 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 15:09:05.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.174039 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 15:09:05.187018 kernel: audit: type=1130 audit(1768921745.166:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.187044 kernel: audit: type=1130 audit(1768921745.175:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.187056 kernel: audit: type=1334 audit(1768921745.175:10): prog-id=6 op=LOAD
Jan 20 15:09:05.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.175000 audit: BPF prog-id=6 op=LOAD
Jan 20 15:09:05.185328 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 15:09:05.202295 dracut-cmdline[352]: dracut-109
Jan 20 15:09:05.209192 dracut-cmdline[352]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=12b88438810927d105cc313bb8ab13d0435c94d44cc3ab3377801865133595f9
Jan 20 15:09:05.252467 systemd-resolved[361]: Positive Trust Anchors:
Jan 20 15:09:05.252498 systemd-resolved[361]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 15:09:05.252503 systemd-resolved[361]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 20 15:09:05.252530 systemd-resolved[361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 15:09:05.285083 systemd-resolved[361]: Defaulting to hostname 'linux'.
Jan 20 15:09:05.299233 kernel: audit: type=1130 audit(1768921745.287:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.286825 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 15:09:05.293299 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 15:09:05.691341 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 15:09:05.709187 kernel: iscsi: registered transport (tcp)
Jan 20 15:09:05.743937 kernel: iscsi: registered transport (qla4xxx)
Jan 20 15:09:05.744027 kernel: QLogic iSCSI HBA Driver
Jan 20 15:09:05.784054 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 20 15:09:05.809934 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 15:09:05.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.822433 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 15:09:05.833860 kernel: audit: type=1130 audit(1768921745.819:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.891517 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 15:09:05.904432 kernel: audit: type=1130 audit(1768921745.892:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.895015 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 15:09:05.935032 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 15:09:05.964830 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 15:09:05.982561 kernel: audit: type=1130 audit(1768921745.972:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:05.982000 audit: BPF prog-id=7 op=LOAD
Jan 20 15:09:05.983922 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 15:09:05.998790 kernel: audit: type=1334 audit(1768921745.982:15): prog-id=7 op=LOAD
Jan 20 15:09:05.999009 kernel: audit: type=1334 audit(1768921745.982:16): prog-id=8 op=LOAD
Jan 20 15:09:05.982000 audit: BPF prog-id=8 op=LOAD
Jan 20 15:09:06.034615 systemd-udevd[565]: Using default interface naming scheme 'v257'.
Jan 20 15:09:06.054728 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 15:09:06.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:06.059410 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 15:09:06.120712 dracut-pre-trigger[616]: rd.md=0: removing MD RAID activation
Jan 20 15:09:06.195244 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 15:09:06.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:06.204357 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 15:09:06.209744 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 15:09:06.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:06.238000 audit: BPF prog-id=9 op=LOAD
Jan 20 15:09:06.239851 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 15:09:06.315442 systemd-networkd[726]: lo: Link UP
Jan 20 15:09:06.315467 systemd-networkd[726]: lo: Gained carrier
Jan 20 15:09:06.321052 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 15:09:06.324683 systemd[1]: Reached target network.target - Network.
Jan 20 15:09:06.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:06.336323 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 15:09:06.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:06.347079 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 15:09:06.410167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 15:09:06.428658 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 15:09:06.454047 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 15:09:06.473180 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 15:09:06.482923 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 15:09:06.496566 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 15:09:06.938424 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 15:09:06.954841 kernel: AES CTR mode by8 optimization enabled
Jan 20 15:09:06.938755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 15:09:06.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:06.958368 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 15:09:06.966622 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 15:09:06.985232 disk-uuid[805]: Primary Header is updated.
Jan 20 15:09:06.985232 disk-uuid[805]: Secondary Entries is updated.
Jan 20 15:09:06.985232 disk-uuid[805]: Secondary Header is updated.
Jan 20 15:09:07.002818 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 20 15:09:07.010179 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 15:09:07.010209 systemd-networkd[726]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 15:09:07.028801 systemd-networkd[726]: eth0: Link UP
Jan 20 15:09:07.031912 systemd-networkd[726]: eth0: Gained carrier
Jan 20 15:09:07.031929 systemd-networkd[726]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 15:09:07.047247 systemd-networkd[726]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 15:09:07.061680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 15:09:07.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:07.109358 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 15:09:07.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:07.115754 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 15:09:07.117056 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 15:09:07.132332 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 15:09:07.139752 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 15:09:07.176215 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 15:09:07.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:08.069014 disk-uuid[823]: Warning: The kernel is still using the old partition table. Jan 20 15:09:08.069014 disk-uuid[823]: The new table will be used at the next reboot or after you Jan 20 15:09:08.069014 disk-uuid[823]: run partprobe(8) or kpartx(8) Jan 20 15:09:08.069014 disk-uuid[823]: The operation has completed successfully. Jan 20 15:09:08.090541 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 15:09:08.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:08.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:08.090768 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 15:09:08.095087 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 20 15:09:08.108928 systemd-networkd[726]: eth0: Gained IPv6LL Jan 20 15:09:08.161234 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860) Jan 20 15:09:08.165318 kernel: BTRFS info (device vda6): first mount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:09:08.165384 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 15:09:08.178043 kernel: BTRFS info (device vda6): turning on async discard Jan 20 15:09:08.178452 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 15:09:08.194311 kernel: BTRFS info (device vda6): last unmount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:09:08.196925 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 15:09:08.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:08.205947 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 20 15:09:08.820842 ignition[879]: Ignition 2.24.0 Jan 20 15:09:08.820981 ignition[879]: Stage: fetch-offline Jan 20 15:09:08.822102 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jan 20 15:09:08.822220 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:09:08.833320 ignition[879]: parsed url from cmdline: "" Jan 20 15:09:08.833349 ignition[879]: no config URL provided Jan 20 15:09:08.835710 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 15:09:08.835751 ignition[879]: no config at "/usr/lib/ignition/user.ign" Jan 20 15:09:08.835874 ignition[879]: op(1): [started] loading QEMU firmware config module Jan 20 15:09:08.835880 ignition[879]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 15:09:08.863925 ignition[879]: op(1): [finished] loading QEMU firmware config module Jan 20 15:09:08.863993 ignition[879]: QEMU firmware config was not found. Ignoring... Jan 20 15:09:09.147696 ignition[879]: parsing config with SHA512: 14cb8e61868384e6b4e18881ddba5d9dc34dca796b7e1ac7fb6e9b4cb88bbdf3d675bde2e51ebd061417920593caf0968b71bae9dc07306c31938810fbad5886 Jan 20 15:09:09.187843 unknown[879]: fetched base config from "system" Jan 20 15:09:09.187859 unknown[879]: fetched user config from "qemu" Jan 20 15:09:09.196071 ignition[879]: fetch-offline: fetch-offline passed Jan 20 15:09:09.199579 ignition[879]: Ignition finished successfully Jan 20 15:09:09.204085 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 15:09:09.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:09.206058 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 20 15:09:09.207586 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 20 15:09:09.302996 ignition[888]: Ignition 2.24.0 Jan 20 15:09:09.303055 ignition[888]: Stage: kargs Jan 20 15:09:09.305447 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jan 20 15:09:09.305500 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:09:09.307394 ignition[888]: kargs: kargs passed Jan 20 15:09:09.307454 ignition[888]: Ignition finished successfully Jan 20 15:09:09.323896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 15:09:09.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:09.329176 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 15:09:09.598066 ignition[896]: Ignition 2.24.0 Jan 20 15:09:09.598104 ignition[896]: Stage: disks Jan 20 15:09:09.598483 ignition[896]: no configs at "/usr/lib/ignition/base.d" Jan 20 15:09:09.598494 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:09:09.600940 ignition[896]: disks: disks passed Jan 20 15:09:09.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:09.607473 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 15:09:09.600991 ignition[896]: Ignition finished successfully Jan 20 15:09:09.611217 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 15:09:09.614984 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 15:09:09.621292 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 15:09:09.628398 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 15:09:09.634312 systemd[1]: Reached target basic.target - Basic System. 
Jan 20 15:09:09.647511 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 15:09:09.836158 systemd-fsck[906]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 15:09:09.842958 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 15:09:09.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:09.853348 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 15:09:10.051527 kernel: EXT4-fs (vda9): mounted filesystem 258d228c-90db-4a07-8ba3-cf3df974c261 r/w with ordered data mode. Quota mode: none. Jan 20 15:09:10.056194 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 15:09:10.061060 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 15:09:10.069315 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 15:09:10.075192 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 15:09:10.081472 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 15:09:10.081590 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 15:09:10.081632 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 15:09:10.115360 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 15:09:10.121449 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 15:09:10.135510 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914) Jan 20 15:09:10.135534 kernel: BTRFS info (device vda6): first mount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:09:10.135546 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 15:09:10.144673 kernel: BTRFS info (device vda6): turning on async discard Jan 20 15:09:10.144712 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 15:09:10.146664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 15:09:10.469816 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 15:09:10.493186 kernel: kauditd_printk_skb: 17 callbacks suppressed Jan 20 15:09:10.493213 kernel: audit: type=1130 audit(1768921750.472:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:10.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:10.475515 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 15:09:10.500364 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 15:09:10.516175 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 15:09:10.526389 kernel: BTRFS info (device vda6): last unmount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:09:10.560480 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 15:09:10.572560 kernel: audit: type=1130 audit(1768921750.562:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:09:10.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:10.614715 ignition[1010]: INFO : Ignition 2.24.0 Jan 20 15:09:10.614715 ignition[1010]: INFO : Stage: mount Jan 20 15:09:10.623239 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 15:09:10.623239 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:09:10.634759 ignition[1010]: INFO : mount: mount passed Jan 20 15:09:10.634759 ignition[1010]: INFO : Ignition finished successfully Jan 20 15:09:10.649543 kernel: audit: type=1130 audit(1768921750.638:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:10.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:10.637322 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 15:09:10.641251 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 15:09:11.056434 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 15:09:11.101229 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023) Jan 20 15:09:11.108697 kernel: BTRFS info (device vda6): first mount of filesystem 942b9c6f-515e-4c56-bf89-1c8ad8ddeab7 Jan 20 15:09:11.108732 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 15:09:11.117227 kernel: BTRFS info (device vda6): turning on async discard Jan 20 15:09:11.117469 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 15:09:11.120093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 15:09:11.193170 ignition[1040]: INFO : Ignition 2.24.0 Jan 20 15:09:11.193170 ignition[1040]: INFO : Stage: files Jan 20 15:09:11.198642 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 15:09:11.198642 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:09:11.198642 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Jan 20 15:09:11.285683 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 15:09:11.285683 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 15:09:11.296020 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 15:09:11.296020 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 15:09:11.296020 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 15:09:11.294704 unknown[1040]: wrote ssh authorized keys file for user: core Jan 20 15:09:11.312296 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 15:09:11.312296 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 15:09:11.393457 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 15:09:11.518666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 15:09:11.525391 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 15:09:11.597236 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 15:09:11.597236 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 15:09:11.597236 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 15:09:12.025911 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 15:09:12.457237 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 15:09:12.457237 ignition[1040]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 15:09:12.469198 ignition[1040]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 15:09:12.475763 ignition[1040]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 15:09:12.475763 ignition[1040]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 15:09:12.475763 ignition[1040]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 15:09:12.475763 ignition[1040]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 15:09:12.475763 ignition[1040]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 15:09:12.475763 ignition[1040]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 15:09:12.475763 ignition[1040]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 15:09:12.540415 ignition[1040]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 15:09:12.546589 ignition[1040]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 15:09:12.551682 ignition[1040]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Jan 20 15:09:12.551682 ignition[1040]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 15:09:12.551682 ignition[1040]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 15:09:12.551682 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 15:09:12.551682 ignition[1040]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 15:09:12.551682 ignition[1040]: INFO : files: files passed Jan 20 15:09:12.551682 ignition[1040]: INFO : Ignition finished successfully Jan 20 15:09:12.581698 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 15:09:12.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.589906 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 15:09:12.602957 kernel: audit: type=1130 audit(1768921752.587:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.610349 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 15:09:12.618014 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 15:09:12.618233 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 15:09:12.644862 kernel: audit: type=1130 audit(1768921752.624:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:09:12.644895 kernel: audit: type=1131 audit(1768921752.625:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.638999 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 15:09:12.659232 kernel: audit: type=1130 audit(1768921752.649:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.650220 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 20 15:09:12.669729 initrd-setup-root-after-ignition[1071]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 15:09:12.674409 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 15:09:12.674409 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 15:09:12.687106 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 15:09:12.674844 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 15:09:12.768911 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 15:09:12.769087 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 15:09:12.795011 kernel: audit: type=1130 audit(1768921752.776:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.795046 kernel: audit: type=1131 audit(1768921752.776:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.776505 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 15:09:12.796155 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 20 15:09:12.806954 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 15:09:12.811823 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 15:09:12.866611 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 15:09:12.880019 kernel: audit: type=1130 audit(1768921752.867:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.869726 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 15:09:12.903054 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 15:09:12.903309 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 15:09:12.906059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 15:09:12.917070 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 15:09:12.925861 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 15:09:12.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:12.926065 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 15:09:12.936017 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 15:09:12.942536 systemd[1]: Stopped target basic.target - Basic System. Jan 20 15:09:12.948443 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 20 15:09:12.950317 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 15:09:12.959103 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 15:09:12.967039 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 15:09:12.981362 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 15:09:12.983939 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 15:09:12.991764 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 15:09:13.003940 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 15:09:13.012440 systemd[1]: Stopped target swap.target - Swaps. Jan 20 15:09:13.016074 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 15:09:13.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.016349 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 15:09:13.029386 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 15:09:13.035513 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 15:09:13.041834 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 15:09:13.048333 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 15:09:13.049919 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 15:09:13.050048 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 15:09:13.063720 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 15:09:13.063868 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 20 15:09:13.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.065932 systemd[1]: Stopped target paths.target - Path Units. Jan 20 15:09:13.074391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 15:09:13.089255 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 15:09:13.099508 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 15:09:13.100931 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 15:09:13.108645 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 15:09:13.108791 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 15:09:13.115942 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 15:09:13.116091 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 15:09:13.118485 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 15:09:13.118619 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 15:09:13.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.131166 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 15:09:13.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:09:13.131409 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 15:09:13.138197 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 15:09:13.138451 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 15:09:13.145890 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 15:09:13.167738 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 15:09:13.170663 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 15:09:13.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.170886 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 15:09:13.179048 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 15:09:13.179352 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 15:09:13.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:09:13.202189 ignition[1097]: INFO : Ignition 2.24.0 Jan 20 15:09:13.202189 ignition[1097]: INFO : Stage: umount Jan 20 15:09:13.202189 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 15:09:13.202189 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 15:09:13.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.189666 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 15:09:13.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.228421 ignition[1097]: INFO : umount: umount passed Jan 20 15:09:13.228421 ignition[1097]: INFO : Ignition finished successfully Jan 20 15:09:13.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.189764 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 15:09:13.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.199205 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 15:09:13.204645 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 15:09:13.204764 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 20 15:09:13.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.206640 systemd[1]: Stopped target network.target - Network. Jan 20 15:09:13.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.216012 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 15:09:13.216090 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 15:09:13.225061 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 15:09:13.225204 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 15:09:13.231221 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 15:09:13.231339 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 15:09:13.240331 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 15:09:13.240423 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 15:09:13.248490 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 20 15:09:13.259406 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 15:09:13.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.263333 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 15:09:13.263476 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 15:09:13.268848 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 15:09:13.268991 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 15:09:13.270232 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 15:09:13.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.270336 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 15:09:13.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.282907 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 15:09:13.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.283189 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 15:09:13.325860 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 15:09:13.326052 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 15:09:13.335236 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jan 20 15:09:13.338203 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 15:09:13.338247 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 15:09:13.349047 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 15:09:13.357977 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 15:09:13.358049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 15:09:13.364174 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 15:09:13.364259 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 15:09:13.424000 audit: BPF prog-id=6 op=UNLOAD Jan 20 15:09:13.372944 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 15:09:13.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.373014 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 15:09:13.435000 audit: BPF prog-id=9 op=UNLOAD Jan 20 15:09:13.379010 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 15:09:13.425039 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 15:09:13.425404 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 15:09:13.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:09:13.430870 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 15:09:13.430969 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 15:09:13.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.437906 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 15:09:13.437959 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 15:09:13.444253 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 15:09:13.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.444363 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 15:09:13.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.452758 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 15:09:13.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.452812 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 15:09:13.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.460841 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 20 15:09:13.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.460897 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 15:09:13.474221 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 15:09:13.480167 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 15:09:13.480230 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 15:09:13.481998 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 15:09:13.482049 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 15:09:13.488251 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 15:09:13.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.488344 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 15:09:13.497508 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 15:09:13.497560 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 15:09:13.502019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 15:09:13.502071 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 15:09:13.535082 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 15:09:13.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:13.535463 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 15:09:13.560518 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 15:09:13.560671 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 15:09:13.566836 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 15:09:13.573639 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 15:09:13.608826 systemd[1]: Switching root. Jan 20 15:09:13.652985 systemd-journald[316]: Journal stopped Jan 20 15:09:15.387720 systemd-journald[316]: Received SIGTERM from PID 1 (systemd). Jan 20 15:09:15.387800 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 15:09:15.387820 kernel: SELinux: policy capability open_perms=1 Jan 20 15:09:15.387832 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 15:09:15.387868 kernel: SELinux: policy capability always_check_network=0 Jan 20 15:09:15.387882 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 15:09:15.387893 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 15:09:15.387936 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 15:09:15.387952 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 15:09:15.387967 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 15:09:15.387978 systemd[1]: Successfully loaded SELinux policy in 81.759ms. Jan 20 15:09:15.388014 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.206ms. 
Jan 20 15:09:15.388028 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 15:09:15.388071 systemd[1]: Detected virtualization kvm. Jan 20 15:09:15.388088 systemd[1]: Detected architecture x86-64. Jan 20 15:09:15.388100 systemd[1]: Detected first boot. Jan 20 15:09:15.388166 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 15:09:15.388183 zram_generator::config[1143]: No configuration found. Jan 20 15:09:15.388206 kernel: Guest personality initialized and is inactive Jan 20 15:09:15.388217 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 15:09:15.388229 kernel: Initialized host personality Jan 20 15:09:15.388244 kernel: NET: Registered PF_VSOCK protocol family Jan 20 15:09:15.388304 systemd[1]: Populated /etc with preset unit settings. Jan 20 15:09:15.388317 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 15:09:15.388329 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 15:09:15.388340 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 15:09:15.388360 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 15:09:15.388371 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 15:09:15.388386 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 15:09:15.388398 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 15:09:15.388409 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Jan 20 15:09:15.388421 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 15:09:15.388433 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 15:09:15.388444 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 15:09:15.388458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 15:09:15.388470 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 15:09:15.388482 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 15:09:15.388493 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 15:09:15.388505 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 15:09:15.388517 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 15:09:15.388528 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 15:09:15.388542 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 15:09:15.388554 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 15:09:15.388568 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 15:09:15.388584 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 15:09:15.388598 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 15:09:15.388610 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 15:09:15.388624 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 15:09:15.388636 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 20 15:09:15.388648 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 15:09:15.388695 systemd[1]: Reached target slices.target - Slice Units. Jan 20 15:09:15.388707 systemd[1]: Reached target swap.target - Swaps. Jan 20 15:09:15.388738 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 15:09:15.388750 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 15:09:15.388762 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 15:09:15.388773 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 15:09:15.388785 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 15:09:15.388797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 15:09:15.388808 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 15:09:15.388822 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 15:09:15.388833 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 15:09:15.388845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 15:09:15.388857 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 15:09:15.388869 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 15:09:15.388881 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 15:09:15.388892 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 15:09:15.388905 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:09:15.388917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 20 15:09:15.388929 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 15:09:15.388947 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 15:09:15.388968 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 15:09:15.388982 systemd[1]: Reached target machines.target - Containers. Jan 20 15:09:15.388997 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 15:09:15.389009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 15:09:15.389020 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 15:09:15.389032 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 15:09:15.389043 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 15:09:15.389058 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 15:09:15.389070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 15:09:15.389084 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 15:09:15.389095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 15:09:15.389107 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 15:09:15.389166 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 15:09:15.389207 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 15:09:15.389229 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 15:09:15.389251 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 20 15:09:15.389334 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 15:09:15.389358 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 15:09:15.389375 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 15:09:15.389395 kernel: ACPI: bus type drm_connector registered Jan 20 15:09:15.389468 kernel: fuse: init (API version 7.41) Jan 20 15:09:15.389488 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 15:09:15.389507 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 15:09:15.389527 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 15:09:15.389544 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 15:09:15.389721 systemd-journald[1218]: Collecting audit messages is enabled. Jan 20 15:09:15.389807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:09:15.389830 systemd-journald[1218]: Journal started Jan 20 15:09:15.389860 systemd-journald[1218]: Runtime Journal (/run/log/journal/0dec5f1e29cd46a284e133387715869b) is 6M, max 48M, 42M free. Jan 20 15:09:14.901000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 15:09:15.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:09:15.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.299000 audit: BPF prog-id=14 op=UNLOAD Jan 20 15:09:15.299000 audit: BPF prog-id=13 op=UNLOAD Jan 20 15:09:15.300000 audit: BPF prog-id=15 op=LOAD Jan 20 15:09:15.301000 audit: BPF prog-id=16 op=LOAD Jan 20 15:09:15.301000 audit: BPF prog-id=17 op=LOAD Jan 20 15:09:15.385000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 15:09:15.385000 audit[1218]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffcec392a20 a2=4000 a3=0 items=0 ppid=1 pid=1218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:15.385000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 15:09:14.575745 systemd[1]: Queued start job for default target multi-user.target. Jan 20 15:09:14.592085 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 15:09:14.593091 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 15:09:14.593834 systemd[1]: systemd-journald.service: Consumed 1.174s CPU time. Jan 20 15:09:15.400917 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 15:09:15.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.402886 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 15:09:15.406515 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 20 15:09:15.410191 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 15:09:15.413427 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 15:09:15.417933 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 15:09:15.423491 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 15:09:15.427246 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 15:09:15.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.431499 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 15:09:15.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.438413 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 15:09:15.438663 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 15:09:15.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.446087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 15:09:15.446601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 20 15:09:15.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.453929 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 15:09:15.454681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 15:09:15.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.459539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 15:09:15.459901 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 15:09:15.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.464785 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 15:09:15.466944 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 20 15:09:15.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.473725 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 15:09:15.475020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 15:09:15.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.482916 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 15:09:15.483956 kernel: kauditd_printk_skb: 77 callbacks suppressed Jan 20 15:09:15.484551 kernel: audit: type=1130 audit(1768921755.481:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.500624 kernel: audit: type=1131 audit(1768921755.481:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:09:15.506249 kernel: audit: type=1130 audit(1768921755.504:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.514575 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 15:09:15.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.526682 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 15:09:15.536796 kernel: audit: type=1130 audit(1768921755.524:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.536872 kernel: audit: type=1130 audit(1768921755.536:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.544367 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Jan 20 15:09:15.549211 kernel: audit: type=1130 audit(1768921755.548:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.558063 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 15:09:15.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.570209 kernel: audit: type=1130 audit(1768921755.561:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:15.571482 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 15:09:15.575639 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 15:09:15.581059 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 15:09:15.585852 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 15:09:15.587034 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 15:09:15.587087 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 15:09:15.593438 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jan 20 15:09:15.597195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 15:09:15.597368 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 15:09:15.609001 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 15:09:15.613491 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 15:09:15.616972 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 15:09:15.619791 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 15:09:15.629239 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 15:09:15.633332 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 15:09:15.654826 systemd-journald[1218]: Time spent on flushing to /var/log/journal/0dec5f1e29cd46a284e133387715869b is 70.752ms for 1193 entries.
Jan 20 15:09:15.654826 systemd-journald[1218]: System Journal (/var/log/journal/0dec5f1e29cd46a284e133387715869b) is 8M, max 163.5M, 155.5M free.
Jan 20 15:09:15.745541 systemd-journald[1218]: Received client request to flush runtime journal.
Jan 20 15:09:15.745615 kernel: audit: type=1130 audit(1768921755.673:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:15.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:15.642346 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 15:09:15.647475 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 15:09:15.653091 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 15:09:15.659873 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 15:09:15.664817 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 15:09:15.738848 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 15:09:15.788973 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 15:09:15.798838 kernel: loop1: detected capacity change from 0 to 171112
Jan 20 15:09:15.794321 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 15:09:15.810602 kernel: audit: type=1130 audit(1768921755.801:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:15.810660 kernel: loop1: p1 p2 p3
Jan 20 15:09:15.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:15.819256 kernel: audit: type=1130 audit(1768921755.814:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:15.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:15.811452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 15:09:15.857413 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Jan 20 15:09:15.857429 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Jan 20 15:09:15.864636 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 15:09:15.866785 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 15:09:15.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:15.871569 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 15:09:15.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:15.878102 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 15:09:15.977211 kernel: erofs: (device loop1p1): mounted with root inode @ nid 39.
Jan 20 15:09:15.998211 kernel: loop2: detected capacity change from 0 to 375256
Jan 20 15:09:16.003262 kernel: loop2: p1 p2 p3
Jan 20 15:09:16.030671 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 15:09:16.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:16.038000 audit: BPF prog-id=18 op=LOAD
Jan 20 15:09:16.038000 audit: BPF prog-id=19 op=LOAD
Jan 20 15:09:16.039000 audit: BPF prog-id=20 op=LOAD
Jan 20 15:09:16.040584 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 20 15:09:16.046000 audit: BPF prog-id=21 op=LOAD
Jan 20 15:09:16.052212 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39.
Jan 20 15:09:16.050431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 15:09:16.057421 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 15:09:16.065000 audit: BPF prog-id=22 op=LOAD
Jan 20 15:09:16.065000 audit: BPF prog-id=23 op=LOAD
Jan 20 15:09:16.065000 audit: BPF prog-id=24 op=LOAD
Jan 20 15:09:16.068495 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 20 15:09:16.074000 audit: BPF prog-id=25 op=LOAD
Jan 20 15:09:16.075000 audit: BPF prog-id=26 op=LOAD
Jan 20 15:09:16.075000 audit: BPF prog-id=27 op=LOAD
Jan 20 15:09:16.078389 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 15:09:16.097209 kernel: loop3: detected capacity change from 0 to 224512
Jan 20 15:09:16.115826 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Jan 20 15:09:16.115856 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Jan 20 15:09:16.126525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 15:09:16.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:16.150185 kernel: loop4: detected capacity change from 0 to 171112
Jan 20 15:09:16.152208 kernel: loop4: p1 p2 p3
Jan 20 15:09:16.156603 systemd-nsresourced[1289]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 20 15:09:16.157915 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 20 15:09:16.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:16.231583 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 15:09:16.242576 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Jan 20 15:09:16.243169 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL)
Jan 20 15:09:16.243213 kernel: device-mapper: ioctl: error adding target to table
Jan 20 15:09:16.240306 (sd-merge)[1295]: device-mapper: reload ioctl on 8c7c96915202989b4a0dcbd1acd80ba2f75612a91a267e360f9baafdceea3d6f-verity (253:1) failed: Invalid argument
Jan 20 15:09:16.303992 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 15:09:16.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:16.316312 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 15:09:16.392379 systemd-oomd[1285]: No swap; memory pressure usage will be degraded
Jan 20 15:09:16.393703 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 20 15:09:16.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:16.414472 systemd-resolved[1287]: Positive Trust Anchors:
Jan 20 15:09:16.414510 systemd-resolved[1287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 15:09:16.414515 systemd-resolved[1287]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 20 15:09:16.414542 systemd-resolved[1287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 15:09:16.425590 systemd-resolved[1287]: Defaulting to hostname 'linux'.
Jan 20 15:09:16.427818 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 15:09:16.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:16.431527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 15:09:16.667479 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 15:09:16.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:16.672000 audit: BPF prog-id=8 op=UNLOAD
Jan 20 15:09:16.672000 audit: BPF prog-id=7 op=UNLOAD
Jan 20 15:09:16.674000 audit: BPF prog-id=28 op=LOAD
Jan 20 15:09:16.674000 audit: BPF prog-id=29 op=LOAD
Jan 20 15:09:16.675906 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 15:09:16.757476 systemd-udevd[1314]: Using default interface naming scheme 'v257'.
Jan 20 15:09:16.789092 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 15:09:16.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:16.795000 audit: BPF prog-id=30 op=LOAD
Jan 20 15:09:16.797492 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 15:09:16.959385 systemd-networkd[1319]: lo: Link UP
Jan 20 15:09:16.959423 systemd-networkd[1319]: lo: Gained carrier
Jan 20 15:09:16.961486 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 15:09:16.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:16.965528 systemd[1]: Reached target network.target - Network.
Jan 20 15:09:16.973329 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 20 15:09:16.980346 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 20 15:09:16.993560 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 15:09:16.998367 systemd-networkd[1319]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 15:09:16.999221 systemd-networkd[1319]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 15:09:17.000723 systemd-networkd[1319]: eth0: Link UP
Jan 20 15:09:17.000925 systemd-networkd[1319]: eth0: Gained carrier
Jan 20 15:09:17.000941 systemd-networkd[1319]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 15:09:17.012183 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 15:09:17.025665 systemd-networkd[1319]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 20 15:09:17.030197 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 20 15:09:17.043968 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 20 15:09:17.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:17.054537 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 15:09:17.058299 kernel: ACPI: button: Power Button [PWRF]
Jan 20 15:09:17.064200 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 15:09:17.073550 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 20 15:09:17.074054 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 15:09:17.078196 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 15:09:17.124494 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 15:09:17.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:17.279804 kernel: hrtimer: interrupt took 4504492 ns
Jan 20 15:09:17.583570 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 15:09:17.598016 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 15:09:17.598459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 15:09:17.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:17.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:17.606319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 15:09:17.770345 kernel: erofs: (device dm-1): mounted with root inode @ nid 39.
Jan 20 15:09:17.770533 kernel: loop5: detected capacity change from 0 to 375256
Jan 20 15:09:17.770569 kernel: loop5: p1 p2 p3
Jan 20 15:09:17.861848 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 15:09:17.870673 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Jan 20 15:09:17.871186 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL)
Jan 20 15:09:17.871243 kernel: device-mapper: ioctl: error adding target to table
Jan 20 15:09:17.857579 (sd-merge)[1295]: device-mapper: reload ioctl on 843577122f2bcae09e086c1955c04b6b28388e52152c2016187e408266e84aa6-verity (253:2) failed: Invalid argument
Jan 20 15:09:17.897249 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 20 15:09:17.902894 kernel: kvm_amd: TSC scaling supported
Jan 20 15:09:17.902950 kernel: kvm_amd: Nested Virtualization enabled
Jan 20 15:09:17.902977 kernel: kvm_amd: Nested Paging enabled
Jan 20 15:09:17.904559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 15:09:17.906188 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 20 15:09:17.907370 kernel: kvm_amd: PMU virtualization is disabled
Jan 20 15:09:17.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:17.989520 kernel: EDAC MC: Ver: 3.0.0
Jan 20 15:09:17.999428 kernel: erofs: (device dm-2): mounted with root inode @ nid 39.
Jan 20 15:09:18.005221 kernel: loop6: detected capacity change from 0 to 224512
Jan 20 15:09:18.027454 (sd-merge)[1295]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 20 15:09:18.036859 (sd-merge)[1295]: Merged extensions into '/usr'.
Jan 20 15:09:18.041844 systemd[1]: Reload requested from client PID 1264 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 15:09:18.041880 systemd[1]: Reloading...
Jan 20 15:09:18.250364 zram_generator::config[1440]: No configuration found.
Jan 20 15:09:18.695650 systemd[1]: Reloading finished in 652 ms.
Jan 20 15:09:18.733836 systemd-networkd[1319]: eth0: Gained IPv6LL
Jan 20 15:09:18.740006 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 20 15:09:18.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:18.745539 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 15:09:18.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:18.757915 systemd[1]: Reached target network-online.target - Network is Online.
Jan 20 15:09:18.774542 systemd[1]: Starting ensure-sysext.service...
Jan 20 15:09:18.779924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 15:09:18.786000 audit: BPF prog-id=31 op=LOAD
Jan 20 15:09:18.786000 audit: BPF prog-id=18 op=UNLOAD
Jan 20 15:09:18.786000 audit: BPF prog-id=32 op=LOAD
Jan 20 15:09:18.786000 audit: BPF prog-id=33 op=LOAD
Jan 20 15:09:18.786000 audit: BPF prog-id=19 op=UNLOAD
Jan 20 15:09:18.786000 audit: BPF prog-id=20 op=UNLOAD
Jan 20 15:09:18.787000 audit: BPF prog-id=34 op=LOAD
Jan 20 15:09:18.787000 audit: BPF prog-id=22 op=UNLOAD
Jan 20 15:09:18.787000 audit: BPF prog-id=35 op=LOAD
Jan 20 15:09:18.787000 audit: BPF prog-id=36 op=LOAD
Jan 20 15:09:18.788000 audit: BPF prog-id=23 op=UNLOAD
Jan 20 15:09:18.788000 audit: BPF prog-id=24 op=UNLOAD
Jan 20 15:09:18.788000 audit: BPF prog-id=37 op=LOAD
Jan 20 15:09:18.789000 audit: BPF prog-id=25 op=UNLOAD
Jan 20 15:09:18.789000 audit: BPF prog-id=38 op=LOAD
Jan 20 15:09:18.789000 audit: BPF prog-id=39 op=LOAD
Jan 20 15:09:18.789000 audit: BPF prog-id=26 op=UNLOAD
Jan 20 15:09:18.789000 audit: BPF prog-id=27 op=UNLOAD
Jan 20 15:09:18.791000 audit: BPF prog-id=40 op=LOAD
Jan 20 15:09:18.791000 audit: BPF prog-id=15 op=UNLOAD
Jan 20 15:09:18.791000 audit: BPF prog-id=41 op=LOAD
Jan 20 15:09:18.791000 audit: BPF prog-id=42 op=LOAD
Jan 20 15:09:18.791000 audit: BPF prog-id=16 op=UNLOAD
Jan 20 15:09:18.791000 audit: BPF prog-id=17 op=UNLOAD
Jan 20 15:09:18.792000 audit: BPF prog-id=43 op=LOAD
Jan 20 15:09:18.792000 audit: BPF prog-id=21 op=UNLOAD
Jan 20 15:09:18.793000 audit: BPF prog-id=44 op=LOAD
Jan 20 15:09:18.815000 audit: BPF prog-id=45 op=LOAD
Jan 20 15:09:18.815000 audit: BPF prog-id=28 op=UNLOAD
Jan 20 15:09:18.815000 audit: BPF prog-id=29 op=UNLOAD
Jan 20 15:09:18.816000 audit: BPF prog-id=46 op=LOAD
Jan 20 15:09:18.816000 audit: BPF prog-id=30 op=UNLOAD
Jan 20 15:09:18.824397 systemd[1]: Reload requested from client PID 1453 ('systemctl') (unit ensure-sysext.service)...
Jan 20 15:09:18.824449 systemd[1]: Reloading...
Jan 20 15:09:18.841621 systemd-tmpfiles[1454]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 15:09:18.841687 systemd-tmpfiles[1454]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 15:09:18.841980 systemd-tmpfiles[1454]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 15:09:18.843704 systemd-tmpfiles[1454]: ACLs are not supported, ignoring.
Jan 20 15:09:18.843823 systemd-tmpfiles[1454]: ACLs are not supported, ignoring.
Jan 20 15:09:18.851854 systemd-tmpfiles[1454]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 15:09:18.851891 systemd-tmpfiles[1454]: Skipping /boot
Jan 20 15:09:18.870552 systemd-tmpfiles[1454]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 15:09:18.870565 systemd-tmpfiles[1454]: Skipping /boot
Jan 20 15:09:18.929224 zram_generator::config[1499]: No configuration found.
Jan 20 15:09:19.230538 systemd[1]: Reloading finished in 405 ms.
Jan 20 15:09:19.263000 audit: BPF prog-id=47 op=LOAD
Jan 20 15:09:19.263000 audit: BPF prog-id=40 op=UNLOAD
Jan 20 15:09:19.263000 audit: BPF prog-id=48 op=LOAD
Jan 20 15:09:19.263000 audit: BPF prog-id=49 op=LOAD
Jan 20 15:09:19.263000 audit: BPF prog-id=41 op=UNLOAD
Jan 20 15:09:19.263000 audit: BPF prog-id=42 op=UNLOAD
Jan 20 15:09:19.264000 audit: BPF prog-id=50 op=LOAD
Jan 20 15:09:19.264000 audit: BPF prog-id=37 op=UNLOAD
Jan 20 15:09:19.264000 audit: BPF prog-id=51 op=LOAD
Jan 20 15:09:19.264000 audit: BPF prog-id=52 op=LOAD
Jan 20 15:09:19.264000 audit: BPF prog-id=38 op=UNLOAD
Jan 20 15:09:19.264000 audit: BPF prog-id=39 op=UNLOAD
Jan 20 15:09:19.264000 audit: BPF prog-id=53 op=LOAD
Jan 20 15:09:19.264000 audit: BPF prog-id=34 op=UNLOAD
Jan 20 15:09:19.264000 audit: BPF prog-id=54 op=LOAD
Jan 20 15:09:19.265000 audit: BPF prog-id=55 op=LOAD
Jan 20 15:09:19.265000 audit: BPF prog-id=35 op=UNLOAD
Jan 20 15:09:19.265000 audit: BPF prog-id=36 op=UNLOAD
Jan 20 15:09:19.266000 audit: BPF prog-id=56 op=LOAD
Jan 20 15:09:19.266000 audit: BPF prog-id=46 op=UNLOAD
Jan 20 15:09:19.267000 audit: BPF prog-id=57 op=LOAD
Jan 20 15:09:19.274000 audit: BPF prog-id=43 op=UNLOAD
Jan 20 15:09:19.275000 audit: BPF prog-id=58 op=LOAD
Jan 20 15:09:19.275000 audit: BPF prog-id=59 op=LOAD
Jan 20 15:09:19.275000 audit: BPF prog-id=44 op=UNLOAD
Jan 20 15:09:19.275000 audit: BPF prog-id=45 op=UNLOAD
Jan 20 15:09:19.276000 audit: BPF prog-id=60 op=LOAD
Jan 20 15:09:19.276000 audit: BPF prog-id=31 op=UNLOAD
Jan 20 15:09:19.276000 audit: BPF prog-id=61 op=LOAD
Jan 20 15:09:19.276000 audit: BPF prog-id=62 op=LOAD
Jan 20 15:09:19.276000 audit: BPF prog-id=32 op=UNLOAD
Jan 20 15:09:19.277000 audit: BPF prog-id=33 op=UNLOAD
Jan 20 15:09:19.281463 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 15:09:19.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:19.296919 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 15:09:19.302455 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 15:09:19.327426 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 15:09:19.331996 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 15:09:19.337692 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 15:09:19.346593 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 15:09:19.346841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 15:09:19.349575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 15:09:19.350000 audit[1535]: SYSTEM_BOOT pid=1535 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:19.355711 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 15:09:19.367532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 15:09:19.371320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 15:09:19.371525 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 15:09:19.371613 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 15:09:19.371687 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 15:09:19.372884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 15:09:19.373698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 15:09:19.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:19.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:19.387723 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 15:09:19.390374 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 15:09:19.392465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 15:09:19.396263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 15:09:19.396548 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 15:09:19.396729 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 15:09:19.396861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 15:09:19.399053 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 15:09:19.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:19.407531 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 15:09:19.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:19.412314 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 15:09:19.412592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 15:09:19.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:19.417916 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 15:09:19.418416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 15:09:19.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:09:19.419000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 20 15:09:19.419000 audit[1554]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd959e9cb0 a2=420 a3=0 items=0 ppid=1525 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 15:09:19.419000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 15:09:19.419809 augenrules[1554]: No rules
Jan 20 15:09:19.422827 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 15:09:19.423377 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 15:09:19.427595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 15:09:19.427891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 15:09:19.442728 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 15:09:19.444466 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 15:09:19.447844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 15:09:19.451391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 15:09:19.606839 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 15:09:19.610815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 15:09:19.616382 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 15:09:19.620413 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 15:09:19.620590 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 15:09:19.620678 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 15:09:19.620770 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 15:09:19.628458 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 15:09:19.633029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 15:09:19.633424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 15:09:19.637922 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 15:09:19.639797 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 15:09:19.643713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 15:09:19.643922 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 15:09:19.650886 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 15:09:19.651256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 15:09:19.652458 augenrules[1565]: /sbin/augenrules: No change Jan 20 15:09:19.658630 systemd[1]: Finished ensure-sysext.service. 
Jan 20 15:09:19.662000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 15:09:19.662000 audit[1589]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe6826fef0 a2=420 a3=0 items=0 ppid=1565 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:19.662000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 15:09:19.664000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 15:09:19.664000 audit[1589]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe68272380 a2=420 a3=0 items=0 ppid=1565 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:19.664000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 15:09:19.664479 augenrules[1589]: No rules Jan 20 15:09:19.665983 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 15:09:19.666492 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 15:09:19.670899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 15:09:19.671012 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 15:09:19.673059 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 20 15:09:19.676873 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 15:09:19.765382 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 15:09:21.303088 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 15:09:21.303113 systemd-timesyncd[1598]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 15:09:21.303124 systemd-resolved[1287]: Clock change detected. Flushing caches. Jan 20 15:09:21.303163 systemd-timesyncd[1598]: Initial clock synchronization to Tue 2026-01-20 15:09:21.302971 UTC. Jan 20 15:09:21.746712 ldconfig[1527]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 15:09:21.754696 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 15:09:21.760984 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 15:09:21.790549 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 15:09:21.795638 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 15:09:21.801313 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 15:09:21.807582 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 15:09:21.814417 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 15:09:21.820587 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 15:09:21.824423 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 15:09:21.828312 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. 
Jan 20 15:09:21.832549 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 15:09:21.836197 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 15:09:21.839991 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 15:09:21.840065 systemd[1]: Reached target paths.target - Path Units. Jan 20 15:09:21.842818 systemd[1]: Reached target timers.target - Timer Units. Jan 20 15:09:21.847652 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 15:09:21.853098 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 15:09:21.858240 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 15:09:21.862195 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 15:09:21.866169 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 15:09:21.872639 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 15:09:21.876144 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 15:09:21.880465 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 15:09:21.884616 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 15:09:21.887568 systemd[1]: Reached target basic.target - Basic System. Jan 20 15:09:21.890479 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 15:09:21.890529 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 15:09:21.891823 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 15:09:21.896135 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Jan 20 15:09:21.905944 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 15:09:21.909995 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 15:09:21.914467 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 15:09:21.918739 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 15:09:21.921779 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 15:09:21.922994 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 15:09:21.928122 jq[1611]: false Jan 20 15:09:21.928587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:09:21.935388 google_oslogin_nss_cache[1613]: oslogin_cache_refresh[1613]: Refreshing passwd entry cache Jan 20 15:09:21.935594 oslogin_cache_refresh[1613]: Refreshing passwd entry cache Jan 20 15:09:21.936046 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 15:09:21.941999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 15:09:21.949176 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 15:09:21.955311 google_oslogin_nss_cache[1613]: oslogin_cache_refresh[1613]: Failure getting users, quitting Jan 20 15:09:21.955311 google_oslogin_nss_cache[1613]: oslogin_cache_refresh[1613]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 15:09:21.955276 oslogin_cache_refresh[1613]: Failure getting users, quitting Jan 20 15:09:21.955488 google_oslogin_nss_cache[1613]: oslogin_cache_refresh[1613]: Refreshing group entry cache Jan 20 15:09:21.955305 oslogin_cache_refresh[1613]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 20 15:09:21.955371 oslogin_cache_refresh[1613]: Refreshing group entry cache Jan 20 15:09:21.957079 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 15:09:21.957691 extend-filesystems[1612]: Found /dev/vda6 Jan 20 15:09:21.966922 extend-filesystems[1612]: Found /dev/vda9 Jan 20 15:09:21.971324 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 15:09:21.976152 google_oslogin_nss_cache[1613]: oslogin_cache_refresh[1613]: Failure getting groups, quitting Jan 20 15:09:21.976152 google_oslogin_nss_cache[1613]: oslogin_cache_refresh[1613]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 15:09:21.976132 oslogin_cache_refresh[1613]: Failure getting groups, quitting Jan 20 15:09:21.976148 oslogin_cache_refresh[1613]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 15:09:21.978510 extend-filesystems[1612]: Checking size of /dev/vda9 Jan 20 15:09:21.982057 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 15:09:21.985838 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 15:09:21.986389 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 15:09:21.987183 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 15:09:21.990985 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 20 15:09:22.002968 extend-filesystems[1612]: Resized partition /dev/vda9 Jan 20 15:09:22.006775 extend-filesystems[1645]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 15:09:22.025812 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 20 15:09:22.026142 jq[1638]: true Jan 20 15:09:22.009164 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 15:09:22.014973 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 15:09:22.015378 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 15:09:22.015742 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 15:09:22.016182 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 15:09:22.028983 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 15:09:22.029444 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 15:09:22.038380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 15:09:22.044765 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 15:09:22.045207 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 15:09:22.067915 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 20 15:09:22.077748 update_engine[1636]: I20260120 15:09:22.077554 1636 main.cc:92] Flatcar Update Engine starting Jan 20 15:09:22.091207 jq[1657]: true Jan 20 15:09:22.081329 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 20 15:09:22.081623 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 15:09:22.090660 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 20 15:09:22.093133 extend-filesystems[1645]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 15:09:22.093133 extend-filesystems[1645]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 15:09:22.093133 extend-filesystems[1645]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 20 15:09:22.116175 extend-filesystems[1612]: Resized filesystem in /dev/vda9 Jan 20 15:09:22.094258 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 15:09:22.119807 tar[1656]: linux-amd64/LICENSE Jan 20 15:09:22.095952 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 15:09:22.120489 tar[1656]: linux-amd64/helm Jan 20 15:09:22.181216 bash[1700]: Updated "/home/core/.ssh/authorized_keys" Jan 20 15:09:22.182474 systemd-logind[1631]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 15:09:22.182497 systemd-logind[1631]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 15:09:22.183467 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 15:09:22.188806 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 15:09:22.189053 systemd-logind[1631]: New seat seat0. Jan 20 15:09:22.194465 dbus-daemon[1609]: [system] SELinux support is enabled Jan 20 15:09:22.195150 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 15:09:22.206994 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 15:09:22.210404 update_engine[1636]: I20260120 15:09:22.209059 1636 update_check_scheduler.cc:74] Next update check in 9m36s Jan 20 15:09:22.211152 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 20 15:09:22.217346 dbus-daemon[1609]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 15:09:22.211182 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 15:09:22.215052 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 15:09:22.215073 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 15:09:22.219103 systemd[1]: Started update-engine.service - Update Engine. Jan 20 15:09:22.229073 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 15:09:22.365207 locksmithd[1704]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 15:09:22.446444 containerd[1658]: time="2026-01-20T15:09:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 15:09:22.449476 containerd[1658]: time="2026-01-20T15:09:22.449335682Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 15:09:22.463559 containerd[1658]: time="2026-01-20T15:09:22.463525599Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.997µs" Jan 20 15:09:22.464215 containerd[1658]: time="2026-01-20T15:09:22.463631125Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 15:09:22.464215 containerd[1658]: time="2026-01-20T15:09:22.463670018Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 15:09:22.464215 containerd[1658]: time="2026-01-20T15:09:22.463682301Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt 
type=io.containerd.internal.v1 Jan 20 15:09:22.464215 containerd[1658]: time="2026-01-20T15:09:22.463837862Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 15:09:22.464215 containerd[1658]: time="2026-01-20T15:09:22.463915036Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 15:09:22.464215 containerd[1658]: time="2026-01-20T15:09:22.463981620Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 15:09:22.464215 containerd[1658]: time="2026-01-20T15:09:22.463992551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 15:09:22.464558 containerd[1658]: time="2026-01-20T15:09:22.464537568Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 15:09:22.464603 containerd[1658]: time="2026-01-20T15:09:22.464591770Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 15:09:22.464655 containerd[1658]: time="2026-01-20T15:09:22.464642635Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 15:09:22.464693 containerd[1658]: time="2026-01-20T15:09:22.464683181Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 15:09:22.465498 containerd[1658]: time="2026-01-20T15:09:22.465478916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 15:09:22.465644 containerd[1658]: 
time="2026-01-20T15:09:22.465627564Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 15:09:22.466146 containerd[1658]: time="2026-01-20T15:09:22.466127066Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 15:09:22.466225 containerd[1658]: time="2026-01-20T15:09:22.466211694Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 15:09:22.466283 containerd[1658]: time="2026-01-20T15:09:22.466271466Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 15:09:22.467180 containerd[1658]: time="2026-01-20T15:09:22.466692583Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 15:09:22.467781 containerd[1658]: time="2026-01-20T15:09:22.467763873Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 15:09:22.467957 containerd[1658]: time="2026-01-20T15:09:22.467940183Z" level=info msg="metadata content store policy set" policy=shared Jan 20 15:09:22.475142 sshd_keygen[1635]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476180159Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476238639Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476303560Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs 
type=io.containerd.differ.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476314760Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476345387Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476357461Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476374762Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476388929Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476399950Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476411762Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476422422Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476448791Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476467085Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 15:09:22.476782 containerd[1658]: time="2026-01-20T15:09:22.476481492Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476604271Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476628917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476641110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476651239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476661808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476671587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476682027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476697725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476710419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476721079Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 15:09:22.477300 containerd[1658]: time="2026-01-20T15:09:22.476730867Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 15:09:22.477956 containerd[1658]: 
time="2026-01-20T15:09:22.477627421Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 15:09:22.477956 containerd[1658]: time="2026-01-20T15:09:22.477811506Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 15:09:22.477956 containerd[1658]: time="2026-01-20T15:09:22.477838476Z" level=info msg="Start snapshots syncer" Jan 20 15:09:22.480430 containerd[1658]: time="2026-01-20T15:09:22.479950760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 15:09:22.480430 containerd[1658]: time="2026-01-20T15:09:22.480309891Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImag
eDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 15:09:22.480670 containerd[1658]: time="2026-01-20T15:09:22.480359023Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 15:09:22.482368 containerd[1658]: time="2026-01-20T15:09:22.482301360Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 15:09:22.482538 containerd[1658]: time="2026-01-20T15:09:22.482521230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 15:09:22.482831 containerd[1658]: time="2026-01-20T15:09:22.482813797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 15:09:22.482968 containerd[1658]: time="2026-01-20T15:09:22.482954109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 15:09:22.483176 containerd[1658]: time="2026-01-20T15:09:22.483082939Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 15:09:22.483176 containerd[1658]: time="2026-01-20T15:09:22.483098107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 15:09:22.483176 containerd[1658]: time="2026-01-20T15:09:22.483108276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 15:09:22.483176 containerd[1658]: 
time="2026-01-20T15:09:22.483118876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 15:09:22.483383 containerd[1658]: time="2026-01-20T15:09:22.483295686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 15:09:22.483383 containerd[1658]: time="2026-01-20T15:09:22.483323849Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 15:09:22.483719 containerd[1658]: time="2026-01-20T15:09:22.483641512Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484133120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484153759Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484165651Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484173736Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484184105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484193533Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484211296Z" level=info msg="runtime interface created" 
Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484216526Z" level=info msg="created NRI interface" Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484224160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484235561Z" level=info msg="Connect containerd service" Jan 20 15:09:22.484303 containerd[1658]: time="2026-01-20T15:09:22.484253825Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 15:09:22.488669 containerd[1658]: time="2026-01-20T15:09:22.488568723Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 15:09:22.511818 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 15:09:22.519941 tar[1656]: linux-amd64/README.md Jan 20 15:09:22.522725 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 15:09:22.541384 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 15:09:22.545956 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 15:09:22.546371 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 15:09:22.558243 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 15:09:22.595132 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 15:09:22.602443 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 15:09:22.609492 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 15:09:22.613779 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 20 15:09:22.669065 containerd[1658]: time="2026-01-20T15:09:22.668925865Z" level=info msg="Start subscribing containerd event" Jan 20 15:09:22.669065 containerd[1658]: time="2026-01-20T15:09:22.669002238Z" level=info msg="Start recovering state" Jan 20 15:09:22.670092 containerd[1658]: time="2026-01-20T15:09:22.669951938Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 15:09:22.670184 containerd[1658]: time="2026-01-20T15:09:22.670133948Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 15:09:22.670674 containerd[1658]: time="2026-01-20T15:09:22.670628825Z" level=info msg="Start event monitor" Jan 20 15:09:22.670725 containerd[1658]: time="2026-01-20T15:09:22.670676104Z" level=info msg="Start cni network conf syncer for default" Jan 20 15:09:22.670725 containerd[1658]: time="2026-01-20T15:09:22.670685922Z" level=info msg="Start streaming server" Jan 20 15:09:22.670725 containerd[1658]: time="2026-01-20T15:09:22.670696371Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 15:09:22.670725 containerd[1658]: time="2026-01-20T15:09:22.670709796Z" level=info msg="runtime interface starting up..." Jan 20 15:09:22.670725 containerd[1658]: time="2026-01-20T15:09:22.670719795Z" level=info msg="starting plugins..." Jan 20 15:09:22.670826 containerd[1658]: time="2026-01-20T15:09:22.670741325Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 15:09:22.673160 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 15:09:22.677648 containerd[1658]: time="2026-01-20T15:09:22.677149191Z" level=info msg="containerd successfully booted in 0.231075s" Jan 20 15:09:23.135166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:09:23.139512 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 15:09:23.143511 systemd[1]: Startup finished in 4.565s (kernel) + 9.493s (initrd) + 7.851s (userspace) = 21.911s. 
Jan 20 15:09:23.156370 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 15:09:23.661213 kubelet[1757]: E0120 15:09:23.661119 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 15:09:23.664519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 15:09:23.664760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 15:09:23.665372 systemd[1]: kubelet.service: Consumed 950ms CPU time, 264.8M memory peak. Jan 20 15:09:29.190488 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 15:09:29.192111 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:52456.service - OpenSSH per-connection server daemon (10.0.0.1:52456). Jan 20 15:09:29.657673 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 52456 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:09:29.660355 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:09:29.689071 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 15:09:29.691330 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 15:09:29.698778 systemd-logind[1631]: New session 1 of user core. Jan 20 15:09:29.725547 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 15:09:29.729546 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 20 15:09:29.763637 (systemd)[1776]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:09:29.767997 systemd-logind[1631]: New session 2 of user core. Jan 20 15:09:30.069598 systemd[1776]: Queued start job for default target default.target. Jan 20 15:09:30.086582 systemd[1776]: Created slice app.slice - User Application Slice. Jan 20 15:09:30.086645 systemd[1776]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 15:09:30.086660 systemd[1776]: Reached target paths.target - Paths. Jan 20 15:09:30.086745 systemd[1776]: Reached target timers.target - Timers. Jan 20 15:09:30.089177 systemd[1776]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 15:09:30.090373 systemd[1776]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 15:09:30.103781 systemd[1776]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 15:09:30.104078 systemd[1776]: Reached target sockets.target - Sockets. Jan 20 15:09:30.109574 systemd[1776]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 15:09:30.109736 systemd[1776]: Reached target basic.target - Basic System. Jan 20 15:09:30.109831 systemd[1776]: Reached target default.target - Main User Target. Jan 20 15:09:30.109950 systemd[1776]: Startup finished in 334ms. Jan 20 15:09:30.110203 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 15:09:30.124218 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 15:09:30.158582 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:52472.service - OpenSSH per-connection server daemon (10.0.0.1:52472). 
Jan 20 15:09:30.225304 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 52472 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:09:30.227455 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:09:30.239210 systemd-logind[1631]: New session 3 of user core. Jan 20 15:09:30.262773 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 15:09:30.311605 sshd[1794]: Connection closed by 10.0.0.1 port 52472 Jan 20 15:09:30.312661 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Jan 20 15:09:30.339398 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:52472.service: Deactivated successfully. Jan 20 15:09:30.342002 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 15:09:30.343071 systemd-logind[1631]: Session 3 logged out. Waiting for processes to exit. Jan 20 15:09:30.349734 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:52488.service - OpenSSH per-connection server daemon (10.0.0.1:52488). Jan 20 15:09:30.351066 systemd-logind[1631]: Removed session 3. Jan 20 15:09:30.455486 sshd[1800]: Accepted publickey for core from 10.0.0.1 port 52488 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:09:30.460194 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:09:30.472087 systemd-logind[1631]: New session 4 of user core. Jan 20 15:09:30.487264 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 15:09:30.511595 sshd[1804]: Connection closed by 10.0.0.1 port 52488 Jan 20 15:09:30.511940 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Jan 20 15:09:30.530314 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:52488.service: Deactivated successfully. Jan 20 15:09:30.534321 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 15:09:30.538535 systemd-logind[1631]: Session 4 logged out. Waiting for processes to exit. 
Jan 20 15:09:30.541277 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:52502.service - OpenSSH per-connection server daemon (10.0.0.1:52502). Jan 20 15:09:30.542965 systemd-logind[1631]: Removed session 4. Jan 20 15:09:30.698286 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 52502 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:09:30.700685 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:09:30.708355 systemd-logind[1631]: New session 5 of user core. Jan 20 15:09:30.724120 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 15:09:30.741948 sshd[1815]: Connection closed by 10.0.0.1 port 52502 Jan 20 15:09:30.742573 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Jan 20 15:09:30.753626 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:52502.service: Deactivated successfully. Jan 20 15:09:30.755606 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 15:09:30.756797 systemd-logind[1631]: Session 5 logged out. Waiting for processes to exit. Jan 20 15:09:30.760146 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:52518.service - OpenSSH per-connection server daemon (10.0.0.1:52518). Jan 20 15:09:30.760821 systemd-logind[1631]: Removed session 5. Jan 20 15:09:30.818059 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 52518 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:09:30.819942 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:09:30.827114 systemd-logind[1631]: New session 6 of user core. Jan 20 15:09:30.837140 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 20 15:09:30.868254 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 15:09:30.868634 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 15:09:30.895600 sudo[1826]: pam_unix(sudo:session): session closed for user root Jan 20 15:09:30.897938 sshd[1825]: Connection closed by 10.0.0.1 port 52518 Jan 20 15:09:30.898528 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Jan 20 15:09:30.913300 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:52518.service: Deactivated successfully. Jan 20 15:09:30.915370 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 15:09:30.916705 systemd-logind[1631]: Session 6 logged out. Waiting for processes to exit. Jan 20 15:09:30.919648 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:52522.service - OpenSSH per-connection server daemon (10.0.0.1:52522). Jan 20 15:09:30.920570 systemd-logind[1631]: Removed session 6. Jan 20 15:09:31.071053 sshd[1833]: Accepted publickey for core from 10.0.0.1 port 52522 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:09:31.073192 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:09:31.083672 systemd-logind[1631]: New session 7 of user core. Jan 20 15:09:31.104174 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 20 15:09:31.134738 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 15:09:31.135427 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 15:09:31.140004 sudo[1840]: pam_unix(sudo:session): session closed for user root Jan 20 15:09:31.154374 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 15:09:31.155003 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 15:09:31.171000 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 15:09:31.255000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 15:09:31.258146 augenrules[1864]: No rules Jan 20 15:09:31.260146 kernel: kauditd_printk_skb: 114 callbacks suppressed Jan 20 15:09:31.260218 kernel: audit: type=1305 audit(1768921771.255:237): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 15:09:31.260350 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 15:09:31.260795 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 20 15:09:31.262940 sudo[1839]: pam_unix(sudo:session): session closed for user root Jan 20 15:09:31.267139 kernel: audit: type=1300 audit(1768921771.255:237): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3b7bac50 a2=420 a3=0 items=0 ppid=1845 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:31.255000 audit[1864]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3b7bac50 a2=420 a3=0 items=0 ppid=1845 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:31.267717 sshd[1838]: Connection closed by 10.0.0.1 port 52522 Jan 20 15:09:31.268292 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Jan 20 15:09:31.255000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 15:09:31.284356 kernel: audit: type=1327 audit(1768921771.255:237): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 15:09:31.284513 kernel: audit: type=1130 audit(1768921771.259:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.292369 kernel: audit: type=1131 audit(1768921771.259:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 15:09:31.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.300430 kernel: audit: type=1106 audit(1768921771.261:240): pid=1839 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.261000 audit[1839]: USER_END pid=1839 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.309521 kernel: audit: type=1104 audit(1768921771.261:241): pid=1839 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.261000 audit[1839]: CRED_DISP pid=1839 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 15:09:31.269000 audit[1833]: USER_END pid=1833 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:09:31.328952 kernel: audit: type=1106 audit(1768921771.269:242): pid=1833 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:09:31.329126 kernel: audit: type=1104 audit(1768921771.269:243): pid=1833 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:09:31.269000 audit[1833]: CRED_DISP pid=1833 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:09:31.365498 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:52522.service: Deactivated successfully. Jan 20 15:09:31.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.116:22-10.0.0.1:52522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.370189 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 15:09:31.371435 systemd-logind[1631]: Session 7 logged out. Waiting for processes to exit. 
Jan 20 15:09:31.377821 kernel: audit: type=1131 audit(1768921771.364:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.116:22-10.0.0.1:52522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.385468 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:52530.service - OpenSSH per-connection server daemon (10.0.0.1:52530). Jan 20 15:09:31.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:52530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.386606 systemd-logind[1631]: Removed session 7. Jan 20 15:09:31.506000 audit[1873]: USER_ACCT pid=1873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:09:31.510248 sshd[1873]: Accepted publickey for core from 10.0.0.1 port 52530 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:09:31.509000 audit[1873]: CRED_ACQ pid=1873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:09:31.509000 audit[1873]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8b7b2e10 a2=3 a3=0 items=0 ppid=1 pid=1873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:31.509000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:09:31.512210 sshd-session[1873]: pam_unix(sshd:session): session opened 
for user core(uid=500) by core(uid=0) Jan 20 15:09:31.521545 systemd-logind[1631]: New session 8 of user core. Jan 20 15:09:31.531566 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 15:09:31.537000 audit[1873]: USER_START pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:09:31.544000 audit[1877]: CRED_ACQ pid=1877 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:09:31.572000 audit[1878]: USER_ACCT pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.573000 audit[1878]: CRED_REFR pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.574973 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 15:09:31.573000 audit[1878]: USER_START pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 15:09:31.575598 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 15:09:33.776359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 15:09:33.779715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 15:09:34.176146 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1356943201 wd_nsec: 1356942805 Jan 20 15:09:35.157450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:09:35.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:35.184698 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 15:09:35.697287 kubelet[1906]: E0120 15:09:35.696137 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 15:09:35.701814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 15:09:35.702128 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 15:09:35.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 15:09:35.702660 systemd[1]: kubelet.service: Consumed 1.948s CPU time, 109.7M memory peak. Jan 20 15:09:36.874474 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 20 15:09:36.899391 (dockerd)[1916]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 15:09:40.630364 dockerd[1916]: time="2026-01-20T15:09:40.629665467Z" level=info msg="Starting up" Jan 20 15:09:40.637440 dockerd[1916]: time="2026-01-20T15:09:40.637326320Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 15:09:40.689789 dockerd[1916]: time="2026-01-20T15:09:40.689646755Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 15:09:40.818810 dockerd[1916]: time="2026-01-20T15:09:40.818217856Z" level=info msg="Loading containers: start." Jan 20 15:09:40.847914 kernel: Initializing XFRM netlink socket Jan 20 15:09:40.964000 audit[1969]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:40.970437 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 20 15:09:40.971452 kernel: audit: type=1325 audit(1768921780.964:256): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:40.964000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe90db6a80 a2=0 a3=0 items=0 ppid=1916 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:40.993283 kernel: audit: type=1300 audit(1768921780.964:256): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe90db6a80 a2=0 a3=0 items=0 ppid=1916 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
15:09:40.994160 kernel: audit: type=1327 audit(1768921780.964:256): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 15:09:40.964000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 15:09:40.998323 kernel: audit: type=1325 audit(1768921780.975:257): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:40.975000 audit[1971]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.004384 kernel: audit: type=1300 audit(1768921780.975:257): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeac615bf0 a2=0 a3=0 items=0 ppid=1916 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:40.975000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeac615bf0 a2=0 a3=0 items=0 ppid=1916 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:40.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 15:09:41.023675 kernel: audit: type=1327 audit(1768921780.975:257): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 15:09:41.024282 kernel: audit: type=1325 audit(1768921780.996:258): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:40.996000 audit[1973]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1973 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.030718 kernel: audit: type=1300 audit(1768921780.996:258): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd20ceb640 a2=0 a3=0 items=0 ppid=1916 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:40.996000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd20ceb640 a2=0 a3=0 items=0 ppid=1916 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.044667 kernel: audit: type=1327 audit(1768921780.996:258): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 15:09:40.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 15:09:41.010000 audit[1975]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.063198 kernel: audit: type=1325 audit(1768921781.010:259): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.010000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff0573970 a2=0 a3=0 items=0 ppid=1916 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.010000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 15:09:41.020000 audit[1977]: 
NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.020000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe83efbd70 a2=0 a3=0 items=0 ppid=1916 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.020000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 15:09:41.033000 audit[1979]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.033000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffea13dcb20 a2=0 a3=0 items=0 ppid=1916 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.033000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 15:09:41.055000 audit[1981]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.055000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff656bd650 a2=0 a3=0 items=0 ppid=1916 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.055000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 
15:09:41.073000 audit[1983]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.073000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffee1b61820 a2=0 a3=0 items=0 ppid=1916 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.073000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 15:09:41.143000 audit[1987]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.143000 audit[1987]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffc5edc940 a2=0 a3=0 items=0 ppid=1916 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.143000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 15:09:41.158000 audit[1989]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.158000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffda71a6c10 a2=0 a3=0 items=0 ppid=1916 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
15:09:41.158000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 15:09:41.168000 audit[1991]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.168000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe6770caa0 a2=0 a3=0 items=0 ppid=1916 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.168000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 15:09:41.176000 audit[1993]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.176000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe1f43f870 a2=0 a3=0 items=0 ppid=1916 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 15:09:41.190000 audit[1995]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.190000 audit[1995]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe6517f7c0 a2=0 a3=0 items=0 ppid=1916 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 15:09:41.467000 audit[2025]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.467000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffec7050ee0 a2=0 a3=0 items=0 ppid=1916 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 15:09:41.472000 audit[2027]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.472000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffec2ea730 a2=0 a3=0 items=0 ppid=1916 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.472000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 15:09:41.477000 audit[2029]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.477000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc90aed0b0 a2=0 a3=0 items=0 ppid=1916 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.477000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 15:09:41.482000 audit[2031]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.482000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd12e2110 a2=0 a3=0 items=0 ppid=1916 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.482000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 15:09:41.486000 audit[2033]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.486000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc55466480 a2=0 a3=0 items=0 ppid=1916 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.486000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 15:09:41.490000 audit[2035]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.490000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe1ed4c9e0 a2=0 a3=0 items=0 ppid=1916 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.490000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 15:09:41.494000 audit[2037]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.494000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd78811c50 a2=0 a3=0 items=0 ppid=1916 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 15:09:41.500000 audit[2039]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.500000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcd2fca070 a2=0 a3=0 items=0 ppid=1916 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.500000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 15:09:41.505000 audit[2041]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.505000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc2f5b43f0 a2=0 a3=0 items=0 ppid=1916 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.505000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 15:09:41.510000 audit[2043]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.510000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff8c143c30 a2=0 a3=0 items=0 ppid=1916 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.510000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 15:09:41.514000 audit[2045]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.514000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff033efd10 a2=0 a3=0 items=0 ppid=1916 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.514000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 15:09:41.519000 audit[2047]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.519000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=248 a0=3 a1=7ffd077fb440 a2=0 a3=0 items=0 ppid=1916 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.519000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 15:09:41.523000 audit[2049]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.523000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffee4d90ed0 a2=0 a3=0 items=0 ppid=1916 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.523000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 15:09:41.538000 audit[2054]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.538000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc99978fb0 a2=0 a3=0 items=0 ppid=1916 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.538000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 15:09:41.543000 audit[2056]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.543000 audit[2056]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe811506a0 a2=0 a3=0 items=0 ppid=1916 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.543000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 15:09:41.548000 audit[2058]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.548000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcba8f19d0 a2=0 a3=0 items=0 ppid=1916 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.548000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 15:09:41.558000 audit[2060]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.558000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff3fa37510 a2=0 a3=0 items=0 ppid=1916 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.558000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 15:09:41.563000 audit[2062]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.563000 audit[2062]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=212 a0=3 a1=7fff05fa6200 a2=0 a3=0 items=0 ppid=1916 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.563000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 15:09:41.567000 audit[2064]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:09:41.567000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd2b351740 a2=0 a3=0 items=0 ppid=1916 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.567000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 15:09:41.601000 audit[2068]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.601000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffde8a189e0 a2=0 a3=0 items=0 ppid=1916 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 15:09:41.608000 audit[2070]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 20 15:09:41.608000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff2e57fe40 a2=0 a3=0 items=0 ppid=1916 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.608000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 15:09:41.629000 audit[2078]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.629000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff0006db90 a2=0 a3=0 items=0 ppid=1916 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.629000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 15:09:41.650000 audit[2084]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.650000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffda0d0cf90 a2=0 a3=0 items=0 ppid=1916 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.650000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 15:09:41.656000 audit[2086]: NETFILTER_CFG table=filter:38 
family=2 entries=1 op=nft_register_rule pid=2086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.656000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd68da3f40 a2=0 a3=0 items=0 ppid=1916 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.656000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 15:09:41.661000 audit[2088]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2088 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.661000 audit[2088]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcfc197e30 a2=0 a3=0 items=0 ppid=1916 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.661000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 15:09:41.665000 audit[2090]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.665000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff49686650 a2=0 a3=0 items=0 ppid=1916 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.665000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 15:09:41.670000 audit[2092]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:09:41.670000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc9db891c0 a2=0 a3=0 items=0 ppid=1916 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:09:41.670000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 15:09:41.672598 systemd-networkd[1319]: docker0: Link UP Jan 20 15:09:41.679448 dockerd[1916]: time="2026-01-20T15:09:41.679333528Z" level=info msg="Loading containers: done." Jan 20 15:09:41.750278 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4061159639-merged.mount: Deactivated successfully. 
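The `proctitle=` fields in the audit records above are the full command lines of the `iptables`/`ip6tables` invocations, hex-encoded with NUL bytes separating the arguments. As an aside (not part of the log itself), a minimal Python sketch for decoding them back into argv form:

```python
# Decode an audit PROCTITLE hex string into the original argv list.
# The kernel records the process command line as raw bytes with NUL
# separators between arguments, then hex-encodes the whole buffer;
# splitting the decoded bytes on b"\x00" recovers the arguments.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    return [part.decode() for part in raw.split(b"\x00") if part]

# Example taken verbatim from the first PROCTITLE record in this log:
args = decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B45522D4354"
)
# args == ["/usr/bin/iptables", "--wait", "-t", "filter", "-N", "DOCKER-CT"]
print(args)
```

Applying this to the records above shows Docker creating its usual chains (DOCKER, DOCKER-FORWARD, DOCKER-USER, DOCKER-ISOLATION-STAGE-1/2, DOCKER-CT, DOCKER-BRIDGE) for both IPv4 (family=2) and IPv6 (family=10) before `docker0` comes up.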
Jan 20 15:09:41.758955 dockerd[1916]: time="2026-01-20T15:09:41.758201776Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 15:09:41.760571 dockerd[1916]: time="2026-01-20T15:09:41.760472623Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 15:09:41.761714 dockerd[1916]: time="2026-01-20T15:09:41.761642371Z" level=info msg="Initializing buildkit" Jan 20 15:09:41.903187 dockerd[1916]: time="2026-01-20T15:09:41.902577367Z" level=info msg="Completed buildkit initialization" Jan 20 15:09:41.923013 dockerd[1916]: time="2026-01-20T15:09:41.922827784Z" level=info msg="Daemon has completed initialization" Jan 20 15:09:41.924997 dockerd[1916]: time="2026-01-20T15:09:41.924331763Z" level=info msg="API listen on /run/docker.sock" Jan 20 15:09:41.927359 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 15:09:41.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:44.700584 containerd[1658]: time="2026-01-20T15:09:44.700355865Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 15:09:45.768283 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 15:09:45.770445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:09:46.780572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 15:09:46.793655 kernel: kauditd_printk_skb: 111 callbacks suppressed Jan 20 15:09:46.793743 kernel: audit: type=1130 audit(1768921786.779:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:46.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:46.812311 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 15:09:46.817512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1637839780.mount: Deactivated successfully. Jan 20 15:09:46.922241 kubelet[2144]: E0120 15:09:46.922106 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 15:09:46.925161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 15:09:46.925365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 15:09:46.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 15:09:46.926356 systemd[1]: kubelet.service: Consumed 1.070s CPU time, 110.8M memory peak. Jan 20 15:09:46.935010 kernel: audit: type=1131 audit(1768921786.924:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 20 15:09:51.211165 containerd[1658]: time="2026-01-20T15:09:51.210937226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:09:51.213236 containerd[1658]: time="2026-01-20T15:09:51.211766898Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401970" Jan 20 15:09:51.214645 containerd[1658]: time="2026-01-20T15:09:51.214498608Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:09:51.221585 containerd[1658]: time="2026-01-20T15:09:51.221529262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:09:51.225434 containerd[1658]: time="2026-01-20T15:09:51.225340222Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 6.524902334s" Jan 20 15:09:51.225434 containerd[1658]: time="2026-01-20T15:09:51.225412837Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 15:09:51.229379 containerd[1658]: time="2026-01-20T15:09:51.229185050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 15:09:54.680230 containerd[1658]: time="2026-01-20T15:09:54.679823109Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:09:54.681497 containerd[1658]: time="2026-01-20T15:09:54.681456744Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 20 15:09:54.683157 containerd[1658]: time="2026-01-20T15:09:54.683012279Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:09:54.688874 containerd[1658]: time="2026-01-20T15:09:54.688664812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:09:54.690662 containerd[1658]: time="2026-01-20T15:09:54.690380508Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.461077277s" Jan 20 15:09:54.690662 containerd[1658]: time="2026-01-20T15:09:54.690557003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 15:09:54.692503 containerd[1658]: time="2026-01-20T15:09:54.692397659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 15:09:57.299251 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 15:09:57.305082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 15:09:57.541283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:09:57.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:57.549959 kernel: audit: type=1130 audit(1768921797.540:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:09:57.576735 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 15:09:58.084480 kubelet[2226]: E0120 15:09:58.084346 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 15:09:58.088289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 15:09:58.088526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 15:09:58.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 15:09:58.089542 systemd[1]: kubelet.service: Consumed 765ms CPU time, 110.6M memory peak. Jan 20 15:09:58.097952 kernel: audit: type=1131 audit(1768921798.088:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 15:09:58.172738 containerd[1658]: time="2026-01-20T15:09:58.172623747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:09:58.174008 containerd[1658]: time="2026-01-20T15:09:58.173981804Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 20 15:09:58.176049 containerd[1658]: time="2026-01-20T15:09:58.175962418Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:09:58.180371 containerd[1658]: time="2026-01-20T15:09:58.180293557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:09:58.181830 containerd[1658]: time="2026-01-20T15:09:58.181752129Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.489246534s" Jan 20 15:09:58.181830 containerd[1658]: time="2026-01-20T15:09:58.181807343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 15:09:58.184237 containerd[1658]: time="2026-01-20T15:09:58.184011478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 15:10:00.027234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1213553518.mount: Deactivated successfully. 
Jan 20 15:10:01.644267 containerd[1658]: time="2026-01-20T15:10:01.644010247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:01.645494 containerd[1658]: time="2026-01-20T15:10:01.645394084Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=19572392" Jan 20 15:10:01.647066 containerd[1658]: time="2026-01-20T15:10:01.646965248Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:01.652491 containerd[1658]: time="2026-01-20T15:10:01.652383851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:01.652931 containerd[1658]: time="2026-01-20T15:10:01.652759571Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.468686086s" Jan 20 15:10:01.652931 containerd[1658]: time="2026-01-20T15:10:01.652806339Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 15:10:01.654687 containerd[1658]: time="2026-01-20T15:10:01.654640683Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 15:10:02.545475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062012585.mount: Deactivated successfully. 
Jan 20 15:10:04.323952 containerd[1658]: time="2026-01-20T15:10:04.323568071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:04.325596 containerd[1658]: time="2026-01-20T15:10:04.324926684Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17653907" Jan 20 15:10:04.326985 containerd[1658]: time="2026-01-20T15:10:04.326834349Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:04.331290 containerd[1658]: time="2026-01-20T15:10:04.331194030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:04.332256 containerd[1658]: time="2026-01-20T15:10:04.332196290Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.677529419s" Jan 20 15:10:04.332299 containerd[1658]: time="2026-01-20T15:10:04.332258608Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 15:10:04.334033 containerd[1658]: time="2026-01-20T15:10:04.333995721Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 15:10:04.955335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088356011.mount: Deactivated successfully. 
Jan 20 15:10:04.964236 containerd[1658]: time="2026-01-20T15:10:04.964155490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 15:10:04.965252 containerd[1658]: time="2026-01-20T15:10:04.965160184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 15:10:04.966593 containerd[1658]: time="2026-01-20T15:10:04.966505775Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 15:10:04.969136 containerd[1658]: time="2026-01-20T15:10:04.969049504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 15:10:04.969595 containerd[1658]: time="2026-01-20T15:10:04.969514459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 635.301969ms" Jan 20 15:10:04.969595 containerd[1658]: time="2026-01-20T15:10:04.969565035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 15:10:04.971322 containerd[1658]: time="2026-01-20T15:10:04.971200744Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 15:10:05.742258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298949998.mount: Deactivated 
successfully. Jan 20 15:10:07.233703 update_engine[1636]: I20260120 15:10:07.233207 1636 update_attempter.cc:509] Updating boot flags... Jan 20 15:10:08.267501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 15:10:08.272457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:10:08.962818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:10:08.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:08.971923 kernel: audit: type=1130 audit(1768921808.963:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:08.983564 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 15:10:09.245511 kubelet[2374]: E0120 15:10:09.244272 2374 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 15:10:09.248772 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 15:10:09.249095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 15:10:09.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 15:10:09.250102 systemd[1]: kubelet.service: Consumed 779ms CPU time, 110.9M memory peak. 
Jan 20 15:10:09.265956 kernel: audit: type=1131 audit(1768921809.249:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 15:10:10.775662 containerd[1658]: time="2026-01-20T15:10:10.775288197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:10.777515 containerd[1658]: time="2026-01-20T15:10:10.776485163Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=46659100" Jan 20 15:10:10.778588 containerd[1658]: time="2026-01-20T15:10:10.778535608Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:10.783078 containerd[1658]: time="2026-01-20T15:10:10.782966593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:10.784495 containerd[1658]: time="2026-01-20T15:10:10.784353330Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.81301193s" Jan 20 15:10:10.784495 containerd[1658]: time="2026-01-20T15:10:10.784421807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 15:10:14.003051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 15:10:14.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:14.003334 systemd[1]: kubelet.service: Consumed 779ms CPU time, 110.9M memory peak. Jan 20 15:10:14.006166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:10:14.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:14.023081 kernel: audit: type=1130 audit(1768921814.002:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:14.023208 kernel: audit: type=1131 audit(1768921814.002:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:14.045219 systemd[1]: Reload requested from client PID 2415 ('systemctl') (unit session-8.scope)... Jan 20 15:10:14.045261 systemd[1]: Reloading... Jan 20 15:10:14.154925 zram_generator::config[2462]: No configuration found. Jan 20 15:10:14.470614 systemd[1]: Reloading finished in 424 ms. 
Jan 20 15:10:14.508000 audit: BPF prog-id=67 op=LOAD Jan 20 15:10:14.515087 kernel: audit: type=1334 audit(1768921814.508:305): prog-id=67 op=LOAD Jan 20 15:10:14.515170 kernel: audit: type=1334 audit(1768921814.508:306): prog-id=56 op=UNLOAD Jan 20 15:10:14.508000 audit: BPF prog-id=56 op=UNLOAD Jan 20 15:10:14.510000 audit: BPF prog-id=68 op=LOAD Jan 20 15:10:14.517976 kernel: audit: type=1334 audit(1768921814.510:307): prog-id=68 op=LOAD Jan 20 15:10:14.510000 audit: BPF prog-id=47 op=UNLOAD Jan 20 15:10:14.520966 kernel: audit: type=1334 audit(1768921814.510:308): prog-id=47 op=UNLOAD Jan 20 15:10:14.521028 kernel: audit: type=1334 audit(1768921814.510:309): prog-id=69 op=LOAD Jan 20 15:10:14.510000 audit: BPF prog-id=69 op=LOAD Jan 20 15:10:14.523432 kernel: audit: type=1334 audit(1768921814.510:310): prog-id=70 op=LOAD Jan 20 15:10:14.510000 audit: BPF prog-id=70 op=LOAD Jan 20 15:10:14.510000 audit: BPF prog-id=48 op=UNLOAD Jan 20 15:10:14.528601 kernel: audit: type=1334 audit(1768921814.510:311): prog-id=48 op=UNLOAD Jan 20 15:10:14.528643 kernel: audit: type=1334 audit(1768921814.510:312): prog-id=49 op=UNLOAD Jan 20 15:10:14.510000 audit: BPF prog-id=49 op=UNLOAD Jan 20 15:10:14.512000 audit: BPF prog-id=71 op=LOAD Jan 20 15:10:14.512000 audit: BPF prog-id=64 op=UNLOAD Jan 20 15:10:14.512000 audit: BPF prog-id=72 op=LOAD Jan 20 15:10:14.512000 audit: BPF prog-id=73 op=LOAD Jan 20 15:10:14.512000 audit: BPF prog-id=65 op=UNLOAD Jan 20 15:10:14.512000 audit: BPF prog-id=66 op=UNLOAD Jan 20 15:10:14.514000 audit: BPF prog-id=74 op=LOAD Jan 20 15:10:14.514000 audit: BPF prog-id=60 op=UNLOAD Jan 20 15:10:14.514000 audit: BPF prog-id=75 op=LOAD Jan 20 15:10:14.514000 audit: BPF prog-id=76 op=LOAD Jan 20 15:10:14.514000 audit: BPF prog-id=61 op=UNLOAD Jan 20 15:10:14.514000 audit: BPF prog-id=62 op=UNLOAD Jan 20 15:10:14.535000 audit: BPF prog-id=77 op=LOAD Jan 20 15:10:14.535000 audit: BPF prog-id=53 op=UNLOAD Jan 20 15:10:14.535000 audit: BPF prog-id=78 
op=LOAD Jan 20 15:10:14.535000 audit: BPF prog-id=79 op=LOAD Jan 20 15:10:14.535000 audit: BPF prog-id=54 op=UNLOAD Jan 20 15:10:14.535000 audit: BPF prog-id=55 op=UNLOAD Jan 20 15:10:14.537000 audit: BPF prog-id=80 op=LOAD Jan 20 15:10:14.537000 audit: BPF prog-id=57 op=UNLOAD Jan 20 15:10:14.538000 audit: BPF prog-id=81 op=LOAD Jan 20 15:10:14.538000 audit: BPF prog-id=82 op=LOAD Jan 20 15:10:14.538000 audit: BPF prog-id=58 op=UNLOAD Jan 20 15:10:14.538000 audit: BPF prog-id=59 op=UNLOAD Jan 20 15:10:14.539000 audit: BPF prog-id=83 op=LOAD Jan 20 15:10:14.539000 audit: BPF prog-id=50 op=UNLOAD Jan 20 15:10:14.539000 audit: BPF prog-id=84 op=LOAD Jan 20 15:10:14.539000 audit: BPF prog-id=85 op=LOAD Jan 20 15:10:14.539000 audit: BPF prog-id=51 op=UNLOAD Jan 20 15:10:14.539000 audit: BPF prog-id=52 op=UNLOAD Jan 20 15:10:14.540000 audit: BPF prog-id=86 op=LOAD Jan 20 15:10:14.540000 audit: BPF prog-id=63 op=UNLOAD Jan 20 15:10:14.570746 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 15:10:14.571038 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 15:10:14.571640 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:10:14.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 15:10:14.571763 systemd[1]: kubelet.service: Consumed 179ms CPU time, 98.5M memory peak. Jan 20 15:10:14.574743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:10:14.820957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:10:14.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:10:14.836335 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 15:10:14.897402 kubelet[2510]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 15:10:14.897402 kubelet[2510]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 15:10:14.897402 kubelet[2510]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 15:10:14.898062 kubelet[2510]: I0120 15:10:14.897400 2510 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 15:10:15.361268 kubelet[2510]: I0120 15:10:15.361159 2510 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 15:10:15.361268 kubelet[2510]: I0120 15:10:15.361239 2510 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 15:10:15.361709 kubelet[2510]: I0120 15:10:15.361633 2510 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 15:10:15.392330 kubelet[2510]: E0120 15:10:15.392205 2510 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:10:15.392715 kubelet[2510]: I0120 
15:10:15.392624 2510 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 15:10:15.407649 kubelet[2510]: I0120 15:10:15.407378 2510 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 15:10:15.418290 kubelet[2510]: I0120 15:10:15.418188 2510 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 20 15:10:15.420384 kubelet[2510]: I0120 15:10:15.420252 2510 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 15:10:15.420622 kubelet[2510]: I0120 15:10:15.420324 2510 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope"
:"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 15:10:15.420622 kubelet[2510]: I0120 15:10:15.420596 2510 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 15:10:15.420622 kubelet[2510]: I0120 15:10:15.420606 2510 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 15:10:15.421046 kubelet[2510]: I0120 15:10:15.420753 2510 state_mem.go:36] "Initialized new in-memory state store" Jan 20 15:10:15.424559 kubelet[2510]: I0120 15:10:15.424277 2510 kubelet.go:446] "Attempting to sync node with API server" Jan 20 15:10:15.424559 kubelet[2510]: I0120 15:10:15.424346 2510 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 15:10:15.424559 kubelet[2510]: I0120 15:10:15.424379 2510 kubelet.go:352] "Adding apiserver pod source" Jan 20 15:10:15.424559 kubelet[2510]: I0120 15:10:15.424394 2510 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 15:10:15.428826 kubelet[2510]: I0120 15:10:15.428753 2510 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 15:10:15.429704 kubelet[2510]: W0120 15:10:15.429241 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 20 15:10:15.429704 kubelet[2510]: E0120 15:10:15.429311 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:10:15.429704 kubelet[2510]: I0120 15:10:15.429348 2510 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 15:10:15.429704 kubelet[2510]: W0120 15:10:15.429410 2510 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 15:10:15.429704 kubelet[2510]: W0120 15:10:15.429571 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 20 15:10:15.429704 kubelet[2510]: E0120 15:10:15.429650 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:10:15.432288 kubelet[2510]: I0120 15:10:15.432200 2510 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 15:10:15.432347 kubelet[2510]: I0120 15:10:15.432306 2510 server.go:1287] "Started kubelet" Jan 20 15:10:15.434239 kubelet[2510]: I0120 15:10:15.434178 2510 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 15:10:15.435106 kubelet[2510]: I0120 15:10:15.434353 2510 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 15:10:15.437789 kubelet[2510]: I0120 15:10:15.435796 2510 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 15:10:15.437789 kubelet[2510]: I0120 15:10:15.436097 2510 server.go:243] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 15:10:15.437789 kubelet[2510]: I0120 15:10:15.437342 2510 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 15:10:15.443186 kubelet[2510]: I0120 15:10:15.443169 2510 server.go:479] "Adding debug handlers to kubelet server" Jan 20 15:10:15.443000 audit[2523]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:15.443000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd3976efc0 a2=0 a3=0 items=0 ppid=2510 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.443000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 15:10:15.445451 kubelet[2510]: E0120 15:10:15.445372 2510 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 15:10:15.445590 kubelet[2510]: I0120 15:10:15.445567 2510 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 15:10:15.445964 kubelet[2510]: I0120 15:10:15.445802 2510 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 15:10:15.447415 kubelet[2510]: I0120 15:10:15.447324 2510 reconciler.go:26] "Reconciler: start to sync state" Jan 20 15:10:15.447703 kubelet[2510]: E0120 15:10:15.443244 2510 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c7904dc4756ff default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 15:10:15.432255231 +0000 UTC m=+0.589614391,LastTimestamp:2026-01-20 15:10:15.432255231 +0000 UTC m=+0.589614391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 15:10:15.447000 audit[2524]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:15.447000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed85bc360 a2=0 a3=0 items=0 ppid=2510 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.447000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 15:10:15.448594 kubelet[2510]: W0120 15:10:15.448374 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 20 15:10:15.450228 kubelet[2510]: E0120 15:10:15.448981 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:10:15.450228 kubelet[2510]: E0120 15:10:15.450130 2510 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms" Jan 20 15:10:15.450764 kubelet[2510]: E0120 15:10:15.450739 2510 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 15:10:15.452000 audit[2526]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:15.452000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff318132f0 a2=0 a3=0 items=0 ppid=2510 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 15:10:15.454277 kubelet[2510]: I0120 15:10:15.454255 2510 factory.go:221] Registration of the containerd container factory successfully Jan 20 15:10:15.454277 kubelet[2510]: I0120 15:10:15.454276 2510 factory.go:221] Registration of the systemd container factory successfully Jan 20 15:10:15.455259 kubelet[2510]: I0120 15:10:15.455228 2510 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 15:10:15.459000 audit[2528]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:15.459000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff80f2e790 a2=0 a3=0 items=0 ppid=2510 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 15:10:15.475000 audit[2533]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:15.475000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd9ef57910 a2=0 a3=0 items=0 ppid=2510 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.475000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 15:10:15.477572 kubelet[2510]: I0120 15:10:15.477321 2510 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 20 15:10:15.478579 kubelet[2510]: I0120 15:10:15.478553 2510 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 15:10:15.478672 kubelet[2510]: I0120 15:10:15.478657 2510 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 15:10:15.478814 kubelet[2510]: I0120 15:10:15.478799 2510 state_mem.go:36] "Initialized new in-memory state store" Jan 20 15:10:15.478000 audit[2537]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:15.478000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd7d879c0 a2=0 a3=0 items=0 ppid=2510 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.478000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 15:10:15.480148 kubelet[2510]: I0120 15:10:15.480062 2510 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 15:10:15.480148 kubelet[2510]: I0120 15:10:15.480149 2510 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 15:10:15.480237 kubelet[2510]: I0120 15:10:15.480167 2510 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 15:10:15.480237 kubelet[2510]: I0120 15:10:15.480174 2510 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 15:10:15.480237 kubelet[2510]: E0120 15:10:15.480219 2510 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 15:10:15.480000 audit[2538]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:15.480000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc315f2230 a2=0 a3=0 items=0 ppid=2510 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 15:10:15.482000 audit[2539]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:15.482000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc253e3500 a2=0 a3=0 items=0 ppid=2510 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 15:10:15.483000 audit[2540]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:15.483000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb60e9010 a2=0 a3=0 items=0 ppid=2510 
pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 15:10:15.484000 audit[2543]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:15.484000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaf81ba00 a2=0 a3=0 items=0 ppid=2510 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 15:10:15.486000 audit[2544]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:15.486000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe08a4a3f0 a2=0 a3=0 items=0 ppid=2510 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 15:10:15.487000 audit[2545]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:15.487000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec745e960 a2=0 a3=0 items=0 
ppid=2510 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:15.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 15:10:15.546183 kubelet[2510]: E0120 15:10:15.545708 2510 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 15:10:15.581627 kubelet[2510]: E0120 15:10:15.581399 2510 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 15:10:15.582610 kubelet[2510]: I0120 15:10:15.582489 2510 policy_none.go:49] "None policy: Start" Jan 20 15:10:15.582610 kubelet[2510]: I0120 15:10:15.582604 2510 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 15:10:15.583001 kubelet[2510]: I0120 15:10:15.582628 2510 state_mem.go:35] "Initializing new in-memory state store" Jan 20 15:10:15.583001 kubelet[2510]: W0120 15:10:15.582655 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 20 15:10:15.583001 kubelet[2510]: E0120 15:10:15.582727 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:10:15.597377 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 20 15:10:15.625008 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 15:10:15.635483 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 15:10:15.647837 kubelet[2510]: E0120 15:10:15.647792 2510 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 15:10:15.651353 kubelet[2510]: E0120 15:10:15.651199 2510 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="400ms" Jan 20 15:10:15.656032 kubelet[2510]: I0120 15:10:15.655944 2510 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 15:10:15.656361 kubelet[2510]: I0120 15:10:15.656284 2510 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 15:10:15.656423 kubelet[2510]: I0120 15:10:15.656338 2510 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 15:10:15.657393 kubelet[2510]: I0120 15:10:15.657264 2510 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 15:10:15.659297 kubelet[2510]: E0120 15:10:15.659159 2510 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 15:10:15.659297 kubelet[2510]: E0120 15:10:15.659257 2510 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 15:10:15.758805 kubelet[2510]: I0120 15:10:15.758701 2510 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:10:15.759474 kubelet[2510]: E0120 15:10:15.759246 2510 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jan 20 15:10:15.797199 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 20 15:10:15.820552 kubelet[2510]: E0120 15:10:15.820411 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:15.825394 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 20 15:10:15.836190 kubelet[2510]: E0120 15:10:15.836105 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:15.840761 systemd[1]: Created slice kubepods-burstable-podaba91f0f1e98240a43c4a72aaa9767ee.slice - libcontainer container kubepods-burstable-podaba91f0f1e98240a43c4a72aaa9767ee.slice. 
Jan 20 15:10:15.843791 kubelet[2510]: E0120 15:10:15.843692 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:15.849478 kubelet[2510]: I0120 15:10:15.849392 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:15.849684 kubelet[2510]: I0120 15:10:15.849547 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:15.849684 kubelet[2510]: I0120 15:10:15.849579 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:15.849684 kubelet[2510]: I0120 15:10:15.849600 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 15:10:15.849684 kubelet[2510]: I0120 15:10:15.849625 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aba91f0f1e98240a43c4a72aaa9767ee-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aba91f0f1e98240a43c4a72aaa9767ee\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:15.849684 kubelet[2510]: I0120 15:10:15.849646 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aba91f0f1e98240a43c4a72aaa9767ee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aba91f0f1e98240a43c4a72aaa9767ee\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:15.850000 kubelet[2510]: I0120 15:10:15.849709 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aba91f0f1e98240a43c4a72aaa9767ee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aba91f0f1e98240a43c4a72aaa9767ee\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:15.850000 kubelet[2510]: I0120 15:10:15.849730 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:15.850000 kubelet[2510]: I0120 15:10:15.849750 2510 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:15.963104 kubelet[2510]: I0120 15:10:15.962370 2510 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:10:15.963104 kubelet[2510]: E0120 
15:10:15.963033 2510 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Jan 20 15:10:16.052232 kubelet[2510]: E0120 15:10:16.051985 2510 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" Jan 20 15:10:16.122166 kubelet[2510]: E0120 15:10:16.121956 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:16.123635 containerd[1658]: time="2026-01-20T15:10:16.123463831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 20 15:10:16.137122 kubelet[2510]: E0120 15:10:16.137076 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:16.137716 containerd[1658]: time="2026-01-20T15:10:16.137643659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 20 15:10:16.145577 kubelet[2510]: E0120 15:10:16.145477 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:16.146295 containerd[1658]: time="2026-01-20T15:10:16.146242993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aba91f0f1e98240a43c4a72aaa9767ee,Namespace:kube-system,Attempt:0,}" Jan 20 15:10:16.200369 
containerd[1658]: time="2026-01-20T15:10:16.198418234Z" level=info msg="connecting to shim c75a889e2d90af3a5e44132a953d5b8464b94e4ff2cfd63d8e816a8649fc896c" address="unix:///run/containerd/s/b443fe473d146bc171747639b70b28c58c533509225a76564c1c14620c6b9982" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:10:16.202268 containerd[1658]: time="2026-01-20T15:10:16.202226832Z" level=info msg="connecting to shim 7f6d5d19cb794003e4504d5b894a68624d8ed253620c1fb6914199e99652d28d" address="unix:///run/containerd/s/250b9f48b6e345e7b8f8612468aa9a5901b70187882415cc3e706ce0ea0347fe" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:10:16.208634 containerd[1658]: time="2026-01-20T15:10:16.208443400Z" level=info msg="connecting to shim 1e8b5462599e6c39560e0843187a56cde75199a70e5932c512c6c50ae61e69f8" address="unix:///run/containerd/s/3125c84dd3bd1ec514e283073498ed49be566a107ecf910f0f51c7bebdeb33f6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:10:16.263164 systemd[1]: Started cri-containerd-1e8b5462599e6c39560e0843187a56cde75199a70e5932c512c6c50ae61e69f8.scope - libcontainer container 1e8b5462599e6c39560e0843187a56cde75199a70e5932c512c6c50ae61e69f8. Jan 20 15:10:16.270284 systemd[1]: Started cri-containerd-7f6d5d19cb794003e4504d5b894a68624d8ed253620c1fb6914199e99652d28d.scope - libcontainer container 7f6d5d19cb794003e4504d5b894a68624d8ed253620c1fb6914199e99652d28d. Jan 20 15:10:16.276037 systemd[1]: Started cri-containerd-c75a889e2d90af3a5e44132a953d5b8464b94e4ff2cfd63d8e816a8649fc896c.scope - libcontainer container c75a889e2d90af3a5e44132a953d5b8464b94e4ff2cfd63d8e816a8649fc896c. 
Jan 20 15:10:16.290000 audit: BPF prog-id=87 op=LOAD Jan 20 15:10:16.291000 audit: BPF prog-id=88 op=LOAD Jan 20 15:10:16.291000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2579 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165386235343632353939653663333935363065303834333138376135 Jan 20 15:10:16.291000 audit: BPF prog-id=88 op=UNLOAD Jan 20 15:10:16.291000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165386235343632353939653663333935363065303834333138376135 Jan 20 15:10:16.291000 audit: BPF prog-id=89 op=LOAD Jan 20 15:10:16.291000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2579 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.291000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165386235343632353939653663333935363065303834333138376135 Jan 20 15:10:16.292000 audit: BPF prog-id=90 op=LOAD Jan 20 15:10:16.292000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2579 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165386235343632353939653663333935363065303834333138376135 Jan 20 15:10:16.292000 audit: BPF prog-id=90 op=UNLOAD Jan 20 15:10:16.292000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165386235343632353939653663333935363065303834333138376135 Jan 20 15:10:16.292000 audit: BPF prog-id=89 op=UNLOAD Jan 20 15:10:16.292000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
15:10:16.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165386235343632353939653663333935363065303834333138376135 Jan 20 15:10:16.292000 audit: BPF prog-id=91 op=LOAD Jan 20 15:10:16.292000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2579 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165386235343632353939653663333935363065303834333138376135 Jan 20 15:10:16.298000 audit: BPF prog-id=92 op=LOAD Jan 20 15:10:16.299000 audit: BPF prog-id=93 op=LOAD Jan 20 15:10:16.299000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2564 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766366435643139636237393430303365343530346435623839346136 Jan 20 15:10:16.299000 audit: BPF prog-id=93 op=UNLOAD Jan 20 15:10:16.299000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2564 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766366435643139636237393430303365343530346435623839346136 Jan 20 15:10:16.299000 audit: BPF prog-id=94 op=LOAD Jan 20 15:10:16.299000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2564 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.299000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766366435643139636237393430303365343530346435623839346136 Jan 20 15:10:16.300000 audit: BPF prog-id=95 op=LOAD Jan 20 15:10:16.300000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2564 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766366435643139636237393430303365343530346435623839346136 Jan 20 15:10:16.300000 audit: BPF prog-id=95 op=UNLOAD Jan 20 15:10:16.300000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2564 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766366435643139636237393430303365343530346435623839346136 Jan 20 15:10:16.300000 audit: BPF prog-id=94 op=UNLOAD Jan 20 15:10:16.300000 audit[2603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2564 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766366435643139636237393430303365343530346435623839346136 Jan 20 15:10:16.300000 audit: BPF prog-id=96 op=LOAD Jan 20 15:10:16.300000 audit[2603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2564 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766366435643139636237393430303365343530346435623839346136 Jan 20 15:10:16.315000 audit: BPF prog-id=97 op=LOAD Jan 20 15:10:16.316000 audit: BPF prog-id=98 op=LOAD Jan 20 15:10:16.316000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 
a1=c000130238 a2=98 a3=0 items=0 ppid=2561 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337356138383965326439306166336135653434313332613935336435 Jan 20 15:10:16.316000 audit: BPF prog-id=98 op=UNLOAD Jan 20 15:10:16.316000 audit[2604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337356138383965326439306166336135653434313332613935336435 Jan 20 15:10:16.316000 audit: BPF prog-id=99 op=LOAD Jan 20 15:10:16.316000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2561 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337356138383965326439306166336135653434313332613935336435 Jan 20 15:10:16.317000 audit: BPF prog-id=100 op=LOAD Jan 20 15:10:16.317000 audit[2604]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2561 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337356138383965326439306166336135653434313332613935336435 Jan 20 15:10:16.317000 audit: BPF prog-id=100 op=UNLOAD Jan 20 15:10:16.317000 audit[2604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337356138383965326439306166336135653434313332613935336435 Jan 20 15:10:16.317000 audit: BPF prog-id=99 op=UNLOAD Jan 20 15:10:16.317000 audit[2604]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337356138383965326439306166336135653434313332613935336435 Jan 20 15:10:16.318000 audit: BPF prog-id=101 op=LOAD Jan 20 
15:10:16.318000 audit[2604]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2561 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337356138383965326439306166336135653434313332613935336435 Jan 20 15:10:16.343315 kubelet[2510]: W0120 15:10:16.343156 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 20 15:10:16.343315 kubelet[2510]: E0120 15:10:16.343292 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:10:16.366316 kubelet[2510]: I0120 15:10:16.366230 2510 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:10:16.367223 containerd[1658]: time="2026-01-20T15:10:16.367077790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aba91f0f1e98240a43c4a72aaa9767ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e8b5462599e6c39560e0843187a56cde75199a70e5932c512c6c50ae61e69f8\"" Jan 20 15:10:16.369709 kubelet[2510]: E0120 15:10:16.369232 2510 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 
10.0.0.116:6443: connect: connection refused" node="localhost" Jan 20 15:10:16.371029 kubelet[2510]: E0120 15:10:16.370809 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:16.371954 containerd[1658]: time="2026-01-20T15:10:16.371802871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f6d5d19cb794003e4504d5b894a68624d8ed253620c1fb6914199e99652d28d\"" Jan 20 15:10:16.375825 kubelet[2510]: E0120 15:10:16.375637 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:16.376262 containerd[1658]: time="2026-01-20T15:10:16.376214584Z" level=info msg="CreateContainer within sandbox \"1e8b5462599e6c39560e0843187a56cde75199a70e5932c512c6c50ae61e69f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 15:10:16.382012 containerd[1658]: time="2026-01-20T15:10:16.381624521Z" level=info msg="CreateContainer within sandbox \"7f6d5d19cb794003e4504d5b894a68624d8ed253620c1fb6914199e99652d28d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 15:10:16.402262 containerd[1658]: time="2026-01-20T15:10:16.402119078Z" level=info msg="Container 1631081b99442b65c083f2eafb2c3ca0ef0cf625f55488ae90c15f6ea48c4154: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:10:16.407826 containerd[1658]: time="2026-01-20T15:10:16.407770700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c75a889e2d90af3a5e44132a953d5b8464b94e4ff2cfd63d8e816a8649fc896c\"" Jan 20 15:10:16.410187 kubelet[2510]: E0120 15:10:16.410078 
2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:16.414139 containerd[1658]: time="2026-01-20T15:10:16.414087407Z" level=info msg="CreateContainer within sandbox \"c75a889e2d90af3a5e44132a953d5b8464b94e4ff2cfd63d8e816a8649fc896c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 15:10:16.418193 containerd[1658]: time="2026-01-20T15:10:16.418143613Z" level=info msg="CreateContainer within sandbox \"1e8b5462599e6c39560e0843187a56cde75199a70e5932c512c6c50ae61e69f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1631081b99442b65c083f2eafb2c3ca0ef0cf625f55488ae90c15f6ea48c4154\"" Jan 20 15:10:16.419391 containerd[1658]: time="2026-01-20T15:10:16.419269564Z" level=info msg="StartContainer for \"1631081b99442b65c083f2eafb2c3ca0ef0cf625f55488ae90c15f6ea48c4154\"" Jan 20 15:10:16.421077 containerd[1658]: time="2026-01-20T15:10:16.420961517Z" level=info msg="connecting to shim 1631081b99442b65c083f2eafb2c3ca0ef0cf625f55488ae90c15f6ea48c4154" address="unix:///run/containerd/s/3125c84dd3bd1ec514e283073498ed49be566a107ecf910f0f51c7bebdeb33f6" protocol=ttrpc version=3 Jan 20 15:10:16.425768 containerd[1658]: time="2026-01-20T15:10:16.425340768Z" level=info msg="Container 002b8f1b39172ea064eb638cd68c00c79885d93e14ad88c8367f4e90ef35ea77: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:10:16.439203 containerd[1658]: time="2026-01-20T15:10:16.439108120Z" level=info msg="Container 5438134efafb9ec2d38f04c7e166d010db9887a0eb8267ba471a9df7ee65b76a: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:10:16.443310 containerd[1658]: time="2026-01-20T15:10:16.443101131Z" level=info msg="CreateContainer within sandbox \"7f6d5d19cb794003e4504d5b894a68624d8ed253620c1fb6914199e99652d28d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"002b8f1b39172ea064eb638cd68c00c79885d93e14ad88c8367f4e90ef35ea77\"" Jan 20 15:10:16.444266 containerd[1658]: time="2026-01-20T15:10:16.444242522Z" level=info msg="StartContainer for \"002b8f1b39172ea064eb638cd68c00c79885d93e14ad88c8367f4e90ef35ea77\"" Jan 20 15:10:16.447304 containerd[1658]: time="2026-01-20T15:10:16.447235602Z" level=info msg="connecting to shim 002b8f1b39172ea064eb638cd68c00c79885d93e14ad88c8367f4e90ef35ea77" address="unix:///run/containerd/s/250b9f48b6e345e7b8f8612468aa9a5901b70187882415cc3e706ce0ea0347fe" protocol=ttrpc version=3 Jan 20 15:10:16.454292 systemd[1]: Started cri-containerd-1631081b99442b65c083f2eafb2c3ca0ef0cf625f55488ae90c15f6ea48c4154.scope - libcontainer container 1631081b99442b65c083f2eafb2c3ca0ef0cf625f55488ae90c15f6ea48c4154. Jan 20 15:10:16.460268 containerd[1658]: time="2026-01-20T15:10:16.460152487Z" level=info msg="CreateContainer within sandbox \"c75a889e2d90af3a5e44132a953d5b8464b94e4ff2cfd63d8e816a8649fc896c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5438134efafb9ec2d38f04c7e166d010db9887a0eb8267ba471a9df7ee65b76a\"" Jan 20 15:10:16.461940 containerd[1658]: time="2026-01-20T15:10:16.461577540Z" level=info msg="StartContainer for \"5438134efafb9ec2d38f04c7e166d010db9887a0eb8267ba471a9df7ee65b76a\"" Jan 20 15:10:16.463209 containerd[1658]: time="2026-01-20T15:10:16.463099784Z" level=info msg="connecting to shim 5438134efafb9ec2d38f04c7e166d010db9887a0eb8267ba471a9df7ee65b76a" address="unix:///run/containerd/s/b443fe473d146bc171747639b70b28c58c533509225a76564c1c14620c6b9982" protocol=ttrpc version=3 Jan 20 15:10:16.483000 audit: BPF prog-id=102 op=LOAD Jan 20 15:10:16.484000 audit: BPF prog-id=103 op=LOAD Jan 20 15:10:16.484000 audit[2684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2579 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333130383162393934343262363563303833663265616662326333 Jan 20 15:10:16.484000 audit: BPF prog-id=103 op=UNLOAD Jan 20 15:10:16.484000 audit[2684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333130383162393934343262363563303833663265616662326333 Jan 20 15:10:16.484000 audit: BPF prog-id=104 op=LOAD Jan 20 15:10:16.484000 audit[2684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2579 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333130383162393934343262363563303833663265616662326333 Jan 20 15:10:16.485000 audit: BPF prog-id=105 op=LOAD Jan 20 15:10:16.485000 audit[2684]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2579 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333130383162393934343262363563303833663265616662326333 Jan 20 15:10:16.485000 audit: BPF prog-id=105 op=UNLOAD Jan 20 15:10:16.485000 audit[2684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333130383162393934343262363563303833663265616662326333 Jan 20 15:10:16.485000 audit: BPF prog-id=104 op=UNLOAD Jan 20 15:10:16.485000 audit[2684]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2579 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333130383162393934343262363563303833663265616662326333 Jan 20 15:10:16.485000 audit: BPF prog-id=106 op=LOAD Jan 20 15:10:16.485000 audit[2684]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2579 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136333130383162393934343262363563303833663265616662326333 Jan 20 15:10:16.499783 systemd[1]: Started cri-containerd-002b8f1b39172ea064eb638cd68c00c79885d93e14ad88c8367f4e90ef35ea77.scope - libcontainer container 002b8f1b39172ea064eb638cd68c00c79885d93e14ad88c8367f4e90ef35ea77. Jan 20 15:10:16.517308 systemd[1]: Started cri-containerd-5438134efafb9ec2d38f04c7e166d010db9887a0eb8267ba471a9df7ee65b76a.scope - libcontainer container 5438134efafb9ec2d38f04c7e166d010db9887a0eb8267ba471a9df7ee65b76a. Jan 20 15:10:16.537000 audit: BPF prog-id=107 op=LOAD Jan 20 15:10:16.538000 audit: BPF prog-id=108 op=LOAD Jan 20 15:10:16.538000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2561 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333831333465666166623965633264333866303463376531363664 Jan 20 15:10:16.538000 audit: BPF prog-id=108 op=UNLOAD Jan 20 15:10:16.538000 audit[2711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
15:10:16.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333831333465666166623965633264333866303463376531363664 Jan 20 15:10:16.538000 audit: BPF prog-id=109 op=LOAD Jan 20 15:10:16.538000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2561 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333831333465666166623965633264333866303463376531363664 Jan 20 15:10:16.538000 audit: BPF prog-id=110 op=LOAD Jan 20 15:10:16.538000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2561 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333831333465666166623965633264333866303463376531363664 Jan 20 15:10:16.538000 audit: BPF prog-id=110 op=UNLOAD Jan 20 15:10:16.538000 audit[2711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333831333465666166623965633264333866303463376531363664 Jan 20 15:10:16.538000 audit: BPF prog-id=109 op=UNLOAD Jan 20 15:10:16.538000 audit[2711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2561 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333831333465666166623965633264333866303463376531363664 Jan 20 15:10:16.539000 audit: BPF prog-id=111 op=LOAD Jan 20 15:10:16.539000 audit[2711]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2561 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534333831333465666166623965633264333866303463376531363664 Jan 20 15:10:16.549000 audit: BPF prog-id=112 op=LOAD Jan 20 15:10:16.551000 audit: BPF prog-id=113 op=LOAD Jan 20 15:10:16.551000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2564 pid=2697 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326238663162333931373265613036346562363338636436386330 Jan 20 15:10:16.551000 audit: BPF prog-id=113 op=UNLOAD Jan 20 15:10:16.551000 audit[2697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2564 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326238663162333931373265613036346562363338636436386330 Jan 20 15:10:16.551000 audit: BPF prog-id=114 op=LOAD Jan 20 15:10:16.551000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2564 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326238663162333931373265613036346562363338636436386330 Jan 20 15:10:16.551000 audit: BPF prog-id=115 op=LOAD Jan 20 15:10:16.551000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 
ppid=2564 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326238663162333931373265613036346562363338636436386330 Jan 20 15:10:16.551000 audit: BPF prog-id=115 op=UNLOAD Jan 20 15:10:16.551000 audit[2697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2564 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326238663162333931373265613036346562363338636436386330 Jan 20 15:10:16.551000 audit: BPF prog-id=114 op=UNLOAD Jan 20 15:10:16.551000 audit[2697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2564 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326238663162333931373265613036346562363338636436386330 Jan 20 15:10:16.551000 audit: BPF prog-id=116 op=LOAD Jan 20 15:10:16.551000 audit[2697]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0001866e8 a2=98 a3=0 items=0 ppid=2564 pid=2697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:16.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030326238663162333931373265613036346562363338636436386330 Jan 20 15:10:16.579988 containerd[1658]: time="2026-01-20T15:10:16.579648115Z" level=info msg="StartContainer for \"1631081b99442b65c083f2eafb2c3ca0ef0cf625f55488ae90c15f6ea48c4154\" returns successfully" Jan 20 15:10:16.603356 containerd[1658]: time="2026-01-20T15:10:16.603187516Z" level=info msg="StartContainer for \"5438134efafb9ec2d38f04c7e166d010db9887a0eb8267ba471a9df7ee65b76a\" returns successfully" Jan 20 15:10:16.635125 containerd[1658]: time="2026-01-20T15:10:16.634792066Z" level=info msg="StartContainer for \"002b8f1b39172ea064eb638cd68c00c79885d93e14ad88c8367f4e90ef35ea77\" returns successfully" Jan 20 15:10:16.652169 kubelet[2510]: W0120 15:10:16.652081 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 20 15:10:16.652922 kubelet[2510]: E0120 15:10:16.652237 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:10:16.738094 kubelet[2510]: W0120 15:10:16.738003 2510 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Jan 20 15:10:16.738094 kubelet[2510]: E0120 15:10:16.738066 2510 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Jan 20 15:10:17.189652 kubelet[2510]: I0120 15:10:17.189316 2510 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:10:17.520154 kubelet[2510]: E0120 15:10:17.519909 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:17.521392 kubelet[2510]: E0120 15:10:17.520957 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:17.526917 kubelet[2510]: E0120 15:10:17.526281 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:17.527259 kubelet[2510]: E0120 15:10:17.527237 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:17.529141 kubelet[2510]: E0120 15:10:17.529121 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:17.530279 kubelet[2510]: E0120 15:10:17.530142 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:18.603958 kubelet[2510]: E0120 15:10:18.603573 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:18.603958 kubelet[2510]: E0120 15:10:18.603793 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:18.603958 kubelet[2510]: E0120 15:10:18.604038 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:18.607629 kubelet[2510]: E0120 15:10:18.604316 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:18.607629 kubelet[2510]: E0120 15:10:18.604467 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:18.607629 kubelet[2510]: E0120 15:10:18.604632 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:19.708329 kubelet[2510]: E0120 15:10:19.708057 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:19.710302 kubelet[2510]: E0120 15:10:19.709063 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:19.710302 kubelet[2510]: E0120 15:10:19.709178 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:19.710302 kubelet[2510]: E0120 15:10:19.709513 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:20.814666 kubelet[2510]: E0120 15:10:20.814451 2510 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 15:10:20.814666 kubelet[2510]: E0120 15:10:20.814631 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:21.770905 kubelet[2510]: E0120 15:10:21.770082 2510 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 15:10:21.917950 kubelet[2510]: E0120 15:10:21.917113 2510 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c7904dc4756ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 15:10:15.432255231 +0000 UTC m=+0.589614391,LastTimestamp:2026-01-20 15:10:15.432255231 +0000 UTC m=+0.589614391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 15:10:21.948177 kubelet[2510]: I0120 15:10:21.948037 2510 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 15:10:21.949722 kubelet[2510]: I0120 15:10:21.949175 2510 kubelet.go:3194] 
"Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:21.983352 kubelet[2510]: E0120 15:10:21.983219 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:21.983656 kubelet[2510]: I0120 15:10:21.983456 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 15:10:21.985393 kubelet[2510]: E0120 15:10:21.985332 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 15:10:21.985393 kubelet[2510]: I0120 15:10:21.985348 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:21.987929 kubelet[2510]: E0120 15:10:21.987754 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:22.134772 kubelet[2510]: I0120 15:10:22.133975 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:22.138945 kubelet[2510]: E0120 15:10:22.138036 2510 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:22.138945 kubelet[2510]: E0120 15:10:22.138232 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:22.712691 kubelet[2510]: I0120 15:10:22.712126 
2510 apiserver.go:52] "Watching apiserver" Jan 20 15:10:22.746195 kubelet[2510]: I0120 15:10:22.746038 2510 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 15:10:25.356835 systemd[1]: Reload requested from client PID 2787 ('systemctl') (unit session-8.scope)... Jan 20 15:10:25.357000 systemd[1]: Reloading... Jan 20 15:10:25.529940 zram_generator::config[2833]: No configuration found. Jan 20 15:10:25.963674 kubelet[2510]: I0120 15:10:25.963350 2510 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:25.989435 kubelet[2510]: E0120 15:10:25.988368 2510 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:26.063314 systemd[1]: Reloading finished in 705 ms. Jan 20 15:10:26.111966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 15:10:26.134775 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 15:10:26.135636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:10:26.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:26.135806 systemd[1]: kubelet.service: Consumed 1.992s CPU time, 131M memory peak. Jan 20 15:10:26.146980 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 20 15:10:26.147232 kernel: audit: type=1131 audit(1768921826.134:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:26.147676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 15:10:26.148000 audit: BPF prog-id=117 op=LOAD Jan 20 15:10:26.161938 kernel: audit: type=1334 audit(1768921826.148:408): prog-id=117 op=LOAD Jan 20 15:10:26.148000 audit: BPF prog-id=80 op=UNLOAD Jan 20 15:10:26.160000 audit: BPF prog-id=118 op=LOAD Jan 20 15:10:26.170696 kernel: audit: type=1334 audit(1768921826.148:409): prog-id=80 op=UNLOAD Jan 20 15:10:26.170745 kernel: audit: type=1334 audit(1768921826.160:410): prog-id=118 op=LOAD Jan 20 15:10:26.170770 kernel: audit: type=1334 audit(1768921826.160:411): prog-id=71 op=UNLOAD Jan 20 15:10:26.170791 kernel: audit: type=1334 audit(1768921826.161:412): prog-id=119 op=LOAD Jan 20 15:10:26.170810 kernel: audit: type=1334 audit(1768921826.161:413): prog-id=120 op=LOAD Jan 20 15:10:26.170831 kernel: audit: type=1334 audit(1768921826.161:414): prog-id=72 op=UNLOAD Jan 20 15:10:26.170923 kernel: audit: type=1334 audit(1768921826.161:415): prog-id=73 op=UNLOAD Jan 20 15:10:26.160000 audit: BPF prog-id=71 op=UNLOAD Jan 20 15:10:26.161000 audit: BPF prog-id=119 op=LOAD Jan 20 15:10:26.161000 audit: BPF prog-id=120 op=LOAD Jan 20 15:10:26.161000 audit: BPF prog-id=72 op=UNLOAD Jan 20 15:10:26.161000 audit: BPF prog-id=73 op=UNLOAD Jan 20 15:10:26.162000 audit: BPF prog-id=121 op=LOAD Jan 20 15:10:26.188517 kernel: audit: type=1334 audit(1768921826.162:416): prog-id=121 op=LOAD Jan 20 15:10:26.162000 audit: BPF prog-id=74 op=UNLOAD Jan 20 15:10:26.162000 audit: BPF prog-id=122 op=LOAD Jan 20 15:10:26.162000 audit: BPF prog-id=123 op=LOAD Jan 20 15:10:26.162000 audit: BPF prog-id=75 op=UNLOAD Jan 20 15:10:26.162000 audit: BPF prog-id=76 op=UNLOAD Jan 20 15:10:26.165000 audit: BPF prog-id=124 op=LOAD Jan 20 15:10:26.165000 audit: BPF prog-id=67 op=UNLOAD Jan 20 15:10:26.166000 audit: BPF prog-id=125 op=LOAD Jan 20 15:10:26.166000 audit: BPF prog-id=86 op=UNLOAD Jan 20 15:10:26.166000 audit: BPF prog-id=126 op=LOAD Jan 20 15:10:26.166000 audit: BPF prog-id=127 op=LOAD Jan 20 15:10:26.166000 audit: BPF prog-id=81 
op=UNLOAD Jan 20 15:10:26.166000 audit: BPF prog-id=82 op=UNLOAD Jan 20 15:10:26.168000 audit: BPF prog-id=128 op=LOAD Jan 20 15:10:26.168000 audit: BPF prog-id=68 op=UNLOAD Jan 20 15:10:26.168000 audit: BPF prog-id=129 op=LOAD Jan 20 15:10:26.168000 audit: BPF prog-id=130 op=LOAD Jan 20 15:10:26.168000 audit: BPF prog-id=69 op=UNLOAD Jan 20 15:10:26.168000 audit: BPF prog-id=70 op=UNLOAD Jan 20 15:10:26.169000 audit: BPF prog-id=131 op=LOAD Jan 20 15:10:26.169000 audit: BPF prog-id=77 op=UNLOAD Jan 20 15:10:26.169000 audit: BPF prog-id=132 op=LOAD Jan 20 15:10:26.169000 audit: BPF prog-id=133 op=LOAD Jan 20 15:10:26.169000 audit: BPF prog-id=78 op=UNLOAD Jan 20 15:10:26.169000 audit: BPF prog-id=79 op=UNLOAD Jan 20 15:10:26.169000 audit: BPF prog-id=134 op=LOAD Jan 20 15:10:26.169000 audit: BPF prog-id=83 op=UNLOAD Jan 20 15:10:26.169000 audit: BPF prog-id=135 op=LOAD Jan 20 15:10:26.169000 audit: BPF prog-id=136 op=LOAD Jan 20 15:10:26.169000 audit: BPF prog-id=84 op=UNLOAD Jan 20 15:10:26.169000 audit: BPF prog-id=85 op=UNLOAD Jan 20 15:10:26.439383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 15:10:26.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:26.466416 (kubelet)[2881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 15:10:26.545121 kubelet[2881]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 15:10:26.545121 kubelet[2881]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 20 15:10:26.545121 kubelet[2881]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 15:10:26.545121 kubelet[2881]: I0120 15:10:26.545108 2881 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 15:10:26.554464 kubelet[2881]: I0120 15:10:26.554360 2881 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 15:10:26.554464 kubelet[2881]: I0120 15:10:26.554390 2881 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 15:10:26.554798 kubelet[2881]: I0120 15:10:26.554701 2881 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 15:10:26.556290 kubelet[2881]: I0120 15:10:26.556177 2881 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 15:10:26.561687 kubelet[2881]: I0120 15:10:26.561574 2881 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 15:10:26.570678 kubelet[2881]: I0120 15:10:26.570282 2881 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 15:10:26.577293 kubelet[2881]: I0120 15:10:26.576951 2881 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 15:10:26.577437 kubelet[2881]: I0120 15:10:26.577370 2881 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 15:10:26.577653 kubelet[2881]: I0120 15:10:26.577394 2881 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 15:10:26.578089 kubelet[2881]: I0120 15:10:26.577688 2881 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 20 15:10:26.578089 kubelet[2881]: I0120 15:10:26.577701 2881 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 15:10:26.578089 kubelet[2881]: I0120 15:10:26.577749 2881 state_mem.go:36] "Initialized new in-memory state store" Jan 20 15:10:26.578089 kubelet[2881]: I0120 15:10:26.577983 2881 kubelet.go:446] "Attempting to sync node with API server" Jan 20 15:10:26.578089 kubelet[2881]: I0120 15:10:26.578010 2881 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 15:10:26.578089 kubelet[2881]: I0120 15:10:26.578029 2881 kubelet.go:352] "Adding apiserver pod source" Jan 20 15:10:26.578089 kubelet[2881]: I0120 15:10:26.578039 2881 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 15:10:26.581913 kubelet[2881]: I0120 15:10:26.581533 2881 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 15:10:26.582335 kubelet[2881]: I0120 15:10:26.582193 2881 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 15:10:26.583129 kubelet[2881]: I0120 15:10:26.583053 2881 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 15:10:26.583392 kubelet[2881]: I0120 15:10:26.583183 2881 server.go:1287] "Started kubelet" Jan 20 15:10:26.583691 kubelet[2881]: I0120 15:10:26.583416 2881 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 15:10:26.584526 kubelet[2881]: I0120 15:10:26.584326 2881 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 15:10:26.584779 kubelet[2881]: I0120 15:10:26.584697 2881 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 15:10:26.588225 kubelet[2881]: I0120 15:10:26.588149 2881 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 15:10:26.592743 kubelet[2881]: I0120 15:10:26.592691 2881 
server.go:479] "Adding debug handlers to kubelet server" Jan 20 15:10:26.595945 kubelet[2881]: I0120 15:10:26.594995 2881 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 15:10:26.595945 kubelet[2881]: I0120 15:10:26.595406 2881 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 15:10:26.595945 kubelet[2881]: I0120 15:10:26.595784 2881 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 15:10:26.600029 kubelet[2881]: I0120 15:10:26.598262 2881 reconciler.go:26] "Reconciler: start to sync state" Jan 20 15:10:26.600029 kubelet[2881]: E0120 15:10:26.599389 2881 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 15:10:26.610056 kubelet[2881]: I0120 15:10:26.610030 2881 factory.go:221] Registration of the containerd container factory successfully Jan 20 15:10:26.610056 kubelet[2881]: I0120 15:10:26.610052 2881 factory.go:221] Registration of the systemd container factory successfully Jan 20 15:10:26.611058 kubelet[2881]: I0120 15:10:26.610299 2881 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 15:10:26.627031 kubelet[2881]: I0120 15:10:26.624177 2881 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 15:10:26.630912 kubelet[2881]: I0120 15:10:26.630726 2881 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 15:10:26.630912 kubelet[2881]: I0120 15:10:26.630782 2881 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 15:10:26.630912 kubelet[2881]: I0120 15:10:26.630801 2881 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 15:10:26.630912 kubelet[2881]: I0120 15:10:26.630809 2881 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 15:10:26.631090 kubelet[2881]: E0120 15:10:26.630918 2881 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 15:10:26.685402 kubelet[2881]: I0120 15:10:26.685084 2881 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 15:10:26.685402 kubelet[2881]: I0120 15:10:26.685103 2881 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 15:10:26.685402 kubelet[2881]: I0120 15:10:26.685120 2881 state_mem.go:36] "Initialized new in-memory state store" Jan 20 15:10:26.685402 kubelet[2881]: I0120 15:10:26.685261 2881 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 15:10:26.685402 kubelet[2881]: I0120 15:10:26.685275 2881 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 15:10:26.685402 kubelet[2881]: I0120 15:10:26.685298 2881 policy_none.go:49] "None policy: Start" Jan 20 15:10:26.685402 kubelet[2881]: I0120 15:10:26.685311 2881 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 15:10:26.685402 kubelet[2881]: I0120 15:10:26.685325 2881 state_mem.go:35] "Initializing new in-memory state store" Jan 20 15:10:26.685736 kubelet[2881]: I0120 15:10:26.685454 2881 state_mem.go:75] "Updated machine memory state" Jan 20 15:10:26.694243 kubelet[2881]: I0120 15:10:26.694102 2881 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 15:10:26.694380 kubelet[2881]: I0120 
15:10:26.694279 2881 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 15:10:26.694380 kubelet[2881]: I0120 15:10:26.694291 2881 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 15:10:26.695509 kubelet[2881]: I0120 15:10:26.694783 2881 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 15:10:26.700083 kubelet[2881]: E0120 15:10:26.699778 2881 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 15:10:26.732035 kubelet[2881]: I0120 15:10:26.731782 2881 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:26.732035 kubelet[2881]: I0120 15:10:26.732042 2881 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 15:10:26.732285 kubelet[2881]: I0120 15:10:26.732120 2881 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:26.755108 kubelet[2881]: E0120 15:10:26.755000 2881 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:26.799355 kubelet[2881]: I0120 15:10:26.799039 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 20 15:10:26.799355 kubelet[2881]: I0120 15:10:26.799135 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aba91f0f1e98240a43c4a72aaa9767ee-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"aba91f0f1e98240a43c4a72aaa9767ee\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:26.799355 kubelet[2881]: I0120 15:10:26.799171 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:26.799355 kubelet[2881]: I0120 15:10:26.799196 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:26.799355 kubelet[2881]: I0120 15:10:26.799223 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:26.799732 kubelet[2881]: I0120 15:10:26.799244 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:26.799732 kubelet[2881]: I0120 15:10:26.799270 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:26.799732 kubelet[2881]: I0120 15:10:26.799297 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aba91f0f1e98240a43c4a72aaa9767ee-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aba91f0f1e98240a43c4a72aaa9767ee\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:26.799732 kubelet[2881]: I0120 15:10:26.799318 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aba91f0f1e98240a43c4a72aaa9767ee-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aba91f0f1e98240a43c4a72aaa9767ee\") " pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:26.811524 kubelet[2881]: I0120 15:10:26.811482 2881 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 15:10:26.831479 kubelet[2881]: I0120 15:10:26.831394 2881 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 15:10:26.831656 kubelet[2881]: I0120 15:10:26.831511 2881 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 15:10:27.047561 kubelet[2881]: E0120 15:10:27.047478 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:27.054161 kubelet[2881]: E0120 15:10:27.054002 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:27.055688 kubelet[2881]: E0120 15:10:27.055625 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:27.579713 kubelet[2881]: I0120 15:10:27.579647 2881 apiserver.go:52] "Watching apiserver" Jan 20 15:10:27.597883 kubelet[2881]: I0120 15:10:27.597690 2881 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 15:10:27.664277 kubelet[2881]: I0120 15:10:27.664213 2881 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:27.664537 kubelet[2881]: I0120 15:10:27.664520 2881 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:27.665922 kubelet[2881]: E0120 15:10:27.665102 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:27.679718 kubelet[2881]: E0120 15:10:27.679641 2881 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 20 15:10:27.679803 kubelet[2881]: E0120 15:10:27.679792 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:27.687153 kubelet[2881]: E0120 15:10:27.687098 2881 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 15:10:27.687279 kubelet[2881]: E0120 15:10:27.687205 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:27.703530 kubelet[2881]: I0120 15:10:27.703318 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=2.703302642 podStartE2EDuration="2.703302642s" podCreationTimestamp="2026-01-20 15:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:27.68741436 +0000 UTC m=+1.215844233" watchObservedRunningTime="2026-01-20 15:10:27.703302642 +0000 UTC m=+1.231732515" Jan 20 15:10:27.717172 kubelet[2881]: I0120 15:10:27.717113 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.717102375 podStartE2EDuration="1.717102375s" podCreationTimestamp="2026-01-20 15:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:27.704115042 +0000 UTC m=+1.232544905" watchObservedRunningTime="2026-01-20 15:10:27.717102375 +0000 UTC m=+1.245532248" Jan 20 15:10:27.717172 kubelet[2881]: I0120 15:10:27.717166 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.717161596 podStartE2EDuration="1.717161596s" podCreationTimestamp="2026-01-20 15:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:27.717000612 +0000 UTC m=+1.245430485" watchObservedRunningTime="2026-01-20 15:10:27.717161596 +0000 UTC m=+1.245591469" Jan 20 15:10:28.675489 kubelet[2881]: E0120 15:10:28.675199 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:28.675489 kubelet[2881]: E0120 15:10:28.675225 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
15:10:28.675489 kubelet[2881]: E0120 15:10:28.675297 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:29.678725 kubelet[2881]: E0120 15:10:29.678561 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:30.306991 kubelet[2881]: I0120 15:10:30.306792 2881 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 15:10:30.307831 containerd[1658]: time="2026-01-20T15:10:30.307654560Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 15:10:30.309007 kubelet[2881]: I0120 15:10:30.308951 2881 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 15:10:31.189680 systemd[1]: Created slice kubepods-besteffort-podae9d3754_d37b_4f2e_8799_5d74210bb69b.slice - libcontainer container kubepods-besteffort-podae9d3754_d37b_4f2e_8799_5d74210bb69b.slice. 
Jan 20 15:10:31.234964 kubelet[2881]: I0120 15:10:31.234511 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae9d3754-d37b-4f2e-8799-5d74210bb69b-kube-proxy\") pod \"kube-proxy-rx9sj\" (UID: \"ae9d3754-d37b-4f2e-8799-5d74210bb69b\") " pod="kube-system/kube-proxy-rx9sj" Jan 20 15:10:31.234964 kubelet[2881]: I0120 15:10:31.234686 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae9d3754-d37b-4f2e-8799-5d74210bb69b-xtables-lock\") pod \"kube-proxy-rx9sj\" (UID: \"ae9d3754-d37b-4f2e-8799-5d74210bb69b\") " pod="kube-system/kube-proxy-rx9sj" Jan 20 15:10:31.234964 kubelet[2881]: I0120 15:10:31.234779 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae9d3754-d37b-4f2e-8799-5d74210bb69b-lib-modules\") pod \"kube-proxy-rx9sj\" (UID: \"ae9d3754-d37b-4f2e-8799-5d74210bb69b\") " pod="kube-system/kube-proxy-rx9sj" Jan 20 15:10:31.234964 kubelet[2881]: I0120 15:10:31.234803 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7zz7\" (UniqueName: \"kubernetes.io/projected/ae9d3754-d37b-4f2e-8799-5d74210bb69b-kube-api-access-c7zz7\") pod \"kube-proxy-rx9sj\" (UID: \"ae9d3754-d37b-4f2e-8799-5d74210bb69b\") " pod="kube-system/kube-proxy-rx9sj" Jan 20 15:10:31.415809 systemd[1]: Created slice kubepods-besteffort-pod7b006d32_61d6_47bb_aedc_164273b04b2b.slice - libcontainer container kubepods-besteffort-pod7b006d32_61d6_47bb_aedc_164273b04b2b.slice. 
Jan 20 15:10:31.441391 kubelet[2881]: I0120 15:10:31.441093 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdjhk\" (UniqueName: \"kubernetes.io/projected/7b006d32-61d6-47bb-aedc-164273b04b2b-kube-api-access-sdjhk\") pod \"tigera-operator-7dcd859c48-n9fz4\" (UID: \"7b006d32-61d6-47bb-aedc-164273b04b2b\") " pod="tigera-operator/tigera-operator-7dcd859c48-n9fz4" Jan 20 15:10:31.441391 kubelet[2881]: I0120 15:10:31.441197 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b006d32-61d6-47bb-aedc-164273b04b2b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-n9fz4\" (UID: \"7b006d32-61d6-47bb-aedc-164273b04b2b\") " pod="tigera-operator/tigera-operator-7dcd859c48-n9fz4" Jan 20 15:10:31.498995 kubelet[2881]: E0120 15:10:31.498927 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:31.499734 containerd[1658]: time="2026-01-20T15:10:31.499474025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rx9sj,Uid:ae9d3754-d37b-4f2e-8799-5d74210bb69b,Namespace:kube-system,Attempt:0,}" Jan 20 15:10:31.592140 containerd[1658]: time="2026-01-20T15:10:31.592043799Z" level=info msg="connecting to shim 5d17812a984e7b22b397934daa3d87038f3c9a4e88650ec8773dd7501b84a88d" address="unix:///run/containerd/s/e0d92656b82950bde004f87fb4b17d11a5fdc8f997abeba6019b133e896197e9" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:10:31.658365 systemd[1]: Started cri-containerd-5d17812a984e7b22b397934daa3d87038f3c9a4e88650ec8773dd7501b84a88d.scope - libcontainer container 5d17812a984e7b22b397934daa3d87038f3c9a4e88650ec8773dd7501b84a88d. 
Jan 20 15:10:31.703013 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 15:10:31.703154 kernel: audit: type=1334 audit(1768921831.697:449): prog-id=137 op=LOAD Jan 20 15:10:31.697000 audit: BPF prog-id=137 op=LOAD Jan 20 15:10:31.698000 audit: BPF prog-id=138 op=LOAD Jan 20 15:10:31.710939 kernel: audit: type=1334 audit(1768921831.698:450): prog-id=138 op=LOAD Jan 20 15:10:31.698000 audit[2955]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.724971 kernel: audit: type=1300 audit(1768921831.698:450): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.726013 containerd[1658]: time="2026-01-20T15:10:31.725756739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-n9fz4,Uid:7b006d32-61d6-47bb-aedc-164273b04b2b,Namespace:tigera-operator,Attempt:0,}" Jan 20 15:10:31.738949 kernel: audit: type=1327 audit(1768921831.698:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.698000 audit: BPF prog-id=138 
op=UNLOAD Jan 20 15:10:31.698000 audit[2955]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.758968 kernel: audit: type=1334 audit(1768921831.698:451): prog-id=138 op=UNLOAD Jan 20 15:10:31.759206 kernel: audit: type=1300 audit(1768921831.698:451): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.773988 kernel: audit: type=1327 audit(1768921831.698:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.698000 audit: BPF prog-id=139 op=LOAD Jan 20 15:10:31.778063 containerd[1658]: time="2026-01-20T15:10:31.776993813Z" level=info msg="connecting to shim 90a139126fa83224dc6089aa15e31509053eb594d9342dcc046db4efa035b385" address="unix:///run/containerd/s/62f4ee246b21173bb45f865a2fc315cba1111e7302912d3d0c41b93da8abea31" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:10:31.698000 audit[2955]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.792314 kernel: audit: type=1334 audit(1768921831.698:452): prog-id=139 op=LOAD Jan 20 15:10:31.793722 kernel: audit: type=1300 audit(1768921831.698:452): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.793792 kernel: audit: type=1327 audit(1768921831.698:452): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.698000 audit: BPF prog-id=140 op=LOAD Jan 20 15:10:31.698000 audit[2955]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.698000 audit: BPF prog-id=140 op=UNLOAD Jan 20 15:10:31.698000 audit[2955]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.699000 audit: BPF prog-id=139 op=UNLOAD Jan 20 15:10:31.699000 audit[2955]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.699000 audit: BPF prog-id=141 op=LOAD Jan 20 15:10:31.699000 audit[2955]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2943 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.699000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313738313261393834653762323262333937393334646161336438 Jan 20 15:10:31.814905 containerd[1658]: 
time="2026-01-20T15:10:31.814792201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rx9sj,Uid:ae9d3754-d37b-4f2e-8799-5d74210bb69b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d17812a984e7b22b397934daa3d87038f3c9a4e88650ec8773dd7501b84a88d\"" Jan 20 15:10:31.817247 kubelet[2881]: E0120 15:10:31.817174 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:31.856080 containerd[1658]: time="2026-01-20T15:10:31.855823897Z" level=info msg="CreateContainer within sandbox \"5d17812a984e7b22b397934daa3d87038f3c9a4e88650ec8773dd7501b84a88d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 15:10:31.871834 systemd[1]: Started cri-containerd-90a139126fa83224dc6089aa15e31509053eb594d9342dcc046db4efa035b385.scope - libcontainer container 90a139126fa83224dc6089aa15e31509053eb594d9342dcc046db4efa035b385. Jan 20 15:10:31.894173 containerd[1658]: time="2026-01-20T15:10:31.894117682Z" level=info msg="Container 6f9f533afb1c2bee33cc42104cf4143189cfacc6b929681ee7f934f7a0a0d15c: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:10:31.902000 audit: BPF prog-id=142 op=LOAD Jan 20 15:10:31.903000 audit: BPF prog-id=143 op=LOAD Jan 20 15:10:31.903000 audit[3001]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2990 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930613133393132366661383332323464633630383961613135653331 Jan 20 15:10:31.905000 audit: BPF prog-id=143 
op=UNLOAD Jan 20 15:10:31.905000 audit[3001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930613133393132366661383332323464633630383961613135653331 Jan 20 15:10:31.905000 audit: BPF prog-id=144 op=LOAD Jan 20 15:10:31.905000 audit[3001]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2990 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930613133393132366661383332323464633630383961613135653331 Jan 20 15:10:31.905000 audit: BPF prog-id=145 op=LOAD Jan 20 15:10:31.905000 audit[3001]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2990 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930613133393132366661383332323464633630383961613135653331 Jan 
20 15:10:31.905000 audit: BPF prog-id=145 op=UNLOAD Jan 20 15:10:31.905000 audit[3001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930613133393132366661383332323464633630383961613135653331 Jan 20 15:10:31.905000 audit: BPF prog-id=144 op=UNLOAD Jan 20 15:10:31.905000 audit[3001]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.905000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930613133393132366661383332323464633630383961613135653331 Jan 20 15:10:31.905000 audit: BPF prog-id=146 op=LOAD Jan 20 15:10:31.905000 audit[3001]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2990 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:31.905000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930613133393132366661383332323464633630383961613135653331 Jan 20 15:10:31.909076 containerd[1658]: time="2026-01-20T15:10:31.909043429Z" level=info msg="CreateContainer within sandbox \"5d17812a984e7b22b397934daa3d87038f3c9a4e88650ec8773dd7501b84a88d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f9f533afb1c2bee33cc42104cf4143189cfacc6b929681ee7f934f7a0a0d15c\"" Jan 20 15:10:31.911315 containerd[1658]: time="2026-01-20T15:10:31.911135543Z" level=info msg="StartContainer for \"6f9f533afb1c2bee33cc42104cf4143189cfacc6b929681ee7f934f7a0a0d15c\"" Jan 20 15:10:31.919719 containerd[1658]: time="2026-01-20T15:10:31.919596308Z" level=info msg="connecting to shim 6f9f533afb1c2bee33cc42104cf4143189cfacc6b929681ee7f934f7a0a0d15c" address="unix:///run/containerd/s/e0d92656b82950bde004f87fb4b17d11a5fdc8f997abeba6019b133e896197e9" protocol=ttrpc version=3 Jan 20 15:10:32.035658 systemd[1]: Started cri-containerd-6f9f533afb1c2bee33cc42104cf4143189cfacc6b929681ee7f934f7a0a0d15c.scope - libcontainer container 6f9f533afb1c2bee33cc42104cf4143189cfacc6b929681ee7f934f7a0a0d15c. 
Jan 20 15:10:32.054592 containerd[1658]: time="2026-01-20T15:10:32.054527368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-n9fz4,Uid:7b006d32-61d6-47bb-aedc-164273b04b2b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"90a139126fa83224dc6089aa15e31509053eb594d9342dcc046db4efa035b385\"" Jan 20 15:10:32.059644 containerd[1658]: time="2026-01-20T15:10:32.059298082Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 15:10:32.124000 audit: BPF prog-id=147 op=LOAD Jan 20 15:10:32.124000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2943 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396635333361666231633262656533336363343231303463663431 Jan 20 15:10:32.124000 audit: BPF prog-id=148 op=LOAD Jan 20 15:10:32.124000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2943 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396635333361666231633262656533336363343231303463663431 Jan 20 15:10:32.124000 audit: BPF prog-id=148 op=UNLOAD Jan 20 15:10:32.124000 audit[3021]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 
a1=0 a2=0 a3=0 items=0 ppid=2943 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396635333361666231633262656533336363343231303463663431 Jan 20 15:10:32.124000 audit: BPF prog-id=147 op=UNLOAD Jan 20 15:10:32.124000 audit[3021]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2943 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396635333361666231633262656533336363343231303463663431 Jan 20 15:10:32.124000 audit: BPF prog-id=149 op=LOAD Jan 20 15:10:32.124000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2943 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396635333361666231633262656533336363343231303463663431 Jan 20 15:10:32.162386 containerd[1658]: time="2026-01-20T15:10:32.162251079Z" level=info msg="StartContainer for 
\"6f9f533afb1c2bee33cc42104cf4143189cfacc6b929681ee7f934f7a0a0d15c\" returns successfully" Jan 20 15:10:32.527000 audit[3093]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.527000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb65bef70 a2=0 a3=7ffeb65bef5c items=0 ppid=3041 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.527000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 15:10:32.532000 audit[3092]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.532000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcacd2f3a0 a2=0 a3=7ffcacd2f38c items=0 ppid=3041 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 15:10:32.536000 audit[3094]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.536000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd21d301d0 a2=0 a3=7ffd21d301bc items=0 ppid=3041 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.536000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 15:10:32.541000 audit[3096]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.541000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa7350fa0 a2=0 a3=7fffa7350f8c items=0 ppid=3041 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.541000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 15:10:32.544000 audit[3095]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.544000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9a441e60 a2=0 a3=7ffe9a441e4c items=0 ppid=3041 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 15:10:32.547000 audit[3098]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.547000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa681cdb0 a2=0 a3=7fffa681cd9c items=0 ppid=3041 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 15:10:32.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 15:10:32.636000 audit[3099]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.636000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc4cf15d00 a2=0 a3=7ffc4cf15cec items=0 ppid=3041 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 15:10:32.643000 audit[3101]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.643000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffcc956ba0 a2=0 a3=7fffcc956b8c items=0 ppid=3041 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 15:10:32.651000 audit[3104]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.651000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe35a27920 a2=0 a3=7ffe35a2790c items=0 
ppid=3041 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.651000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 15:10:32.654000 audit[3105]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.654000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe02a2aad0 a2=0 a3=7ffe02a2aabc items=0 ppid=3041 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.654000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 15:10:32.661000 audit[3107]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.661000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe39b30570 a2=0 a3=7ffe39b3055c items=0 ppid=3041 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.661000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 15:10:32.664000 audit[3108]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.664000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef03419c0 a2=0 a3=7ffef03419ac items=0 ppid=3041 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 15:10:32.671000 audit[3110]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.671000 audit[3110]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff4b61e5e0 a2=0 a3=7fff4b61e5cc items=0 ppid=3041 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 15:10:32.681000 audit[3113]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.681000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffc04b051e0 a2=0 a3=7ffc04b051cc items=0 ppid=3041 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 15:10:32.684000 audit[3114]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.684000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe952db450 a2=0 a3=7ffe952db43c items=0 ppid=3041 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 15:10:32.691593 kubelet[2881]: E0120 15:10:32.691484 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:32.692000 audit[3116]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.692000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff09bf2df0 a2=0 a3=7fff09bf2ddc items=0 ppid=3041 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 15:10:32.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 15:10:32.696000 audit[3117]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.696000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8f66be50 a2=0 a3=7fff8f66be3c items=0 ppid=3041 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.696000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 15:10:32.708000 audit[3119]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.708000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffba0604f0 a2=0 a3=7fffba0604dc items=0 ppid=3041 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 15:10:32.720000 audit[3122]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.720000 audit[3122]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe88543fa0 a2=0 a3=7ffe88543f8c items=0 ppid=3041 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 15:10:32.736000 audit[3129]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.736000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcaa065e90 a2=0 a3=7ffcaa065e7c items=0 ppid=3041 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 15:10:32.740000 audit[3130]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.740000 audit[3130]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee27fa2b0 a2=0 a3=7ffee27fa29c items=0 ppid=3041 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.740000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 15:10:32.747000 audit[3132]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.747000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffedae9cdb0 a2=0 a3=7ffedae9cd9c items=0 ppid=3041 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 15:10:32.763000 audit[3135]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.763000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd60051450 a2=0 a3=7ffd6005143c items=0 ppid=3041 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 15:10:32.770000 audit[3136]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.770000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe032a5cf0 a2=0 a3=7ffe032a5cdc items=0 ppid=3041 pid=3136 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 15:10:32.787000 audit[3138]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 15:10:32.787000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe72e52510 a2=0 a3=7ffe72e524fc items=0 ppid=3041 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 15:10:32.813111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32843517.mount: Deactivated successfully. 
Jan 20 15:10:32.839000 audit[3144]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:32.839000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd55ba1730 a2=0 a3=7ffd55ba171c items=0 ppid=3041 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.839000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:32.856000 audit[3144]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:32.856000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd55ba1730 a2=0 a3=7ffd55ba171c items=0 ppid=3041 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.856000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:32.859000 audit[3149]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.859000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc61fa9a50 a2=0 a3=7ffc61fa9a3c items=0 ppid=3041 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.859000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 15:10:32.866000 audit[3151]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.866000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffc211ca30 a2=0 a3=7fffc211ca1c items=0 ppid=3041 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 15:10:32.876000 audit[3154]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.876000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe2dd6a730 a2=0 a3=7ffe2dd6a71c items=0 ppid=3041 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 15:10:32.879000 audit[3155]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.879000 audit[3155]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff54113c80 a2=0 a3=7fff54113c6c items=0 ppid=3041 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 15:10:32.887000 audit[3157]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.887000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe486f8000 a2=0 a3=7ffe486f7fec items=0 ppid=3041 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 15:10:32.889000 audit[3158]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.889000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddd4fde90 a2=0 a3=7ffddd4fde7c items=0 ppid=3041 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.889000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 15:10:32.897000 audit[3160]: 
NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.897000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff97f14200 a2=0 a3=7fff97f141ec items=0 ppid=3041 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 15:10:32.907000 audit[3163]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.907000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffeb3bbca50 a2=0 a3=7ffeb3bbca3c items=0 ppid=3041 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 15:10:32.910000 audit[3164]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.910000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc38ba190 a2=0 a3=7fffc38ba17c items=0 ppid=3041 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.910000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 15:10:32.916000 audit[3166]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.916000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb15f1e30 a2=0 a3=7ffcb15f1e1c items=0 ppid=3041 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 15:10:32.919000 audit[3167]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.919000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1ee736c0 a2=0 a3=7ffe1ee736ac items=0 ppid=3041 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 15:10:32.926000 audit[3169]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.926000 audit[3169]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6ae47450 a2=0 a3=7ffc6ae4743c items=0 ppid=3041 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 15:10:32.937000 audit[3172]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.937000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9a8f8890 a2=0 a3=7fff9a8f887c items=0 ppid=3041 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 15:10:32.947000 audit[3175]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.947000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3f9f5340 a2=0 a3=7ffc3f9f532c items=0 ppid=3041 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.947000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 15:10:32.951000 audit[3176]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.951000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcaa6eda80 a2=0 a3=7ffcaa6eda6c items=0 ppid=3041 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 15:10:32.957000 audit[3178]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.957000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe501b65d0 a2=0 a3=7ffe501b65bc items=0 ppid=3041 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 15:10:32.970000 audit[3181]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.970000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf880ec50 
a2=0 a3=7ffdf880ec3c items=0 ppid=3041 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.970000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 15:10:32.973000 audit[3182]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.973000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9c449b70 a2=0 a3=7ffc9c449b5c items=0 ppid=3041 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 15:10:32.979000 audit[3184]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.979000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd10b45da0 a2=0 a3=7ffd10b45d8c items=0 ppid=3041 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 
15:10:32.982000 audit[3185]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.982000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea7db8810 a2=0 a3=7ffea7db87fc items=0 ppid=3041 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 15:10:32.990000 audit[3187]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:32.990000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff24d9ee50 a2=0 a3=7fff24d9ee3c items=0 ppid=3041 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:32.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 15:10:33.000000 audit[3190]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 15:10:33.000000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd83120420 a2=0 a3=7ffd8312040c items=0 ppid=3041 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:33.000000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 15:10:33.007000 audit[3192]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 15:10:33.007000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff5fba6d30 a2=0 a3=7fff5fba6d1c items=0 ppid=3041 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:33.007000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:33.008000 audit[3192]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 15:10:33.008000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff5fba6d30 a2=0 a3=7fff5fba6d1c items=0 ppid=3041 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:33.008000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:33.648260 kubelet[2881]: E0120 15:10:33.647987 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:33.707425 kubelet[2881]: E0120 15:10:33.707192 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 
15:10:33.720127 kubelet[2881]: I0120 15:10:33.720029 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rx9sj" podStartSLOduration=2.720006526 podStartE2EDuration="2.720006526s" podCreationTimestamp="2026-01-20 15:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:10:32.717064864 +0000 UTC m=+6.245494738" watchObservedRunningTime="2026-01-20 15:10:33.720006526 +0000 UTC m=+7.248436399" Jan 20 15:10:34.295584 containerd[1658]: time="2026-01-20T15:10:34.295191596Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:34.305019 containerd[1658]: time="2026-01-20T15:10:34.298348792Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 20 15:10:34.308728 containerd[1658]: time="2026-01-20T15:10:34.307306913Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:34.338179 containerd[1658]: time="2026-01-20T15:10:34.337984253Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:34.339146 containerd[1658]: time="2026-01-20T15:10:34.339113306Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.279770972s" Jan 20 15:10:34.339146 containerd[1658]: time="2026-01-20T15:10:34.339158621Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 15:10:34.342670 containerd[1658]: time="2026-01-20T15:10:34.342507043Z" level=info msg="CreateContainer within sandbox \"90a139126fa83224dc6089aa15e31509053eb594d9342dcc046db4efa035b385\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 15:10:34.370003 containerd[1658]: time="2026-01-20T15:10:34.369783906Z" level=info msg="Container b26004be1530029e6feab0c35fe1d9295fa85738ccc91fa8fc0ea6f77aea17fe: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:10:34.379674 containerd[1658]: time="2026-01-20T15:10:34.379509848Z" level=info msg="CreateContainer within sandbox \"90a139126fa83224dc6089aa15e31509053eb594d9342dcc046db4efa035b385\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b26004be1530029e6feab0c35fe1d9295fa85738ccc91fa8fc0ea6f77aea17fe\"" Jan 20 15:10:34.380794 containerd[1658]: time="2026-01-20T15:10:34.380728210Z" level=info msg="StartContainer for \"b26004be1530029e6feab0c35fe1d9295fa85738ccc91fa8fc0ea6f77aea17fe\"" Jan 20 15:10:34.382350 containerd[1658]: time="2026-01-20T15:10:34.382234888Z" level=info msg="connecting to shim b26004be1530029e6feab0c35fe1d9295fa85738ccc91fa8fc0ea6f77aea17fe" address="unix:///run/containerd/s/62f4ee246b21173bb45f865a2fc315cba1111e7302912d3d0c41b93da8abea31" protocol=ttrpc version=3 Jan 20 15:10:34.420194 systemd[1]: Started cri-containerd-b26004be1530029e6feab0c35fe1d9295fa85738ccc91fa8fc0ea6f77aea17fe.scope - libcontainer container b26004be1530029e6feab0c35fe1d9295fa85738ccc91fa8fc0ea6f77aea17fe. 
Jan 20 15:10:34.442000 audit: BPF prog-id=150 op=LOAD Jan 20 15:10:34.443000 audit: BPF prog-id=151 op=LOAD Jan 20 15:10:34.443000 audit[3197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2990 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:34.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232363030346265313533303032396536666561623063333566653164 Jan 20 15:10:34.443000 audit: BPF prog-id=151 op=UNLOAD Jan 20 15:10:34.443000 audit[3197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:34.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232363030346265313533303032396536666561623063333566653164 Jan 20 15:10:34.443000 audit: BPF prog-id=152 op=LOAD Jan 20 15:10:34.443000 audit[3197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2990 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:34.443000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232363030346265313533303032396536666561623063333566653164 Jan 20 15:10:34.443000 audit: BPF prog-id=153 op=LOAD Jan 20 15:10:34.443000 audit[3197]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2990 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:34.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232363030346265313533303032396536666561623063333566653164 Jan 20 15:10:34.443000 audit: BPF prog-id=153 op=UNLOAD Jan 20 15:10:34.443000 audit[3197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:34.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232363030346265313533303032396536666561623063333566653164 Jan 20 15:10:34.443000 audit: BPF prog-id=152 op=UNLOAD Jan 20 15:10:34.443000 audit[3197]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2990 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
15:10:34.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232363030346265313533303032396536666561623063333566653164 Jan 20 15:10:34.443000 audit: BPF prog-id=154 op=LOAD Jan 20 15:10:34.443000 audit[3197]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2990 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:34.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232363030346265313533303032396536666561623063333566653164 Jan 20 15:10:34.489336 containerd[1658]: time="2026-01-20T15:10:34.489250543Z" level=info msg="StartContainer for \"b26004be1530029e6feab0c35fe1d9295fa85738ccc91fa8fc0ea6f77aea17fe\" returns successfully" Jan 20 15:10:34.732034 kubelet[2881]: I0120 15:10:34.730754 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-n9fz4" podStartSLOduration=1.448634137 podStartE2EDuration="3.730731933s" podCreationTimestamp="2026-01-20 15:10:31 +0000 UTC" firstStartedPulling="2026-01-20 15:10:32.05813784 +0000 UTC m=+5.586567713" lastFinishedPulling="2026-01-20 15:10:34.340235635 +0000 UTC m=+7.868665509" observedRunningTime="2026-01-20 15:10:34.728405058 +0000 UTC m=+8.256834941" watchObservedRunningTime="2026-01-20 15:10:34.730731933 +0000 UTC m=+8.259161806" Jan 20 15:10:34.981156 kubelet[2881]: E0120 15:10:34.981003 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:35.713284 kubelet[2881]: E0120 15:10:35.712834 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:41.477157 sudo[1878]: pam_unix(sudo:session): session closed for user root Jan 20 15:10:41.476000 audit[1878]: USER_END pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 15:10:41.479949 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 20 15:10:41.480053 kernel: audit: type=1106 audit(1768921841.476:529): pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 15:10:41.476000 audit[1878]: CRED_DISP pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 15:10:41.491367 sshd-session[1873]: pam_unix(sshd:session): session closed for user core Jan 20 15:10:41.492359 sshd[1877]: Connection closed by 10.0.0.1 port 52530 Jan 20 15:10:41.498283 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:52530.service: Deactivated successfully. Jan 20 15:10:41.499033 kernel: audit: type=1104 audit(1768921841.476:530): pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 15:10:41.499085 kernel: audit: type=1106 audit(1768921841.493:531): pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:10:41.493000 audit[1873]: USER_END pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:10:41.502202 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 15:10:41.502711 systemd[1]: session-8.scope: Consumed 10.214s CPU time, 197.5M memory peak. Jan 20 15:10:41.505243 systemd-logind[1631]: Session 8 logged out. Waiting for processes to exit. Jan 20 15:10:41.507449 systemd-logind[1631]: Removed session 8. Jan 20 15:10:41.494000 audit[1873]: CRED_DISP pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:10:41.529977 kernel: audit: type=1104 audit(1768921841.494:532): pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:10:41.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:52530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:10:41.544972 kernel: audit: type=1131 audit(1768921841.498:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.116:22-10.0.0.1:52530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:10:41.910000 audit[3289]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:41.921151 kernel: audit: type=1325 audit(1768921841.910:534): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:41.910000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff18d783e0 a2=0 a3=7fff18d783cc items=0 ppid=3041 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:41.940975 kernel: audit: type=1300 audit(1768921841.910:534): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff18d783e0 a2=0 a3=7fff18d783cc items=0 ppid=3041 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:41.910000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:41.952000 audit[3289]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:41.962826 kernel: audit: type=1327 audit(1768921841.910:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:41.963081 kernel: audit: type=1325 
audit(1768921841.952:535): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:41.952000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff18d783e0 a2=0 a3=0 items=0 ppid=3041 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:41.979931 kernel: audit: type=1300 audit(1768921841.952:535): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff18d783e0 a2=0 a3=0 items=0 ppid=3041 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:41.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:42.003000 audit[3291]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:42.003000 audit[3291]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd14cd0aa0 a2=0 a3=7ffd14cd0a8c items=0 ppid=3041 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:42.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:42.012000 audit[3291]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:42.012000 audit[3291]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 
a1=7ffd14cd0aa0 a2=0 a3=0 items=0 ppid=3041 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:42.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:43.939000 audit[3293]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:43.939000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffee7e5fa0 a2=0 a3=7fffee7e5f8c items=0 ppid=3041 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:43.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:43.951000 audit[3293]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:43.951000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffee7e5fa0 a2=0 a3=0 items=0 ppid=3041 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:43.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:43.980000 audit[3295]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:43.980000 audit[3295]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe9980f6a0 a2=0 a3=7ffe9980f68c items=0 ppid=3041 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:43.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:43.990000 audit[3295]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:43.990000 audit[3295]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe9980f6a0 a2=0 a3=0 items=0 ppid=3041 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:43.990000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:45.010000 audit[3297]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:45.010000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe48857fa0 a2=0 a3=7ffe48857f8c items=0 ppid=3041 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:45.010000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:45.014000 audit[3297]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 20 15:10:45.014000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe48857fa0 a2=0 a3=0 items=0 ppid=3041 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:45.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:45.894755 systemd[1]: Created slice kubepods-besteffort-pod402a28a4_562b_4205_b590_fb255e441659.slice - libcontainer container kubepods-besteffort-pod402a28a4_562b_4205_b590_fb255e441659.slice. Jan 20 15:10:45.980472 kubelet[2881]: I0120 15:10:45.980404 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/402a28a4-562b-4205-b590-fb255e441659-typha-certs\") pod \"calico-typha-7d4c556976-j6zgk\" (UID: \"402a28a4-562b-4205-b590-fb255e441659\") " pod="calico-system/calico-typha-7d4c556976-j6zgk" Jan 20 15:10:45.980472 kubelet[2881]: I0120 15:10:45.980448 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/402a28a4-562b-4205-b590-fb255e441659-tigera-ca-bundle\") pod \"calico-typha-7d4c556976-j6zgk\" (UID: \"402a28a4-562b-4205-b590-fb255e441659\") " pod="calico-system/calico-typha-7d4c556976-j6zgk" Jan 20 15:10:45.980472 kubelet[2881]: I0120 15:10:45.980470 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xfz6\" (UniqueName: \"kubernetes.io/projected/402a28a4-562b-4205-b590-fb255e441659-kube-api-access-5xfz6\") pod \"calico-typha-7d4c556976-j6zgk\" (UID: \"402a28a4-562b-4205-b590-fb255e441659\") " pod="calico-system/calico-typha-7d4c556976-j6zgk" Jan 20 15:10:45.995000 
audit[3300]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:45.995000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe7da740d0 a2=0 a3=7ffe7da740bc items=0 ppid=3041 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:45.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:46.003000 audit[3300]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:46.003000 audit[3300]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7da740d0 a2=0 a3=0 items=0 ppid=3041 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:46.050755 systemd[1]: Created slice kubepods-besteffort-pode8090ca6_1f98_4a25_af56_52533a3f0a29.slice - libcontainer container kubepods-besteffort-pode8090ca6_1f98_4a25_af56_52533a3f0a29.slice. 
Jan 20 15:10:46.080901 kubelet[2881]: I0120 15:10:46.080769 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e8090ca6-1f98-4a25-af56-52533a3f0a29-cni-bin-dir\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.080901 kubelet[2881]: I0120 15:10:46.080906 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvhvf\" (UniqueName: \"kubernetes.io/projected/e8090ca6-1f98-4a25-af56-52533a3f0a29-kube-api-access-tvhvf\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081038 kubelet[2881]: I0120 15:10:46.080930 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e8090ca6-1f98-4a25-af56-52533a3f0a29-cni-net-dir\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081038 kubelet[2881]: I0120 15:10:46.080948 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8090ca6-1f98-4a25-af56-52533a3f0a29-tigera-ca-bundle\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081038 kubelet[2881]: I0120 15:10:46.080964 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8090ca6-1f98-4a25-af56-52533a3f0a29-var-lib-calico\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081038 kubelet[2881]: I0120 
15:10:46.080990 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e8090ca6-1f98-4a25-af56-52533a3f0a29-flexvol-driver-host\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081038 kubelet[2881]: I0120 15:10:46.081008 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e8090ca6-1f98-4a25-af56-52533a3f0a29-policysync\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081214 kubelet[2881]: I0120 15:10:46.081025 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e8090ca6-1f98-4a25-af56-52533a3f0a29-node-certs\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081214 kubelet[2881]: I0120 15:10:46.081040 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e8090ca6-1f98-4a25-af56-52533a3f0a29-var-run-calico\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081214 kubelet[2881]: I0120 15:10:46.081057 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e8090ca6-1f98-4a25-af56-52533a3f0a29-cni-log-dir\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081214 kubelet[2881]: I0120 15:10:46.081074 2881 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8090ca6-1f98-4a25-af56-52533a3f0a29-lib-modules\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.081214 kubelet[2881]: I0120 15:10:46.081089 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8090ca6-1f98-4a25-af56-52533a3f0a29-xtables-lock\") pod \"calico-node-vkjpc\" (UID: \"e8090ca6-1f98-4a25-af56-52533a3f0a29\") " pod="calico-system/calico-node-vkjpc" Jan 20 15:10:46.185364 kubelet[2881]: E0120 15:10:46.184809 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.185887 kubelet[2881]: W0120 15:10:46.185593 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.186038 kubelet[2881]: E0120 15:10:46.186017 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.193288 kubelet[2881]: E0120 15:10:46.193189 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.193348 kubelet[2881]: W0120 15:10:46.193299 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.193348 kubelet[2881]: E0120 15:10:46.193327 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.201939 kubelet[2881]: E0120 15:10:46.201825 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:46.204513 containerd[1658]: time="2026-01-20T15:10:46.204444572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d4c556976-j6zgk,Uid:402a28a4-562b-4205-b590-fb255e441659,Namespace:calico-system,Attempt:0,}" Jan 20 15:10:46.207354 kubelet[2881]: E0120 15:10:46.207261 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.207354 kubelet[2881]: W0120 15:10:46.207288 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.207354 kubelet[2881]: E0120 15:10:46.207311 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.256181 containerd[1658]: time="2026-01-20T15:10:46.256074182Z" level=info msg="connecting to shim 47a149b0c686ace22acc322c786ba0020a95d3688ffdc7d9915834a93a32e5fc" address="unix:///run/containerd/s/1325026c2ffad42a76c1409adf61dfcb63e4707833e10bc77f26da717b91baa4" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:10:46.269048 kubelet[2881]: E0120 15:10:46.266919 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:10:46.283470 kubelet[2881]: E0120 15:10:46.283436 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.283628 kubelet[2881]: W0120 15:10:46.283609 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.283926 kubelet[2881]: E0120 15:10:46.283771 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.285929 kubelet[2881]: E0120 15:10:46.284632 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.285929 kubelet[2881]: W0120 15:10:46.284648 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.285929 kubelet[2881]: E0120 15:10:46.284724 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.286552 kubelet[2881]: E0120 15:10:46.286536 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.287018 kubelet[2881]: W0120 15:10:46.286625 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.287018 kubelet[2881]: E0120 15:10:46.286649 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.288959 kubelet[2881]: E0120 15:10:46.288828 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.289135 kubelet[2881]: W0120 15:10:46.289117 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.289396 kubelet[2881]: E0120 15:10:46.289210 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.290986 kubelet[2881]: E0120 15:10:46.290966 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.292024 kubelet[2881]: W0120 15:10:46.291991 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.292981 kubelet[2881]: E0120 15:10:46.292418 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.293825 kubelet[2881]: I0120 15:10:46.293739 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4e14075-1569-42bc-b38f-776a269a4fcd-kubelet-dir\") pod \"csi-node-driver-jr4nz\" (UID: \"c4e14075-1569-42bc-b38f-776a269a4fcd\") " pod="calico-system/csi-node-driver-jr4nz" Jan 20 15:10:46.294923 kubelet[2881]: E0120 15:10:46.294832 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.294984 kubelet[2881]: W0120 15:10:46.294923 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.295152 kubelet[2881]: E0120 15:10:46.295078 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.296837 kubelet[2881]: E0120 15:10:46.296819 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.297326 kubelet[2881]: W0120 15:10:46.297211 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.298067 kubelet[2881]: E0120 15:10:46.298009 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.300408 kubelet[2881]: E0120 15:10:46.300217 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.300648 kubelet[2881]: W0120 15:10:46.300509 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.301347 kubelet[2881]: E0120 15:10:46.301267 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.302375 kubelet[2881]: E0120 15:10:46.302153 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.302818 kubelet[2881]: W0120 15:10:46.302478 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.303578 kubelet[2881]: E0120 15:10:46.303473 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.305178 kubelet[2881]: E0120 15:10:46.305025 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.305223 kubelet[2881]: W0120 15:10:46.305141 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.305261 kubelet[2881]: E0120 15:10:46.305248 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.306057 kubelet[2881]: E0120 15:10:46.305993 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.306510 kubelet[2881]: W0120 15:10:46.306378 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.306510 kubelet[2881]: E0120 15:10:46.306466 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.308786 kubelet[2881]: E0120 15:10:46.308623 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.309169 kubelet[2881]: W0120 15:10:46.309105 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.309169 kubelet[2881]: E0120 15:10:46.309127 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.315389 kubelet[2881]: E0120 15:10:46.314787 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.316281 kubelet[2881]: W0120 15:10:46.315790 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.317919 kubelet[2881]: E0120 15:10:46.316587 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.322636 kubelet[2881]: E0120 15:10:46.322192 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.322636 kubelet[2881]: W0120 15:10:46.322349 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.322636 kubelet[2881]: E0120 15:10:46.322451 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.325488 kubelet[2881]: E0120 15:10:46.325275 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.325488 kubelet[2881]: W0120 15:10:46.325291 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.325966 kubelet[2881]: E0120 15:10:46.325616 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.331066 kubelet[2881]: E0120 15:10:46.330804 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.331066 kubelet[2881]: W0120 15:10:46.330822 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.332402 kubelet[2881]: E0120 15:10:46.332376 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.334539 kubelet[2881]: E0120 15:10:46.334308 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.334539 kubelet[2881]: W0120 15:10:46.334344 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.334539 kubelet[2881]: E0120 15:10:46.334452 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.337777 kubelet[2881]: E0120 15:10:46.337621 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.337777 kubelet[2881]: W0120 15:10:46.337641 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.337777 kubelet[2881]: E0120 15:10:46.337703 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.338406 kubelet[2881]: E0120 15:10:46.338300 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.338406 kubelet[2881]: W0120 15:10:46.338316 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.338406 kubelet[2881]: E0120 15:10:46.338329 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.339268 kubelet[2881]: E0120 15:10:46.339133 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.339769 kubelet[2881]: W0120 15:10:46.339435 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.339769 kubelet[2881]: E0120 15:10:46.339717 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.340304 kubelet[2881]: E0120 15:10:46.340290 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.340377 kubelet[2881]: W0120 15:10:46.340364 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.340452 kubelet[2881]: E0120 15:10:46.340439 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.341080 kubelet[2881]: E0120 15:10:46.341000 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.341080 kubelet[2881]: W0120 15:10:46.341015 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.341080 kubelet[2881]: E0120 15:10:46.341026 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.341393 kubelet[2881]: E0120 15:10:46.341380 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.341550 kubelet[2881]: W0120 15:10:46.341456 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.341550 kubelet[2881]: E0120 15:10:46.341472 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.354794 kubelet[2881]: E0120 15:10:46.354723 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:46.356528 containerd[1658]: time="2026-01-20T15:10:46.356413562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vkjpc,Uid:e8090ca6-1f98-4a25-af56-52533a3f0a29,Namespace:calico-system,Attempt:0,}" Jan 20 15:10:46.357728 systemd[1]: Started cri-containerd-47a149b0c686ace22acc322c786ba0020a95d3688ffdc7d9915834a93a32e5fc.scope - libcontainer container 47a149b0c686ace22acc322c786ba0020a95d3688ffdc7d9915834a93a32e5fc. Jan 20 15:10:46.396287 kubelet[2881]: E0120 15:10:46.396255 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.396827 kubelet[2881]: W0120 15:10:46.396424 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.396827 kubelet[2881]: E0120 15:10:46.396455 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.396827 kubelet[2881]: I0120 15:10:46.396493 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdjx4\" (UniqueName: \"kubernetes.io/projected/c4e14075-1569-42bc-b38f-776a269a4fcd-kube-api-access-hdjx4\") pod \"csi-node-driver-jr4nz\" (UID: \"c4e14075-1569-42bc-b38f-776a269a4fcd\") " pod="calico-system/csi-node-driver-jr4nz" Jan 20 15:10:46.398462 kubelet[2881]: E0120 15:10:46.398205 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.398462 kubelet[2881]: W0120 15:10:46.398265 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.398462 kubelet[2881]: E0120 15:10:46.398295 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.398462 kubelet[2881]: I0120 15:10:46.398323 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c4e14075-1569-42bc-b38f-776a269a4fcd-registration-dir\") pod \"csi-node-driver-jr4nz\" (UID: \"c4e14075-1569-42bc-b38f-776a269a4fcd\") " pod="calico-system/csi-node-driver-jr4nz" Jan 20 15:10:46.400691 kubelet[2881]: E0120 15:10:46.400563 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.401277 kubelet[2881]: W0120 15:10:46.400743 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.401277 kubelet[2881]: E0120 15:10:46.400972 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.401277 kubelet[2881]: I0120 15:10:46.401005 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c4e14075-1569-42bc-b38f-776a269a4fcd-socket-dir\") pod \"csi-node-driver-jr4nz\" (UID: \"c4e14075-1569-42bc-b38f-776a269a4fcd\") " pod="calico-system/csi-node-driver-jr4nz" Jan 20 15:10:46.406049 kubelet[2881]: E0120 15:10:46.404065 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.406049 kubelet[2881]: W0120 15:10:46.404082 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.406049 kubelet[2881]: E0120 15:10:46.405060 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.406049 kubelet[2881]: E0120 15:10:46.405632 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.406049 kubelet[2881]: W0120 15:10:46.405643 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.406279 kubelet[2881]: E0120 15:10:46.406236 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.406937 kubelet[2881]: E0120 15:10:46.406794 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.407910 kubelet[2881]: W0120 15:10:46.407495 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.407910 kubelet[2881]: E0120 15:10:46.407746 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.407910 kubelet[2881]: I0120 15:10:46.407768 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c4e14075-1569-42bc-b38f-776a269a4fcd-varrun\") pod \"csi-node-driver-jr4nz\" (UID: \"c4e14075-1569-42bc-b38f-776a269a4fcd\") " pod="calico-system/csi-node-driver-jr4nz" Jan 20 15:10:46.409155 kubelet[2881]: E0120 15:10:46.408813 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.409155 kubelet[2881]: W0120 15:10:46.409055 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.410128 kubelet[2881]: E0120 15:10:46.410094 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.413207 kubelet[2881]: E0120 15:10:46.413025 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.413207 kubelet[2881]: W0120 15:10:46.413038 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.413207 kubelet[2881]: E0120 15:10:46.413049 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.414789 kubelet[2881]: E0120 15:10:46.414753 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.414789 kubelet[2881]: W0120 15:10:46.414769 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.416070 kubelet[2881]: E0120 15:10:46.415470 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.416363 containerd[1658]: time="2026-01-20T15:10:46.416241554Z" level=info msg="connecting to shim 3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2" address="unix:///run/containerd/s/bb50a209c4e36824973e662c0ae1c5f4eb325febc11366cf2a5a0cf4ab7ac8b5" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:10:46.417157 kubelet[2881]: E0120 15:10:46.416555 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.417328 kubelet[2881]: W0120 15:10:46.417237 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.417328 kubelet[2881]: E0120 15:10:46.417258 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.420135 kubelet[2881]: E0120 15:10:46.418834 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.420135 kubelet[2881]: W0120 15:10:46.419169 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.420135 kubelet[2881]: E0120 15:10:46.419969 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.420821 kubelet[2881]: E0120 15:10:46.420555 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.421396 kubelet[2881]: W0120 15:10:46.421251 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.421396 kubelet[2881]: E0120 15:10:46.421362 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.421767 kubelet[2881]: E0120 15:10:46.421749 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.421943 kubelet[2881]: W0120 15:10:46.421825 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.422022 kubelet[2881]: E0120 15:10:46.422008 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.422420 kubelet[2881]: E0120 15:10:46.422404 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.422496 kubelet[2881]: W0120 15:10:46.422479 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.422586 kubelet[2881]: E0120 15:10:46.422568 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.423768 kubelet[2881]: E0120 15:10:46.423491 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.423768 kubelet[2881]: W0120 15:10:46.423529 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.423768 kubelet[2881]: E0120 15:10:46.423540 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.425240 kubelet[2881]: E0120 15:10:46.425063 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.425240 kubelet[2881]: W0120 15:10:46.425125 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.425240 kubelet[2881]: E0120 15:10:46.425138 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.424000 audit: BPF prog-id=155 op=LOAD Jan 20 15:10:46.426000 audit: BPF prog-id=156 op=LOAD Jan 20 15:10:46.427463 kubelet[2881]: E0120 15:10:46.427057 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.427463 kubelet[2881]: W0120 15:10:46.427070 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.427463 kubelet[2881]: E0120 15:10:46.427081 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.426000 audit[3336]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3316 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613134396230633638366163653232616363333232633738366261 Jan 20 15:10:46.427000 audit: BPF prog-id=156 op=UNLOAD Jan 20 15:10:46.427000 audit[3336]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3316 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613134396230633638366163653232616363333232633738366261 Jan 20 15:10:46.427000 audit: BPF prog-id=157 op=LOAD Jan 20 15:10:46.427000 audit[3336]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3316 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.427000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613134396230633638366163653232616363333232633738366261 Jan 20 15:10:46.427000 audit: BPF prog-id=158 op=LOAD Jan 20 15:10:46.427000 audit[3336]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3316 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613134396230633638366163653232616363333232633738366261 Jan 20 15:10:46.427000 audit: BPF prog-id=158 op=UNLOAD Jan 20 15:10:46.427000 audit[3336]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3316 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613134396230633638366163653232616363333232633738366261 Jan 20 15:10:46.427000 audit: BPF prog-id=157 op=UNLOAD Jan 20 15:10:46.427000 audit[3336]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3316 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
15:10:46.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613134396230633638366163653232616363333232633738366261 Jan 20 15:10:46.433000 audit: BPF prog-id=159 op=LOAD Jan 20 15:10:46.433000 audit[3336]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3316 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437613134396230633638366163653232616363333232633738366261 Jan 20 15:10:46.498445 systemd[1]: Started cri-containerd-3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2.scope - libcontainer container 3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2. 
Jan 20 15:10:46.512166 containerd[1658]: time="2026-01-20T15:10:46.512077177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d4c556976-j6zgk,Uid:402a28a4-562b-4205-b590-fb255e441659,Namespace:calico-system,Attempt:0,} returns sandbox id \"47a149b0c686ace22acc322c786ba0020a95d3688ffdc7d9915834a93a32e5fc\"" Jan 20 15:10:46.517336 kubelet[2881]: E0120 15:10:46.517296 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:46.521030 containerd[1658]: time="2026-01-20T15:10:46.520975795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 15:10:46.523859 kubelet[2881]: E0120 15:10:46.523825 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.524117 kubelet[2881]: W0120 15:10:46.523987 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.524317 kubelet[2881]: E0120 15:10:46.524293 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.525295 kubelet[2881]: E0120 15:10:46.525276 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.525382 kubelet[2881]: W0120 15:10:46.525366 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.525513 kubelet[2881]: E0120 15:10:46.525496 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.526456 kubelet[2881]: E0120 15:10:46.526437 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.526588 kubelet[2881]: W0120 15:10:46.526569 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.526955 kubelet[2881]: E0120 15:10:46.526933 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.529137 kubelet[2881]: E0120 15:10:46.528941 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.529137 kubelet[2881]: W0120 15:10:46.528955 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.529440 kubelet[2881]: E0120 15:10:46.529297 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.529829 kubelet[2881]: E0120 15:10:46.529818 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.530343 kubelet[2881]: W0120 15:10:46.530034 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.530343 kubelet[2881]: E0120 15:10:46.530182 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.532279 kubelet[2881]: E0120 15:10:46.532138 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.532279 kubelet[2881]: W0120 15:10:46.532152 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.533349 kubelet[2881]: E0120 15:10:46.533214 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.538263 kubelet[2881]: E0120 15:10:46.538248 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.538784 kubelet[2881]: W0120 15:10:46.538600 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.539289 kubelet[2881]: E0120 15:10:46.539273 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.540635 kubelet[2881]: E0120 15:10:46.540482 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.540635 kubelet[2881]: W0120 15:10:46.540573 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.540000 audit: BPF prog-id=160 op=LOAD Jan 20 15:10:46.541414 kubelet[2881]: E0120 15:10:46.541296 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.542149 kubelet[2881]: E0120 15:10:46.541994 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.542149 kubelet[2881]: W0120 15:10:46.542009 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.543349 kubelet[2881]: E0120 15:10:46.543329 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.544305 kubelet[2881]: E0120 15:10:46.544146 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.544626 kubelet[2881]: W0120 15:10:46.544564 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.545172 kernel: kauditd_printk_skb: 53 callbacks suppressed Jan 20 15:10:46.545479 kernel: audit: type=1334 audit(1768921846.540:554): prog-id=160 op=LOAD Jan 20 15:10:46.546304 kubelet[2881]: E0120 15:10:46.546285 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.548080 kubelet[2881]: E0120 15:10:46.548065 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.548518 kubelet[2881]: W0120 15:10:46.548223 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.540000 audit: BPF prog-id=161 op=LOAD Jan 20 15:10:46.550044 kubelet[2881]: E0120 15:10:46.548819 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.551416 kubelet[2881]: E0120 15:10:46.551396 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.551512 kubelet[2881]: W0120 15:10:46.551495 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.551828 kubelet[2881]: E0120 15:10:46.551815 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.552059 kubelet[2881]: W0120 15:10:46.552043 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.552346 kubelet[2881]: E0120 15:10:46.552215 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.552646 kubelet[2881]: E0120 15:10:46.552631 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.552991 kubelet[2881]: W0120 15:10:46.552780 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.553243 kubelet[2881]: E0120 15:10:46.553228 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.553361 kubelet[2881]: E0120 15:10:46.552074 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.553361 kubelet[2881]: W0120 15:10:46.553299 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.553361 kubelet[2881]: E0120 15:10:46.553307 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.553361 kubelet[2881]: E0120 15:10:46.553507 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.554027 kubelet[2881]: E0120 15:10:46.553975 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.554148 kubelet[2881]: W0120 15:10:46.554026 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.554148 kubelet[2881]: E0120 15:10:46.554095 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.554280 kernel: audit: type=1334 audit(1768921846.540:555): prog-id=161 op=LOAD Jan 20 15:10:46.554449 kubelet[2881]: E0120 15:10:46.554386 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.554553 kubelet[2881]: W0120 15:10:46.554433 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.555033 kubelet[2881]: E0120 15:10:46.554835 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.555837 kubelet[2881]: E0120 15:10:46.555742 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.555837 kubelet[2881]: W0120 15:10:46.555790 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.555984 kubelet[2881]: E0120 15:10:46.555943 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.540000 audit[3416]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.556654 kubelet[2881]: E0120 15:10:46.556571 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.556654 kubelet[2881]: W0120 15:10:46.556583 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.556654 kubelet[2881]: E0120 15:10:46.556595 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.557094 kubelet[2881]: E0120 15:10:46.556983 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.557094 kubelet[2881]: W0120 15:10:46.557040 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.557094 kubelet[2881]: E0120 15:10:46.557054 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:46.575004 kernel: audit: type=1300 audit(1768921846.540:555): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:46.540000 audit: BPF prog-id=161 op=UNLOAD Jan 20 15:10:46.593086 kernel: audit: type=1327 audit(1768921846.540:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:46.594960 kernel: audit: type=1334 audit(1768921846.540:556): prog-id=161 op=UNLOAD Jan 20 15:10:46.595001 kernel: audit: type=1300 audit(1768921846.540:556): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.540000 audit[3416]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.540000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:46.611075 kubelet[2881]: E0120 15:10:46.610827 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:46.611075 kubelet[2881]: W0120 15:10:46.610983 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:46.611163 kubelet[2881]: E0120 15:10:46.611134 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:46.618247 kernel: audit: type=1327 audit(1768921846.540:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:46.540000 audit: BPF prog-id=162 op=LOAD Jan 20 15:10:46.622154 kernel: audit: type=1334 audit(1768921846.540:557): prog-id=162 op=LOAD Jan 20 15:10:46.540000 audit[3416]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.624448 containerd[1658]: time="2026-01-20T15:10:46.624227937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vkjpc,Uid:e8090ca6-1f98-4a25-af56-52533a3f0a29,Namespace:calico-system,Attempt:0,} 
returns sandbox id \"3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2\"" Jan 20 15:10:46.625698 kubelet[2881]: E0120 15:10:46.625558 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:46.637085 kernel: audit: type=1300 audit(1768921846.540:557): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:46.649078 kernel: audit: type=1327 audit(1768921846.540:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:46.540000 audit: BPF prog-id=163 op=LOAD Jan 20 15:10:46.540000 audit[3416]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.540000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:46.540000 audit: BPF prog-id=163 op=UNLOAD Jan 20 15:10:46.540000 audit[3416]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:46.540000 audit: BPF prog-id=162 op=UNLOAD Jan 20 15:10:46.540000 audit[3416]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:46.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:46.540000 audit: BPF prog-id=164 op=LOAD Jan 20 15:10:46.540000 audit[3416]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=3392 pid=3416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
15:10:46.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613461323635643433643038386466663466393834666230313462 Jan 20 15:10:47.234956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681624533.mount: Deactivated successfully. Jan 20 15:10:47.632198 kubelet[2881]: E0120 15:10:47.631756 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:10:48.100492 containerd[1658]: time="2026-01-20T15:10:48.100358811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:48.103022 containerd[1658]: time="2026-01-20T15:10:48.101586562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 20 15:10:48.104363 containerd[1658]: time="2026-01-20T15:10:48.104276901Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:48.107007 containerd[1658]: time="2026-01-20T15:10:48.106917465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:48.107975 containerd[1658]: time="2026-01-20T15:10:48.107918177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.586826075s" Jan 20 15:10:48.108027 containerd[1658]: time="2026-01-20T15:10:48.107998507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 15:10:48.110339 containerd[1658]: time="2026-01-20T15:10:48.110246827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 15:10:48.126118 containerd[1658]: time="2026-01-20T15:10:48.126006429Z" level=info msg="CreateContainer within sandbox \"47a149b0c686ace22acc322c786ba0020a95d3688ffdc7d9915834a93a32e5fc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 15:10:48.139418 containerd[1658]: time="2026-01-20T15:10:48.139266433Z" level=info msg="Container ea958b7a1a12b06f9633fdb4840b8535d63f674bff5390abd29df427528e8770: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:10:48.152095 containerd[1658]: time="2026-01-20T15:10:48.152016579Z" level=info msg="CreateContainer within sandbox \"47a149b0c686ace22acc322c786ba0020a95d3688ffdc7d9915834a93a32e5fc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ea958b7a1a12b06f9633fdb4840b8535d63f674bff5390abd29df427528e8770\"" Jan 20 15:10:48.153641 containerd[1658]: time="2026-01-20T15:10:48.153538213Z" level=info msg="StartContainer for \"ea958b7a1a12b06f9633fdb4840b8535d63f674bff5390abd29df427528e8770\"" Jan 20 15:10:48.156585 containerd[1658]: time="2026-01-20T15:10:48.156137220Z" level=info msg="connecting to shim ea958b7a1a12b06f9633fdb4840b8535d63f674bff5390abd29df427528e8770" address="unix:///run/containerd/s/1325026c2ffad42a76c1409adf61dfcb63e4707833e10bc77f26da717b91baa4" protocol=ttrpc version=3 Jan 20 15:10:48.192168 systemd[1]: Started 
cri-containerd-ea958b7a1a12b06f9633fdb4840b8535d63f674bff5390abd29df427528e8770.scope - libcontainer container ea958b7a1a12b06f9633fdb4840b8535d63f674bff5390abd29df427528e8770. Jan 20 15:10:48.237000 audit: BPF prog-id=165 op=LOAD Jan 20 15:10:48.238000 audit: BPF prog-id=166 op=LOAD Jan 20 15:10:48.238000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3316 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:48.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393538623761316131326230366639363333666462343834306238 Jan 20 15:10:48.238000 audit: BPF prog-id=166 op=UNLOAD Jan 20 15:10:48.238000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3316 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:48.238000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393538623761316131326230366639363333666462343834306238 Jan 20 15:10:48.239000 audit: BPF prog-id=167 op=LOAD Jan 20 15:10:48.239000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3316 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:48.239000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393538623761316131326230366639363333666462343834306238 Jan 20 15:10:48.239000 audit: BPF prog-id=168 op=LOAD Jan 20 15:10:48.239000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3316 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393538623761316131326230366639363333666462343834306238 Jan 20 15:10:48.239000 audit: BPF prog-id=168 op=UNLOAD Jan 20 15:10:48.239000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3316 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393538623761316131326230366639363333666462343834306238 Jan 20 15:10:48.239000 audit: BPF prog-id=167 op=UNLOAD Jan 20 15:10:48.239000 audit[3482]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3316 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 15:10:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393538623761316131326230366639363333666462343834306238 Jan 20 15:10:48.239000 audit: BPF prog-id=169 op=LOAD Jan 20 15:10:48.239000 audit[3482]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3316 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:48.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561393538623761316131326230366639363333666462343834306238 Jan 20 15:10:48.300981 containerd[1658]: time="2026-01-20T15:10:48.300715695Z" level=info msg="StartContainer for \"ea958b7a1a12b06f9633fdb4840b8535d63f674bff5390abd29df427528e8770\" returns successfully" Jan 20 15:10:48.771488 kubelet[2881]: E0120 15:10:48.771403 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:48.862548 kubelet[2881]: E0120 15:10:48.862440 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.862548 kubelet[2881]: W0120 15:10:48.862481 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.862548 kubelet[2881]: E0120 15:10:48.862502 2881 plugins.go:695] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.862835 kubelet[2881]: E0120 15:10:48.862788 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.862835 kubelet[2881]: W0120 15:10:48.862798 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.862835 kubelet[2881]: E0120 15:10:48.862809 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.863387 kubelet[2881]: E0120 15:10:48.863350 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.863387 kubelet[2881]: W0120 15:10:48.863360 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.863387 kubelet[2881]: E0120 15:10:48.863370 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.864053 kubelet[2881]: E0120 15:10:48.863944 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.864053 kubelet[2881]: W0120 15:10:48.864014 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.864053 kubelet[2881]: E0120 15:10:48.864046 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.864487 kubelet[2881]: E0120 15:10:48.864439 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.864543 kubelet[2881]: W0120 15:10:48.864490 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.864543 kubelet[2881]: E0120 15:10:48.864509 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.865223 kubelet[2881]: E0120 15:10:48.865114 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.865223 kubelet[2881]: W0120 15:10:48.865161 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.865223 kubelet[2881]: E0120 15:10:48.865182 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.865622 kubelet[2881]: E0120 15:10:48.865576 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.865720 kubelet[2881]: W0120 15:10:48.865624 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.865720 kubelet[2881]: E0120 15:10:48.865638 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.866194 kubelet[2881]: E0120 15:10:48.866148 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.866254 kubelet[2881]: W0120 15:10:48.866200 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.866254 kubelet[2881]: E0120 15:10:48.866213 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.866821 kubelet[2881]: E0120 15:10:48.866758 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.866821 kubelet[2881]: W0120 15:10:48.866805 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.866821 kubelet[2881]: E0120 15:10:48.866818 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.867545 kubelet[2881]: E0120 15:10:48.867380 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.867545 kubelet[2881]: W0120 15:10:48.867428 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.867545 kubelet[2881]: E0120 15:10:48.867442 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.868086 kubelet[2881]: E0120 15:10:48.868037 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.868086 kubelet[2881]: W0120 15:10:48.868084 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.868178 kubelet[2881]: E0120 15:10:48.868096 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.868526 kubelet[2881]: E0120 15:10:48.868467 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.868526 kubelet[2881]: W0120 15:10:48.868516 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.868614 kubelet[2881]: E0120 15:10:48.868531 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.869243 kubelet[2881]: E0120 15:10:48.869170 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.869305 kubelet[2881]: W0120 15:10:48.869276 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.869305 kubelet[2881]: E0120 15:10:48.869290 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.869780 kubelet[2881]: E0120 15:10:48.869712 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.869780 kubelet[2881]: W0120 15:10:48.869763 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.869780 kubelet[2881]: E0120 15:10:48.869775 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.870334 kubelet[2881]: E0120 15:10:48.870270 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.870334 kubelet[2881]: W0120 15:10:48.870323 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.870423 kubelet[2881]: E0120 15:10:48.870339 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.962986 kubelet[2881]: E0120 15:10:48.962931 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.962986 kubelet[2881]: W0120 15:10:48.962974 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.962986 kubelet[2881]: E0120 15:10:48.962995 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.963512 kubelet[2881]: E0120 15:10:48.963414 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.963512 kubelet[2881]: W0120 15:10:48.963476 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.963575 kubelet[2881]: E0120 15:10:48.963534 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.964240 kubelet[2881]: E0120 15:10:48.964122 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.964240 kubelet[2881]: W0120 15:10:48.964154 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.964240 kubelet[2881]: E0120 15:10:48.964190 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.964631 kubelet[2881]: E0120 15:10:48.964595 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.964925 kubelet[2881]: W0120 15:10:48.964818 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.964987 kubelet[2881]: E0120 15:10:48.964940 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.965459 kubelet[2881]: E0120 15:10:48.965400 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.965459 kubelet[2881]: W0120 15:10:48.965449 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.965568 kubelet[2881]: E0120 15:10:48.965513 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.966146 kubelet[2881]: E0120 15:10:48.966089 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.966146 kubelet[2881]: W0120 15:10:48.966138 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.966271 kubelet[2881]: E0120 15:10:48.966234 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.966558 kubelet[2881]: E0120 15:10:48.966522 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.966597 kubelet[2881]: W0120 15:10:48.966561 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.966711 kubelet[2881]: E0120 15:10:48.966656 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.967198 kubelet[2881]: E0120 15:10:48.967157 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.967198 kubelet[2881]: W0120 15:10:48.967198 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.967394 kubelet[2881]: E0120 15:10:48.967347 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.967814 kubelet[2881]: E0120 15:10:48.967764 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.967931 kubelet[2881]: W0120 15:10:48.967818 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.968151 kubelet[2881]: E0120 15:10:48.967973 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.968469 kubelet[2881]: E0120 15:10:48.968393 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.968469 kubelet[2881]: W0120 15:10:48.968450 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.968751 kubelet[2881]: E0120 15:10:48.968643 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.969023 kubelet[2881]: E0120 15:10:48.968985 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.969023 kubelet[2881]: W0120 15:10:48.969021 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.969125 kubelet[2881]: E0120 15:10:48.969112 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.969378 kubelet[2881]: E0120 15:10:48.969328 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.969378 kubelet[2881]: W0120 15:10:48.969371 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.969462 kubelet[2881]: E0120 15:10:48.969428 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.970128 kubelet[2881]: E0120 15:10:48.969790 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.970128 kubelet[2881]: W0120 15:10:48.969899 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.970128 kubelet[2881]: E0120 15:10:48.969938 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.970340 kubelet[2881]: E0120 15:10:48.970296 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.970340 kubelet[2881]: W0120 15:10:48.970326 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.970397 kubelet[2881]: E0120 15:10:48.970377 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.971001 kubelet[2881]: E0120 15:10:48.970834 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.971001 kubelet[2881]: W0120 15:10:48.970933 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.971001 kubelet[2881]: E0120 15:10:48.970977 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.971414 kubelet[2881]: E0120 15:10:48.971354 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.971414 kubelet[2881]: W0120 15:10:48.971387 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.971485 kubelet[2881]: E0120 15:10:48.971419 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:48.972120 kubelet[2881]: E0120 15:10:48.972061 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.972120 kubelet[2881]: W0120 15:10:48.972097 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.972199 kubelet[2881]: E0120 15:10:48.972138 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 15:10:48.972532 kubelet[2881]: E0120 15:10:48.972458 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 15:10:48.972532 kubelet[2881]: W0120 15:10:48.972495 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 15:10:48.972532 kubelet[2881]: E0120 15:10:48.972505 2881 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 15:10:49.197761 containerd[1658]: time="2026-01-20T15:10:49.197509286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:49.199822 containerd[1658]: time="2026-01-20T15:10:49.199489385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 20 15:10:49.202430 containerd[1658]: time="2026-01-20T15:10:49.202278967Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:49.207621 containerd[1658]: time="2026-01-20T15:10:49.207197236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:49.209455 containerd[1658]: time="2026-01-20T15:10:49.209322263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.099042855s" Jan 20 15:10:49.209455 containerd[1658]: time="2026-01-20T15:10:49.209402012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 15:10:49.213718 containerd[1658]: time="2026-01-20T15:10:49.213638711Z" level=info msg="CreateContainer within sandbox \"3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 15:10:49.228400 containerd[1658]: time="2026-01-20T15:10:49.228321589Z" level=info msg="Container c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:10:49.244627 containerd[1658]: time="2026-01-20T15:10:49.244586220Z" level=info msg="CreateContainer within sandbox \"3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150\"" Jan 20 15:10:49.245787 containerd[1658]: time="2026-01-20T15:10:49.245591004Z" level=info msg="StartContainer for \"c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150\"" Jan 20 15:10:49.248755 containerd[1658]: time="2026-01-20T15:10:49.248719522Z" level=info msg="connecting to shim c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150" address="unix:///run/containerd/s/bb50a209c4e36824973e662c0ae1c5f4eb325febc11366cf2a5a0cf4ab7ac8b5" protocol=ttrpc version=3 Jan 20 15:10:49.307280 systemd[1]: Started cri-containerd-c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150.scope - libcontainer container c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150. 
Jan 20 15:10:49.423000 audit: BPF prog-id=170 op=LOAD Jan 20 15:10:49.423000 audit[3559]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3392 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:49.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336306438623938316362316461316663303166326131613362363937 Jan 20 15:10:49.423000 audit: BPF prog-id=171 op=LOAD Jan 20 15:10:49.423000 audit[3559]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3392 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:49.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336306438623938316362316461316663303166326131613362363937 Jan 20 15:10:49.423000 audit: BPF prog-id=171 op=UNLOAD Jan 20 15:10:49.423000 audit[3559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:49.423000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336306438623938316362316461316663303166326131613362363937 Jan 20 15:10:49.423000 audit: BPF prog-id=170 op=UNLOAD Jan 20 15:10:49.423000 audit[3559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:49.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336306438623938316362316461316663303166326131613362363937 Jan 20 15:10:49.423000 audit: BPF prog-id=172 op=LOAD Jan 20 15:10:49.423000 audit[3559]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3392 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:49.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336306438623938316362316461316663303166326131613362363937 Jan 20 15:10:49.480070 containerd[1658]: time="2026-01-20T15:10:49.478454739Z" level=info msg="StartContainer for \"c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150\" returns successfully" Jan 20 15:10:49.511724 systemd[1]: cri-containerd-c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150.scope: Deactivated successfully. 
Jan 20 15:10:49.514947 containerd[1658]: time="2026-01-20T15:10:49.514659723Z" level=info msg="received container exit event container_id:\"c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150\" id:\"c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150\" pid:3573 exited_at:{seconds:1768921849 nanos:513728093}" Jan 20 15:10:49.519000 audit: BPF prog-id=172 op=UNLOAD Jan 20 15:10:49.569378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c60d8b981cb1da1fc01f2a1a3b6974aedccc47823a2560872c070b6d383b3150-rootfs.mount: Deactivated successfully. Jan 20 15:10:49.632388 kubelet[2881]: E0120 15:10:49.632171 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:10:49.777581 kubelet[2881]: I0120 15:10:49.777521 2881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 15:10:49.778562 kubelet[2881]: E0120 15:10:49.778404 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:49.794561 kubelet[2881]: E0120 15:10:49.794087 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:49.798640 containerd[1658]: time="2026-01-20T15:10:49.798597467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 15:10:49.838768 kubelet[2881]: I0120 15:10:49.838549 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d4c556976-j6zgk" podStartSLOduration=3.247644788 podStartE2EDuration="4.838523572s" 
podCreationTimestamp="2026-01-20 15:10:45 +0000 UTC" firstStartedPulling="2026-01-20 15:10:46.519130207 +0000 UTC m=+20.047560090" lastFinishedPulling="2026-01-20 15:10:48.110009001 +0000 UTC m=+21.638438874" observedRunningTime="2026-01-20 15:10:48.800517674 +0000 UTC m=+22.328947557" watchObservedRunningTime="2026-01-20 15:10:49.838523572 +0000 UTC m=+23.366953445" Jan 20 15:10:51.631550 kubelet[2881]: E0120 15:10:51.631391 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:10:52.423339 containerd[1658]: time="2026-01-20T15:10:52.423089243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:52.426201 containerd[1658]: time="2026-01-20T15:10:52.424307521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 20 15:10:52.426252 containerd[1658]: time="2026-01-20T15:10:52.426207390Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:52.430496 containerd[1658]: time="2026-01-20T15:10:52.430044886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:10:52.431092 containerd[1658]: time="2026-01-20T15:10:52.431004686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.632357225s" Jan 20 15:10:52.431236 containerd[1658]: time="2026-01-20T15:10:52.431129851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 15:10:52.435222 containerd[1658]: time="2026-01-20T15:10:52.435105641Z" level=info msg="CreateContainer within sandbox \"3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 15:10:52.456582 containerd[1658]: time="2026-01-20T15:10:52.456455119Z" level=info msg="Container 23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:10:52.477741 containerd[1658]: time="2026-01-20T15:10:52.477154884Z" level=info msg="CreateContainer within sandbox \"3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce\"" Jan 20 15:10:52.480453 containerd[1658]: time="2026-01-20T15:10:52.480335385Z" level=info msg="StartContainer for \"23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce\"" Jan 20 15:10:52.484244 containerd[1658]: time="2026-01-20T15:10:52.484083538Z" level=info msg="connecting to shim 23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce" address="unix:///run/containerd/s/bb50a209c4e36824973e662c0ae1c5f4eb325febc11366cf2a5a0cf4ab7ac8b5" protocol=ttrpc version=3 Jan 20 15:10:52.555428 systemd[1]: Started cri-containerd-23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce.scope - libcontainer container 23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce. 
Jan 20 15:10:52.667000 audit: BPF prog-id=173 op=LOAD Jan 20 15:10:52.671968 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 20 15:10:52.672062 kernel: audit: type=1334 audit(1768921852.667:576): prog-id=173 op=LOAD Jan 20 15:10:52.667000 audit[3621]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3392 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:52.685185 kernel: audit: type=1300 audit(1768921852.667:576): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3392 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:52.686392 kernel: audit: type=1327 audit(1768921852.667:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233623530356463633263613537336432373461373638373163633566 Jan 20 15:10:52.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233623530356463633263613537336432373461373638373163633566 Jan 20 15:10:52.667000 audit: BPF prog-id=174 op=LOAD Jan 20 15:10:52.699331 kernel: audit: type=1334 audit(1768921852.667:577): prog-id=174 op=LOAD Jan 20 15:10:52.667000 audit[3621]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3392 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:52.711916 kernel: audit: type=1300 audit(1768921852.667:577): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3392 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:52.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233623530356463633263613537336432373461373638373163633566 Jan 20 15:10:52.667000 audit: BPF prog-id=174 op=UNLOAD Jan 20 15:10:52.729439 kernel: audit: type=1327 audit(1768921852.667:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233623530356463633263613537336432373461373638373163633566 Jan 20 15:10:52.729500 kernel: audit: type=1334 audit(1768921852.667:578): prog-id=174 op=UNLOAD Jan 20 15:10:52.667000 audit[3621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:52.747560 kernel: audit: type=1300 audit(1768921852.667:578): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:52.667000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233623530356463633263613537336432373461373638373163633566 Jan 20 15:10:52.759942 containerd[1658]: time="2026-01-20T15:10:52.759804444Z" level=info msg="StartContainer for \"23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce\" returns successfully" Jan 20 15:10:52.771260 kernel: audit: type=1327 audit(1768921852.667:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233623530356463633263613537336432373461373638373163633566 Jan 20 15:10:52.668000 audit: BPF prog-id=173 op=UNLOAD Jan 20 15:10:52.776661 kernel: audit: type=1334 audit(1768921852.668:579): prog-id=173 op=UNLOAD Jan 20 15:10:52.668000 audit[3621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:52.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233623530356463633263613537336432373461373638373163633566 Jan 20 15:10:52.668000 audit: BPF prog-id=175 op=LOAD Jan 20 15:10:52.668000 audit[3621]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3392 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:52.668000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233623530356463633263613537336432373461373638373163633566 Jan 20 15:10:52.796914 kubelet[2881]: E0120 15:10:52.796785 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:53.632530 kubelet[2881]: E0120 15:10:53.632351 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:10:53.799806 kubelet[2881]: E0120 15:10:53.799416 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:54.149834 systemd[1]: cri-containerd-23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce.scope: Deactivated successfully. Jan 20 15:10:54.150959 systemd[1]: cri-containerd-23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce.scope: Consumed 1.477s CPU time, 176M memory peak, 3.3M read from disk, 171.3M written to disk. 
Jan 20 15:10:54.152598 containerd[1658]: time="2026-01-20T15:10:54.152433366Z" level=info msg="received container exit event container_id:\"23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce\" id:\"23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce\" pid:3634 exited_at:{seconds:1768921854 nanos:151922465}" Jan 20 15:10:54.154000 audit: BPF prog-id=175 op=UNLOAD Jan 20 15:10:54.194628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23b505dcc2ca573d274a76871cc5f32368dd0cdda1069ea0b8a5e545acb5c7ce-rootfs.mount: Deactivated successfully. Jan 20 15:10:54.207940 kubelet[2881]: I0120 15:10:54.206537 2881 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 15:10:54.274230 systemd[1]: Created slice kubepods-besteffort-pod859e273f_5e38_4fb4_ada4_56bd2806f5ed.slice - libcontainer container kubepods-besteffort-pod859e273f_5e38_4fb4_ada4_56bd2806f5ed.slice. Jan 20 15:10:54.303240 systemd[1]: Created slice kubepods-besteffort-pod26e4b1e4_d471_4e91_bbfc_9aa64bff08f3.slice - libcontainer container kubepods-besteffort-pod26e4b1e4_d471_4e91_bbfc_9aa64bff08f3.slice. Jan 20 15:10:54.316666 systemd[1]: Created slice kubepods-burstable-pod62a9c2f3_dfd8_41e2_bf5d_65b847256fb1.slice - libcontainer container kubepods-burstable-pod62a9c2f3_dfd8_41e2_bf5d_65b847256fb1.slice. Jan 20 15:10:54.330597 systemd[1]: Created slice kubepods-burstable-podeaf6daf4_3dcb_4cb3_bf6c_352fdf3b26d3.slice - libcontainer container kubepods-burstable-podeaf6daf4_3dcb_4cb3_bf6c_352fdf3b26d3.slice. 
Jan 20 15:10:54.339394 kubelet[2881]: I0120 15:10:54.339338 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdxgq\" (UniqueName: \"kubernetes.io/projected/eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3-kube-api-access-bdxgq\") pod \"coredns-668d6bf9bc-rqlpn\" (UID: \"eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3\") " pod="kube-system/coredns-668d6bf9bc-rqlpn" Jan 20 15:10:54.339394 kubelet[2881]: I0120 15:10:54.339374 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/746e4480-dd89-4ee6-ba05-3e214024a83b-tigera-ca-bundle\") pod \"calico-kube-controllers-6bf79bffbc-qt64q\" (UID: \"746e4480-dd89-4ee6-ba05-3e214024a83b\") " pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" Jan 20 15:10:54.339394 kubelet[2881]: I0120 15:10:54.339394 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa31347-4392-4f2f-a0ac-7346e7069fc9-config\") pod \"goldmane-666569f655-lmlng\" (UID: \"5fa31347-4392-4f2f-a0ac-7346e7069fc9\") " pod="calico-system/goldmane-666569f655-lmlng" Jan 20 15:10:54.339590 kubelet[2881]: I0120 15:10:54.339411 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mr4\" (UniqueName: \"kubernetes.io/projected/5fa31347-4392-4f2f-a0ac-7346e7069fc9-kube-api-access-f2mr4\") pod \"goldmane-666569f655-lmlng\" (UID: \"5fa31347-4392-4f2f-a0ac-7346e7069fc9\") " pod="calico-system/goldmane-666569f655-lmlng" Jan 20 15:10:54.339590 kubelet[2881]: I0120 15:10:54.339425 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztvjm\" (UniqueName: \"kubernetes.io/projected/746e4480-dd89-4ee6-ba05-3e214024a83b-kube-api-access-ztvjm\") pod \"calico-kube-controllers-6bf79bffbc-qt64q\" (UID: 
\"746e4480-dd89-4ee6-ba05-3e214024a83b\") " pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" Jan 20 15:10:54.339590 kubelet[2881]: I0120 15:10:54.339439 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/859e273f-5e38-4fb4-ada4-56bd2806f5ed-whisker-backend-key-pair\") pod \"whisker-696ddd7d4f-9q82r\" (UID: \"859e273f-5e38-4fb4-ada4-56bd2806f5ed\") " pod="calico-system/whisker-696ddd7d4f-9q82r" Jan 20 15:10:54.339590 kubelet[2881]: I0120 15:10:54.339458 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/859e273f-5e38-4fb4-ada4-56bd2806f5ed-whisker-ca-bundle\") pod \"whisker-696ddd7d4f-9q82r\" (UID: \"859e273f-5e38-4fb4-ada4-56bd2806f5ed\") " pod="calico-system/whisker-696ddd7d4f-9q82r" Jan 20 15:10:54.339590 kubelet[2881]: I0120 15:10:54.339475 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5b2bf4f6-7ff7-4a4a-b602-713112aeec36-calico-apiserver-certs\") pod \"calico-apiserver-6848f96b7-bggk4\" (UID: \"5b2bf4f6-7ff7-4a4a-b602-713112aeec36\") " pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" Jan 20 15:10:54.340213 kubelet[2881]: I0120 15:10:54.339488 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fa31347-4392-4f2f-a0ac-7346e7069fc9-goldmane-ca-bundle\") pod \"goldmane-666569f655-lmlng\" (UID: \"5fa31347-4392-4f2f-a0ac-7346e7069fc9\") " pod="calico-system/goldmane-666569f655-lmlng" Jan 20 15:10:54.340213 kubelet[2881]: I0120 15:10:54.339500 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/5fa31347-4392-4f2f-a0ac-7346e7069fc9-goldmane-key-pair\") pod \"goldmane-666569f655-lmlng\" (UID: \"5fa31347-4392-4f2f-a0ac-7346e7069fc9\") " pod="calico-system/goldmane-666569f655-lmlng" Jan 20 15:10:54.340213 kubelet[2881]: I0120 15:10:54.339515 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26e4b1e4-d471-4e91-bbfc-9aa64bff08f3-calico-apiserver-certs\") pod \"calico-apiserver-6848f96b7-l6nzr\" (UID: \"26e4b1e4-d471-4e91-bbfc-9aa64bff08f3\") " pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" Jan 20 15:10:54.340213 kubelet[2881]: I0120 15:10:54.339531 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62a9c2f3-dfd8-41e2-bf5d-65b847256fb1-config-volume\") pod \"coredns-668d6bf9bc-7s8xj\" (UID: \"62a9c2f3-dfd8-41e2-bf5d-65b847256fb1\") " pod="kube-system/coredns-668d6bf9bc-7s8xj" Jan 20 15:10:54.340213 kubelet[2881]: I0120 15:10:54.339548 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wfq\" (UniqueName: \"kubernetes.io/projected/62a9c2f3-dfd8-41e2-bf5d-65b847256fb1-kube-api-access-96wfq\") pod \"coredns-668d6bf9bc-7s8xj\" (UID: \"62a9c2f3-dfd8-41e2-bf5d-65b847256fb1\") " pod="kube-system/coredns-668d6bf9bc-7s8xj" Jan 20 15:10:54.340465 kubelet[2881]: I0120 15:10:54.339579 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9ccp\" (UniqueName: \"kubernetes.io/projected/5b2bf4f6-7ff7-4a4a-b602-713112aeec36-kube-api-access-n9ccp\") pod \"calico-apiserver-6848f96b7-bggk4\" (UID: \"5b2bf4f6-7ff7-4a4a-b602-713112aeec36\") " pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" Jan 20 15:10:54.340465 kubelet[2881]: I0120 15:10:54.339603 2881 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3-config-volume\") pod \"coredns-668d6bf9bc-rqlpn\" (UID: \"eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3\") " pod="kube-system/coredns-668d6bf9bc-rqlpn" Jan 20 15:10:54.340465 kubelet[2881]: I0120 15:10:54.339616 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqz7d\" (UniqueName: \"kubernetes.io/projected/859e273f-5e38-4fb4-ada4-56bd2806f5ed-kube-api-access-mqz7d\") pod \"whisker-696ddd7d4f-9q82r\" (UID: \"859e273f-5e38-4fb4-ada4-56bd2806f5ed\") " pod="calico-system/whisker-696ddd7d4f-9q82r" Jan 20 15:10:54.340465 kubelet[2881]: I0120 15:10:54.339630 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9hzj\" (UniqueName: \"kubernetes.io/projected/26e4b1e4-d471-4e91-bbfc-9aa64bff08f3-kube-api-access-m9hzj\") pod \"calico-apiserver-6848f96b7-l6nzr\" (UID: \"26e4b1e4-d471-4e91-bbfc-9aa64bff08f3\") " pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" Jan 20 15:10:54.347428 systemd[1]: Created slice kubepods-besteffort-pod5b2bf4f6_7ff7_4a4a_b602_713112aeec36.slice - libcontainer container kubepods-besteffort-pod5b2bf4f6_7ff7_4a4a_b602_713112aeec36.slice. Jan 20 15:10:54.359015 systemd[1]: Created slice kubepods-besteffort-pod746e4480_dd89_4ee6_ba05_3e214024a83b.slice - libcontainer container kubepods-besteffort-pod746e4480_dd89_4ee6_ba05_3e214024a83b.slice. Jan 20 15:10:54.369243 systemd[1]: Created slice kubepods-besteffort-pod5fa31347_4392_4f2f_a0ac_7346e7069fc9.slice - libcontainer container kubepods-besteffort-pod5fa31347_4392_4f2f_a0ac_7346e7069fc9.slice. 
Jan 20 15:10:54.585913 containerd[1658]: time="2026-01-20T15:10:54.585097670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-696ddd7d4f-9q82r,Uid:859e273f-5e38-4fb4-ada4-56bd2806f5ed,Namespace:calico-system,Attempt:0,}" Jan 20 15:10:54.611751 containerd[1658]: time="2026-01-20T15:10:54.611667565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6848f96b7-l6nzr,Uid:26e4b1e4-d471-4e91-bbfc-9aa64bff08f3,Namespace:calico-apiserver,Attempt:0,}" Jan 20 15:10:54.623273 kubelet[2881]: E0120 15:10:54.623175 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:54.624413 containerd[1658]: time="2026-01-20T15:10:54.624357512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7s8xj,Uid:62a9c2f3-dfd8-41e2-bf5d-65b847256fb1,Namespace:kube-system,Attempt:0,}" Jan 20 15:10:54.640326 kubelet[2881]: E0120 15:10:54.640238 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:54.641736 containerd[1658]: time="2026-01-20T15:10:54.641664322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqlpn,Uid:eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3,Namespace:kube-system,Attempt:0,}" Jan 20 15:10:54.658522 containerd[1658]: time="2026-01-20T15:10:54.658317315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6848f96b7-bggk4,Uid:5b2bf4f6-7ff7-4a4a-b602-713112aeec36,Namespace:calico-apiserver,Attempt:0,}" Jan 20 15:10:54.667912 containerd[1658]: time="2026-01-20T15:10:54.667803936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf79bffbc-qt64q,Uid:746e4480-dd89-4ee6-ba05-3e214024a83b,Namespace:calico-system,Attempt:0,}" Jan 20 15:10:54.674887 containerd[1658]: 
time="2026-01-20T15:10:54.673728393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lmlng,Uid:5fa31347-4392-4f2f-a0ac-7346e7069fc9,Namespace:calico-system,Attempt:0,}" Jan 20 15:10:54.786960 containerd[1658]: time="2026-01-20T15:10:54.786782740Z" level=error msg="Failed to destroy network for sandbox \"5ea843a84ad9b6fd055082104ddc1cdf25a625e80755ddf2e429ece7d484ae22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.793141 containerd[1658]: time="2026-01-20T15:10:54.793099623Z" level=error msg="Failed to destroy network for sandbox \"15d4d2d6385e0fdfc70da1b5ec2310ea7cba1a815f8cee8e8be7e5c80fe0b5c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.793725 containerd[1658]: time="2026-01-20T15:10:54.793608631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6848f96b7-l6nzr,Uid:26e4b1e4-d471-4e91-bbfc-9aa64bff08f3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ea843a84ad9b6fd055082104ddc1cdf25a625e80755ddf2e429ece7d484ae22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.794879 kubelet[2881]: E0120 15:10:54.794758 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ea843a84ad9b6fd055082104ddc1cdf25a625e80755ddf2e429ece7d484ae22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 20 15:10:54.794968 kubelet[2881]: E0120 15:10:54.794895 2881 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ea843a84ad9b6fd055082104ddc1cdf25a625e80755ddf2e429ece7d484ae22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" Jan 20 15:10:54.795142 kubelet[2881]: E0120 15:10:54.795095 2881 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ea843a84ad9b6fd055082104ddc1cdf25a625e80755ddf2e429ece7d484ae22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" Jan 20 15:10:54.795253 kubelet[2881]: E0120 15:10:54.795210 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6848f96b7-l6nzr_calico-apiserver(26e4b1e4-d471-4e91-bbfc-9aa64bff08f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6848f96b7-l6nzr_calico-apiserver(26e4b1e4-d471-4e91-bbfc-9aa64bff08f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ea843a84ad9b6fd055082104ddc1cdf25a625e80755ddf2e429ece7d484ae22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:10:54.803554 containerd[1658]: time="2026-01-20T15:10:54.803478842Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-696ddd7d4f-9q82r,Uid:859e273f-5e38-4fb4-ada4-56bd2806f5ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d4d2d6385e0fdfc70da1b5ec2310ea7cba1a815f8cee8e8be7e5c80fe0b5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.804546 kubelet[2881]: E0120 15:10:54.804514 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d4d2d6385e0fdfc70da1b5ec2310ea7cba1a815f8cee8e8be7e5c80fe0b5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.805519 kubelet[2881]: E0120 15:10:54.805013 2881 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d4d2d6385e0fdfc70da1b5ec2310ea7cba1a815f8cee8e8be7e5c80fe0b5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-696ddd7d4f-9q82r" Jan 20 15:10:54.805519 kubelet[2881]: E0120 15:10:54.805045 2881 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d4d2d6385e0fdfc70da1b5ec2310ea7cba1a815f8cee8e8be7e5c80fe0b5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-696ddd7d4f-9q82r" Jan 20 15:10:54.805519 kubelet[2881]: E0120 15:10:54.805119 2881 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-696ddd7d4f-9q82r_calico-system(859e273f-5e38-4fb4-ada4-56bd2806f5ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-696ddd7d4f-9q82r_calico-system(859e273f-5e38-4fb4-ada4-56bd2806f5ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15d4d2d6385e0fdfc70da1b5ec2310ea7cba1a815f8cee8e8be7e5c80fe0b5c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-696ddd7d4f-9q82r" podUID="859e273f-5e38-4fb4-ada4-56bd2806f5ed" Jan 20 15:10:54.815081 kubelet[2881]: E0120 15:10:54.814916 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:54.827340 containerd[1658]: time="2026-01-20T15:10:54.827155035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 15:10:54.849931 containerd[1658]: time="2026-01-20T15:10:54.849777478Z" level=error msg="Failed to destroy network for sandbox \"604ab667b1b36cf30cede617f627dece0043e6aee2a7d6605f15699b9ed7f99a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.859211 containerd[1658]: time="2026-01-20T15:10:54.859167780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqlpn,Uid:eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"604ab667b1b36cf30cede617f627dece0043e6aee2a7d6605f15699b9ed7f99a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.859552 kubelet[2881]: E0120 15:10:54.859438 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604ab667b1b36cf30cede617f627dece0043e6aee2a7d6605f15699b9ed7f99a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.859552 kubelet[2881]: E0120 15:10:54.859526 2881 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604ab667b1b36cf30cede617f627dece0043e6aee2a7d6605f15699b9ed7f99a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rqlpn" Jan 20 15:10:54.859552 kubelet[2881]: E0120 15:10:54.859548 2881 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604ab667b1b36cf30cede617f627dece0043e6aee2a7d6605f15699b9ed7f99a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rqlpn" Jan 20 15:10:54.859754 kubelet[2881]: E0120 15:10:54.859582 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rqlpn_kube-system(eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rqlpn_kube-system(eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"604ab667b1b36cf30cede617f627dece0043e6aee2a7d6605f15699b9ed7f99a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rqlpn" podUID="eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3" Jan 20 15:10:54.861161 containerd[1658]: time="2026-01-20T15:10:54.861125056Z" level=error msg="Failed to destroy network for sandbox \"b640defc732b78b21da88ce3751bfc7b8378ae90165c68f83216f28ab8a32417\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.864338 containerd[1658]: time="2026-01-20T15:10:54.864235675Z" level=error msg="Failed to destroy network for sandbox \"077960f38e0091d6317932ba0b31514e1f5633a5e396eb1258609cab33ad3a85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.865641 containerd[1658]: time="2026-01-20T15:10:54.865610374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6848f96b7-bggk4,Uid:5b2bf4f6-7ff7-4a4a-b602-713112aeec36,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b640defc732b78b21da88ce3751bfc7b8378ae90165c68f83216f28ab8a32417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.866401 kubelet[2881]: E0120 15:10:54.866097 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b640defc732b78b21da88ce3751bfc7b8378ae90165c68f83216f28ab8a32417\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.866401 kubelet[2881]: E0120 15:10:54.866138 2881 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b640defc732b78b21da88ce3751bfc7b8378ae90165c68f83216f28ab8a32417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" Jan 20 15:10:54.866401 kubelet[2881]: E0120 15:10:54.866156 2881 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b640defc732b78b21da88ce3751bfc7b8378ae90165c68f83216f28ab8a32417\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" Jan 20 15:10:54.866507 kubelet[2881]: E0120 15:10:54.866240 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6848f96b7-bggk4_calico-apiserver(5b2bf4f6-7ff7-4a4a-b602-713112aeec36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6848f96b7-bggk4_calico-apiserver(5b2bf4f6-7ff7-4a4a-b602-713112aeec36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b640defc732b78b21da88ce3751bfc7b8378ae90165c68f83216f28ab8a32417\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" 
podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:10:54.868898 containerd[1658]: time="2026-01-20T15:10:54.868800430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7s8xj,Uid:62a9c2f3-dfd8-41e2-bf5d-65b847256fb1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"077960f38e0091d6317932ba0b31514e1f5633a5e396eb1258609cab33ad3a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.869411 kubelet[2881]: E0120 15:10:54.869356 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"077960f38e0091d6317932ba0b31514e1f5633a5e396eb1258609cab33ad3a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.869532 kubelet[2881]: E0120 15:10:54.869445 2881 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"077960f38e0091d6317932ba0b31514e1f5633a5e396eb1258609cab33ad3a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7s8xj" Jan 20 15:10:54.869532 kubelet[2881]: E0120 15:10:54.869476 2881 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"077960f38e0091d6317932ba0b31514e1f5633a5e396eb1258609cab33ad3a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7s8xj" Jan 20 15:10:54.869613 kubelet[2881]: E0120 15:10:54.869523 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7s8xj_kube-system(62a9c2f3-dfd8-41e2-bf5d-65b847256fb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7s8xj_kube-system(62a9c2f3-dfd8-41e2-bf5d-65b847256fb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"077960f38e0091d6317932ba0b31514e1f5633a5e396eb1258609cab33ad3a85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7s8xj" podUID="62a9c2f3-dfd8-41e2-bf5d-65b847256fb1" Jan 20 15:10:54.870515 containerd[1658]: time="2026-01-20T15:10:54.870479560Z" level=error msg="Failed to destroy network for sandbox \"ecfd0056ffe4116c52befa932a5b08b09298996b154d66f69bd7c62398a901cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.871288 containerd[1658]: time="2026-01-20T15:10:54.871211575Z" level=error msg="Failed to destroy network for sandbox \"87719b3421b351e8f02785d70fcfb14fd72a974f64562630dfb0de75eaa3fc77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.878802 containerd[1658]: time="2026-01-20T15:10:54.878743542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf79bffbc-qt64q,Uid:746e4480-dd89-4ee6-ba05-3e214024a83b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ecfd0056ffe4116c52befa932a5b08b09298996b154d66f69bd7c62398a901cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.879171 kubelet[2881]: E0120 15:10:54.879125 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecfd0056ffe4116c52befa932a5b08b09298996b154d66f69bd7c62398a901cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.879275 kubelet[2881]: E0120 15:10:54.879198 2881 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecfd0056ffe4116c52befa932a5b08b09298996b154d66f69bd7c62398a901cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" Jan 20 15:10:54.879275 kubelet[2881]: E0120 15:10:54.879223 2881 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecfd0056ffe4116c52befa932a5b08b09298996b154d66f69bd7c62398a901cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" Jan 20 15:10:54.879336 kubelet[2881]: E0120 15:10:54.879268 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bf79bffbc-qt64q_calico-system(746e4480-dd89-4ee6-ba05-3e214024a83b)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-6bf79bffbc-qt64q_calico-system(746e4480-dd89-4ee6-ba05-3e214024a83b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecfd0056ffe4116c52befa932a5b08b09298996b154d66f69bd7c62398a901cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:10:54.881492 containerd[1658]: time="2026-01-20T15:10:54.881413727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lmlng,Uid:5fa31347-4392-4f2f-a0ac-7346e7069fc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87719b3421b351e8f02785d70fcfb14fd72a974f64562630dfb0de75eaa3fc77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.881873 kubelet[2881]: E0120 15:10:54.881743 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87719b3421b351e8f02785d70fcfb14fd72a974f64562630dfb0de75eaa3fc77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:54.881873 kubelet[2881]: E0120 15:10:54.881809 2881 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87719b3421b351e8f02785d70fcfb14fd72a974f64562630dfb0de75eaa3fc77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lmlng" Jan 20 15:10:54.882002 kubelet[2881]: E0120 15:10:54.881927 2881 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87719b3421b351e8f02785d70fcfb14fd72a974f64562630dfb0de75eaa3fc77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lmlng" Jan 20 15:10:54.882051 kubelet[2881]: E0120 15:10:54.881999 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lmlng_calico-system(5fa31347-4392-4f2f-a0ac-7346e7069fc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lmlng_calico-system(5fa31347-4392-4f2f-a0ac-7346e7069fc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87719b3421b351e8f02785d70fcfb14fd72a974f64562630dfb0de75eaa3fc77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:10:55.638886 systemd[1]: Created slice kubepods-besteffort-podc4e14075_1569_42bc_b38f_776a269a4fcd.slice - libcontainer container kubepods-besteffort-podc4e14075_1569_42bc_b38f_776a269a4fcd.slice. 
Jan 20 15:10:55.642406 containerd[1658]: time="2026-01-20T15:10:55.642358632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jr4nz,Uid:c4e14075-1569-42bc-b38f-776a269a4fcd,Namespace:calico-system,Attempt:0,}" Jan 20 15:10:55.700300 kubelet[2881]: I0120 15:10:55.700037 2881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 15:10:55.702533 kubelet[2881]: E0120 15:10:55.702443 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:10:55.726151 containerd[1658]: time="2026-01-20T15:10:55.726054312Z" level=error msg="Failed to destroy network for sandbox \"a25ec10349e600ef09f3ff19a51ce762034079498c42399f0821ca08fd366e10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:55.730148 systemd[1]: run-netns-cni\x2dfd5a7917\x2dec56\x2dbabd\x2d8e85\x2d92597df40ef7.mount: Deactivated successfully. 
Jan 20 15:10:55.733257 containerd[1658]: time="2026-01-20T15:10:55.733216310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jr4nz,Uid:c4e14075-1569-42bc-b38f-776a269a4fcd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a25ec10349e600ef09f3ff19a51ce762034079498c42399f0821ca08fd366e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:55.733964 kubelet[2881]: E0120 15:10:55.733475 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a25ec10349e600ef09f3ff19a51ce762034079498c42399f0821ca08fd366e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 15:10:55.733964 kubelet[2881]: E0120 15:10:55.733529 2881 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a25ec10349e600ef09f3ff19a51ce762034079498c42399f0821ca08fd366e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jr4nz" Jan 20 15:10:55.733964 kubelet[2881]: E0120 15:10:55.733549 2881 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a25ec10349e600ef09f3ff19a51ce762034079498c42399f0821ca08fd366e10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jr4nz" 
Jan 20 15:10:55.734146 kubelet[2881]: E0120 15:10:55.733585 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a25ec10349e600ef09f3ff19a51ce762034079498c42399f0821ca08fd366e10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:10:55.749000 audit[3937]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3937 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:55.749000 audit[3937]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffea34c05d0 a2=0 a3=7ffea34c05bc items=0 ppid=3041 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:55.749000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:55.767000 audit[3937]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3937 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:10:55.767000 audit[3937]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffea34c05d0 a2=0 a3=7ffea34c05bc items=0 ppid=3041 pid=3937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:10:55.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:10:55.818498 kubelet[2881]: E0120 15:10:55.818384 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:00.157342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056068798.mount: Deactivated successfully. Jan 20 15:11:00.366911 containerd[1658]: time="2026-01-20T15:11:00.366293805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:11:00.369065 containerd[1658]: time="2026-01-20T15:11:00.369038746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 20 15:11:00.370956 containerd[1658]: time="2026-01-20T15:11:00.370817187Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:11:00.374258 containerd[1658]: time="2026-01-20T15:11:00.374208299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 15:11:00.374941 containerd[1658]: time="2026-01-20T15:11:00.374828399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.54758542s" Jan 20 15:11:00.374941 containerd[1658]: 
time="2026-01-20T15:11:00.374936772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 15:11:00.404045 containerd[1658]: time="2026-01-20T15:11:00.403684450Z" level=info msg="CreateContainer within sandbox \"3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 15:11:00.423990 containerd[1658]: time="2026-01-20T15:11:00.423592858Z" level=info msg="Container 124992fdb4fe8bff160301362333392929dda284415c323ed78fbd1ca0f67d71: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:11:00.436347 containerd[1658]: time="2026-01-20T15:11:00.436273380Z" level=info msg="CreateContainer within sandbox \"3fa4a265d43d088dff4f984fb014b8c135450a3242ae031086b49fd8ba6aa6f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"124992fdb4fe8bff160301362333392929dda284415c323ed78fbd1ca0f67d71\"" Jan 20 15:11:00.438946 containerd[1658]: time="2026-01-20T15:11:00.437213624Z" level=info msg="StartContainer for \"124992fdb4fe8bff160301362333392929dda284415c323ed78fbd1ca0f67d71\"" Jan 20 15:11:00.439596 containerd[1658]: time="2026-01-20T15:11:00.439400674Z" level=info msg="connecting to shim 124992fdb4fe8bff160301362333392929dda284415c323ed78fbd1ca0f67d71" address="unix:///run/containerd/s/bb50a209c4e36824973e662c0ae1c5f4eb325febc11366cf2a5a0cf4ab7ac8b5" protocol=ttrpc version=3 Jan 20 15:11:00.479113 systemd[1]: Started cri-containerd-124992fdb4fe8bff160301362333392929dda284415c323ed78fbd1ca0f67d71.scope - libcontainer container 124992fdb4fe8bff160301362333392929dda284415c323ed78fbd1ca0f67d71. 
Jan 20 15:11:00.575914 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 15:11:00.576170 kernel: audit: type=1334 audit(1768921860.571:584): prog-id=176 op=LOAD Jan 20 15:11:00.571000 audit: BPF prog-id=176 op=LOAD Jan 20 15:11:00.571000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3392 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:00.586063 kernel: audit: type=1300 audit(1768921860.571:584): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3392 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:00.587815 kernel: audit: type=1327 audit(1768921860.571:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343939326664623466653862666631363033303133363233333333 Jan 20 15:11:00.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343939326664623466653862666631363033303133363233333333 Jan 20 15:11:00.571000 audit: BPF prog-id=177 op=LOAD Jan 20 15:11:00.596815 kernel: audit: type=1334 audit(1768921860.571:585): prog-id=177 op=LOAD Jan 20 15:11:00.571000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3392 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:00.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343939326664623466653862666631363033303133363233333333 Jan 20 15:11:00.614933 kernel: audit: type=1300 audit(1768921860.571:585): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3392 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:00.616080 kernel: audit: type=1327 audit(1768921860.571:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343939326664623466653862666631363033303133363233333333 Jan 20 15:11:00.571000 audit: BPF prog-id=177 op=UNLOAD Jan 20 15:11:00.618937 kernel: audit: type=1334 audit(1768921860.571:586): prog-id=177 op=UNLOAD Jan 20 15:11:00.619090 kernel: audit: type=1300 audit(1768921860.571:586): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:00.571000 audit[3945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:00.571000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343939326664623466653862666631363033303133363233333333 Jan 20 15:11:00.636347 kernel: audit: type=1327 audit(1768921860.571:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343939326664623466653862666631363033303133363233333333 Jan 20 15:11:00.636427 kernel: audit: type=1334 audit(1768921860.571:587): prog-id=176 op=UNLOAD Jan 20 15:11:00.571000 audit: BPF prog-id=176 op=UNLOAD Jan 20 15:11:00.571000 audit[3945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3392 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:00.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343939326664623466653862666631363033303133363233333333 Jan 20 15:11:00.571000 audit: BPF prog-id=178 op=LOAD Jan 20 15:11:00.571000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3392 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:00.571000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132343939326664623466653862666631363033303133363233333333 Jan 20 15:11:00.698118 containerd[1658]: time="2026-01-20T15:11:00.697906001Z" level=info msg="StartContainer for \"124992fdb4fe8bff160301362333392929dda284415c323ed78fbd1ca0f67d71\" returns successfully" Jan 20 15:11:00.771983 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 15:11:00.772078 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 20 15:11:00.848923 kubelet[2881]: E0120 15:11:00.846805 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:00.954550 kubelet[2881]: I0120 15:11:00.953557 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vkjpc" podStartSLOduration=1.203781568 podStartE2EDuration="14.953521521s" podCreationTimestamp="2026-01-20 15:10:46 +0000 UTC" firstStartedPulling="2026-01-20 15:10:46.627427186 +0000 UTC m=+20.155857058" lastFinishedPulling="2026-01-20 15:11:00.377167148 +0000 UTC m=+33.905597011" observedRunningTime="2026-01-20 15:11:00.87610557 +0000 UTC m=+34.404535442" watchObservedRunningTime="2026-01-20 15:11:00.953521521 +0000 UTC m=+34.481951395" Jan 20 15:11:01.002456 kubelet[2881]: I0120 15:11:01.002413 2881 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/859e273f-5e38-4fb4-ada4-56bd2806f5ed-whisker-ca-bundle\") pod \"859e273f-5e38-4fb4-ada4-56bd2806f5ed\" (UID: \"859e273f-5e38-4fb4-ada4-56bd2806f5ed\") " Jan 20 15:11:01.003819 kubelet[2881]: I0120 15:11:01.003343 2881 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/859e273f-5e38-4fb4-ada4-56bd2806f5ed-whisker-backend-key-pair\") pod \"859e273f-5e38-4fb4-ada4-56bd2806f5ed\" (UID: \"859e273f-5e38-4fb4-ada4-56bd2806f5ed\") " Jan 20 15:11:01.004076 kubelet[2881]: I0120 15:11:01.004055 2881 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqz7d\" (UniqueName: \"kubernetes.io/projected/859e273f-5e38-4fb4-ada4-56bd2806f5ed-kube-api-access-mqz7d\") pod \"859e273f-5e38-4fb4-ada4-56bd2806f5ed\" (UID: \"859e273f-5e38-4fb4-ada4-56bd2806f5ed\") " Jan 20 15:11:01.004591 kubelet[2881]: I0120 15:11:01.003256 2881 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859e273f-5e38-4fb4-ada4-56bd2806f5ed-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "859e273f-5e38-4fb4-ada4-56bd2806f5ed" (UID: "859e273f-5e38-4fb4-ada4-56bd2806f5ed"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 15:11:01.010502 kubelet[2881]: I0120 15:11:01.010470 2881 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859e273f-5e38-4fb4-ada4-56bd2806f5ed-kube-api-access-mqz7d" (OuterVolumeSpecName: "kube-api-access-mqz7d") pod "859e273f-5e38-4fb4-ada4-56bd2806f5ed" (UID: "859e273f-5e38-4fb4-ada4-56bd2806f5ed"). InnerVolumeSpecName "kube-api-access-mqz7d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 15:11:01.019828 kubelet[2881]: I0120 15:11:01.019783 2881 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/859e273f-5e38-4fb4-ada4-56bd2806f5ed-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "859e273f-5e38-4fb4-ada4-56bd2806f5ed" (UID: "859e273f-5e38-4fb4-ada4-56bd2806f5ed"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 15:11:01.105353 kubelet[2881]: I0120 15:11:01.105281 2881 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/859e273f-5e38-4fb4-ada4-56bd2806f5ed-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 15:11:01.105353 kubelet[2881]: I0120 15:11:01.105339 2881 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/859e273f-5e38-4fb4-ada4-56bd2806f5ed-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 15:11:01.105535 kubelet[2881]: I0120 15:11:01.105367 2881 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mqz7d\" (UniqueName: \"kubernetes.io/projected/859e273f-5e38-4fb4-ada4-56bd2806f5ed-kube-api-access-mqz7d\") on node \"localhost\" DevicePath \"\"" Jan 20 15:11:01.158676 systemd[1]: var-lib-kubelet-pods-859e273f\x2d5e38\x2d4fb4\x2dada4\x2d56bd2806f5ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmqz7d.mount: Deactivated successfully. Jan 20 15:11:01.159664 systemd[1]: var-lib-kubelet-pods-859e273f\x2d5e38\x2d4fb4\x2dada4\x2d56bd2806f5ed-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 15:11:01.856397 systemd[1]: Removed slice kubepods-besteffort-pod859e273f_5e38_4fb4_ada4_56bd2806f5ed.slice - libcontainer container kubepods-besteffort-pod859e273f_5e38_4fb4_ada4_56bd2806f5ed.slice. Jan 20 15:11:01.942590 systemd[1]: Created slice kubepods-besteffort-pod314fd9f9_2d2b_4b58_a692_6f702aedf12f.slice - libcontainer container kubepods-besteffort-pod314fd9f9_2d2b_4b58_a692_6f702aedf12f.slice. 
Jan 20 15:11:02.014399 kubelet[2881]: I0120 15:11:02.014253 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/314fd9f9-2d2b-4b58-a692-6f702aedf12f-whisker-backend-key-pair\") pod \"whisker-6fd6f47785-wdj86\" (UID: \"314fd9f9-2d2b-4b58-a692-6f702aedf12f\") " pod="calico-system/whisker-6fd6f47785-wdj86" Jan 20 15:11:02.014399 kubelet[2881]: I0120 15:11:02.014330 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2bst\" (UniqueName: \"kubernetes.io/projected/314fd9f9-2d2b-4b58-a692-6f702aedf12f-kube-api-access-j2bst\") pod \"whisker-6fd6f47785-wdj86\" (UID: \"314fd9f9-2d2b-4b58-a692-6f702aedf12f\") " pod="calico-system/whisker-6fd6f47785-wdj86" Jan 20 15:11:02.014399 kubelet[2881]: I0120 15:11:02.014369 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/314fd9f9-2d2b-4b58-a692-6f702aedf12f-whisker-ca-bundle\") pod \"whisker-6fd6f47785-wdj86\" (UID: \"314fd9f9-2d2b-4b58-a692-6f702aedf12f\") " pod="calico-system/whisker-6fd6f47785-wdj86" Jan 20 15:11:02.151154 kubelet[2881]: I0120 15:11:02.150775 2881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 15:11:02.151829 kubelet[2881]: E0120 15:11:02.151299 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:02.250887 containerd[1658]: time="2026-01-20T15:11:02.250774264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd6f47785-wdj86,Uid:314fd9f9-2d2b-4b58-a692-6f702aedf12f,Namespace:calico-system,Attempt:0,}" Jan 20 15:11:02.645445 systemd-networkd[1319]: cali21e26f52dfb: Link UP Jan 20 15:11:02.650796 systemd-networkd[1319]: cali21e26f52dfb: Gained 
carrier Jan 20 15:11:02.681803 kubelet[2881]: I0120 15:11:02.681274 2881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859e273f-5e38-4fb4-ada4-56bd2806f5ed" path="/var/lib/kubelet/pods/859e273f-5e38-4fb4-ada4-56bd2806f5ed/volumes" Jan 20 15:11:02.710931 containerd[1658]: 2026-01-20 15:11:02.335 [INFO][4125] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 15:11:02.710931 containerd[1658]: 2026-01-20 15:11:02.381 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6fd6f47785--wdj86-eth0 whisker-6fd6f47785- calico-system 314fd9f9-2d2b-4b58-a692-6f702aedf12f 945 0 2026-01-20 15:11:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fd6f47785 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6fd6f47785-wdj86 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali21e26f52dfb [] [] }} ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Namespace="calico-system" Pod="whisker-6fd6f47785-wdj86" WorkloadEndpoint="localhost-k8s-whisker--6fd6f47785--wdj86-" Jan 20 15:11:02.710931 containerd[1658]: 2026-01-20 15:11:02.382 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Namespace="calico-system" Pod="whisker-6fd6f47785-wdj86" WorkloadEndpoint="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" Jan 20 15:11:02.710931 containerd[1658]: 2026-01-20 15:11:02.510 [INFO][4153] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" HandleID="k8s-pod-network.c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Workload="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" Jan 20 
15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.512 [INFO][4153] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" HandleID="k8s-pod-network.c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Workload="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ec1e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6fd6f47785-wdj86", "timestamp":"2026-01-20 15:11:02.510735102 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.513 [INFO][4153] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.513 [INFO][4153] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.513 [INFO][4153] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.542 [INFO][4153] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" host="localhost" Jan 20 15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.557 [INFO][4153] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.568 [INFO][4153] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.575 [INFO][4153] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.584 [INFO][4153] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:02.711316 containerd[1658]: 2026-01-20 15:11:02.585 [INFO][4153] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" host="localhost" Jan 20 15:11:02.711550 containerd[1658]: 2026-01-20 15:11:02.590 [INFO][4153] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a Jan 20 15:11:02.711550 containerd[1658]: 2026-01-20 15:11:02.602 [INFO][4153] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" host="localhost" Jan 20 15:11:02.711550 containerd[1658]: 2026-01-20 15:11:02.614 [INFO][4153] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" host="localhost" Jan 20 15:11:02.711550 containerd[1658]: 2026-01-20 15:11:02.614 [INFO][4153] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" host="localhost" Jan 20 15:11:02.711550 containerd[1658]: 2026-01-20 15:11:02.615 [INFO][4153] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 15:11:02.711550 containerd[1658]: 2026-01-20 15:11:02.615 [INFO][4153] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" HandleID="k8s-pod-network.c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Workload="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" Jan 20 15:11:02.711661 containerd[1658]: 2026-01-20 15:11:02.625 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Namespace="calico-system" Pod="whisker-6fd6f47785-wdj86" WorkloadEndpoint="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6fd6f47785--wdj86-eth0", GenerateName:"whisker-6fd6f47785-", Namespace:"calico-system", SelfLink:"", UID:"314fd9f9-2d2b-4b58-a692-6f702aedf12f", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 11, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fd6f47785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6fd6f47785-wdj86", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali21e26f52dfb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:02.711661 containerd[1658]: 2026-01-20 15:11:02.625 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Namespace="calico-system" Pod="whisker-6fd6f47785-wdj86" WorkloadEndpoint="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" Jan 20 15:11:02.711788 containerd[1658]: 2026-01-20 15:11:02.625 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21e26f52dfb ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Namespace="calico-system" Pod="whisker-6fd6f47785-wdj86" WorkloadEndpoint="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" Jan 20 15:11:02.711788 containerd[1658]: 2026-01-20 15:11:02.658 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Namespace="calico-system" Pod="whisker-6fd6f47785-wdj86" WorkloadEndpoint="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" Jan 20 15:11:02.711826 containerd[1658]: 2026-01-20 15:11:02.666 [INFO][4125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Namespace="calico-system" Pod="whisker-6fd6f47785-wdj86" 
WorkloadEndpoint="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6fd6f47785--wdj86-eth0", GenerateName:"whisker-6fd6f47785-", Namespace:"calico-system", SelfLink:"", UID:"314fd9f9-2d2b-4b58-a692-6f702aedf12f", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 11, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fd6f47785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a", Pod:"whisker-6fd6f47785-wdj86", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali21e26f52dfb", MAC:"46:77:ca:74:b9:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:02.712369 containerd[1658]: 2026-01-20 15:11:02.699 [INFO][4125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" Namespace="calico-system" Pod="whisker-6fd6f47785-wdj86" WorkloadEndpoint="localhost-k8s-whisker--6fd6f47785--wdj86-eth0" Jan 20 15:11:02.743000 audit: BPF prog-id=179 op=LOAD Jan 20 15:11:02.743000 audit[4231]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffcd7967e70 a2=98 a3=1fffffffffffffff items=0 ppid=4064 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.743000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 15:11:02.743000 audit: BPF prog-id=179 op=UNLOAD Jan 20 15:11:02.743000 audit[4231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd7967e40 a3=0 items=0 ppid=4064 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.743000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 15:11:02.743000 audit: BPF prog-id=180 op=LOAD Jan 20 15:11:02.743000 audit[4231]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd7967d50 a2=94 a3=3 items=0 ppid=4064 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.743000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 15:11:02.743000 audit: BPF prog-id=180 
op=UNLOAD Jan 20 15:11:02.743000 audit[4231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd7967d50 a2=94 a3=3 items=0 ppid=4064 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.743000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 15:11:02.743000 audit: BPF prog-id=181 op=LOAD Jan 20 15:11:02.743000 audit[4231]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd7967d90 a2=94 a3=7ffcd7967f70 items=0 ppid=4064 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.743000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 15:11:02.743000 audit: BPF prog-id=181 op=UNLOAD Jan 20 15:11:02.743000 audit[4231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd7967d90 a2=94 a3=7ffcd7967f70 items=0 ppid=4064 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.743000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 15:11:02.747000 audit: BPF prog-id=182 op=LOAD Jan 20 15:11:02.747000 audit[4232]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd79e98db0 a2=98 a3=3 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.747000 audit: BPF prog-id=182 op=UNLOAD Jan 20 15:11:02.747000 audit[4232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd79e98d80 a3=0 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.747000 audit: BPF prog-id=183 op=LOAD Jan 20 15:11:02.747000 audit[4232]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd79e98ba0 a2=94 a3=54428f items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.747000 audit: BPF prog-id=183 op=UNLOAD Jan 20 15:11:02.747000 audit[4232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd79e98ba0 a2=94 a3=54428f items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.747000 audit: BPF prog-id=184 op=LOAD Jan 20 15:11:02.747000 audit[4232]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd79e98bd0 a2=94 a3=2 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.747000 audit: BPF prog-id=184 op=UNLOAD Jan 20 15:11:02.747000 audit[4232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd79e98bd0 a2=0 a3=2 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.815649 containerd[1658]: time="2026-01-20T15:11:02.815570932Z" level=info msg="connecting to shim c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a" address="unix:///run/containerd/s/672100d27ab0a7c7a9237edd6e2b082930abf2cb1bd9abd66befc534e8a1858d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:11:02.863174 systemd[1]: Started cri-containerd-c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a.scope - libcontainer container c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a. 
Jan 20 15:11:02.886000 audit: BPF prog-id=185 op=LOAD Jan 20 15:11:02.887000 audit: BPF prog-id=186 op=LOAD Jan 20 15:11:02.887000 audit[4254]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4242 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339316265313965393539333534303130653633373638303133333262 Jan 20 15:11:02.887000 audit: BPF prog-id=186 op=UNLOAD Jan 20 15:11:02.887000 audit[4254]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339316265313965393539333534303130653633373638303133333262 Jan 20 15:11:02.887000 audit: BPF prog-id=187 op=LOAD Jan 20 15:11:02.887000 audit[4254]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4242 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.887000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339316265313965393539333534303130653633373638303133333262 Jan 20 15:11:02.887000 audit: BPF prog-id=188 op=LOAD Jan 20 15:11:02.887000 audit[4254]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4242 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.887000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339316265313965393539333534303130653633373638303133333262 Jan 20 15:11:02.888000 audit: BPF prog-id=188 op=UNLOAD Jan 20 15:11:02.888000 audit[4254]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339316265313965393539333534303130653633373638303133333262 Jan 20 15:11:02.888000 audit: BPF prog-id=187 op=UNLOAD Jan 20 15:11:02.888000 audit[4254]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4242 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
15:11:02.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339316265313965393539333534303130653633373638303133333262 Jan 20 15:11:02.888000 audit: BPF prog-id=189 op=LOAD Jan 20 15:11:02.888000 audit[4254]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4242 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339316265313965393539333534303130653633373638303133333262 Jan 20 15:11:02.889961 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:11:02.940978 containerd[1658]: time="2026-01-20T15:11:02.940400883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd6f47785-wdj86,Uid:314fd9f9-2d2b-4b58-a692-6f702aedf12f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c91be19e959354010e6376801332b2ea23995aed1ca72bdc8749f85c8d3d423a\"" Jan 20 15:11:02.947120 containerd[1658]: time="2026-01-20T15:11:02.946774153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 15:11:02.984000 audit: BPF prog-id=190 op=LOAD Jan 20 15:11:02.984000 audit[4232]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd79e98a90 a2=94 a3=1 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
15:11:02.984000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.984000 audit: BPF prog-id=190 op=UNLOAD Jan 20 15:11:02.984000 audit[4232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd79e98a90 a2=94 a3=1 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.984000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.994000 audit: BPF prog-id=191 op=LOAD Jan 20 15:11:02.994000 audit[4232]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd79e98a80 a2=94 a3=4 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.994000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.994000 audit: BPF prog-id=191 op=UNLOAD Jan 20 15:11:02.994000 audit[4232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd79e98a80 a2=0 a3=4 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.994000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.995000 audit: BPF prog-id=192 op=LOAD Jan 20 15:11:02.995000 audit[4232]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd79e988e0 a2=94 a3=5 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.995000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.995000 audit: BPF prog-id=192 op=UNLOAD Jan 20 15:11:02.995000 audit[4232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd79e988e0 a2=0 a3=5 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.995000 audit: BPF prog-id=193 op=LOAD Jan 20 15:11:02.995000 audit[4232]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd79e98b00 a2=94 a3=6 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.995000 audit: BPF prog-id=193 op=UNLOAD Jan 20 15:11:02.995000 audit[4232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd79e98b00 a2=0 a3=6 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.995000 audit: BPF prog-id=194 op=LOAD Jan 20 15:11:02.995000 audit[4232]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd79e982b0 a2=94 a3=88 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 
15:11:02.996000 audit: BPF prog-id=195 op=LOAD Jan 20 15:11:02.996000 audit[4232]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd79e98130 a2=94 a3=2 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.996000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.996000 audit: BPF prog-id=195 op=UNLOAD Jan 20 15:11:02.996000 audit[4232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd79e98160 a2=0 a3=7ffd79e98260 items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.996000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:02.996000 audit: BPF prog-id=194 op=UNLOAD Jan 20 15:11:02.996000 audit[4232]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=29187d10 a2=0 a3=928bacb098e2cd9d items=0 ppid=4064 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:02.996000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 15:11:03.010000 audit: BPF prog-id=196 op=LOAD Jan 20 15:11:03.010000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffdedcb870 a2=98 a3=1999999999999999 items=0 ppid=4064 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.010000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 15:11:03.010000 audit: BPF prog-id=196 op=UNLOAD Jan 20 15:11:03.010000 audit[4282]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffdedcb840 a3=0 items=0 ppid=4064 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 15:11:03.010000 audit: BPF prog-id=197 op=LOAD Jan 20 15:11:03.010000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffdedcb750 a2=94 a3=ffff items=0 ppid=4064 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 15:11:03.010000 audit: BPF prog-id=197 op=UNLOAD Jan 20 15:11:03.010000 audit[4282]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffdedcb750 a2=94 a3=ffff items=0 ppid=4064 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 15:11:03.010000 audit: BPF prog-id=198 op=LOAD Jan 20 15:11:03.010000 audit[4282]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffdedcb790 a2=94 a3=7fffdedcb970 items=0 ppid=4064 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 15:11:03.010000 audit: BPF prog-id=198 op=UNLOAD Jan 20 15:11:03.010000 audit[4282]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffdedcb790 a2=94 a3=7fffdedcb970 items=0 ppid=4064 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.010000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 15:11:03.060401 containerd[1658]: time="2026-01-20T15:11:03.060259750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:03.062323 containerd[1658]: time="2026-01-20T15:11:03.062201345Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 15:11:03.062792 containerd[1658]: time="2026-01-20T15:11:03.062463200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:03.063052 kubelet[2881]: E0120 15:11:03.062996 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:11:03.063052 kubelet[2881]: E0120 15:11:03.063063 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:11:03.083743 kubelet[2881]: E0120 15:11:03.083638 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a8dee0b7dc7f4ac08b9f27fb940bf054,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:03.088162 containerd[1658]: time="2026-01-20T15:11:03.088123317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 15:11:03.106440 systemd-networkd[1319]: vxlan.calico: Link 
UP Jan 20 15:11:03.106926 systemd-networkd[1319]: vxlan.calico: Gained carrier Jan 20 15:11:03.134000 audit: BPF prog-id=199 op=LOAD Jan 20 15:11:03.134000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc48fdab00 a2=98 a3=0 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.134000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.134000 audit: BPF prog-id=199 op=UNLOAD Jan 20 15:11:03.134000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc48fdaad0 a3=0 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.134000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.134000 audit: BPF prog-id=200 op=LOAD Jan 20 15:11:03.134000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc48fda910 a2=94 a3=54428f items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.134000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.134000 audit: BPF prog-id=200 op=UNLOAD Jan 20 15:11:03.134000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc48fda910 a2=94 a3=54428f items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.134000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.134000 audit: BPF prog-id=201 op=LOAD Jan 20 15:11:03.134000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc48fda940 a2=94 a3=2 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.134000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.134000 audit: BPF prog-id=201 op=UNLOAD Jan 20 15:11:03.134000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc48fda940 a2=0 a3=2 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.134000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.135000 audit: BPF prog-id=202 op=LOAD Jan 20 15:11:03.135000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc48fda6f0 a2=94 a3=4 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.135000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.135000 audit: BPF prog-id=202 op=UNLOAD Jan 20 15:11:03.135000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc48fda6f0 a2=94 a3=4 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.135000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.135000 audit: BPF prog-id=203 op=LOAD Jan 20 15:11:03.135000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc48fda7f0 a2=94 a3=7ffc48fda970 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.135000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.135000 audit: BPF prog-id=203 op=UNLOAD Jan 20 15:11:03.135000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc48fda7f0 a2=0 a3=7ffc48fda970 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.135000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.136000 audit: BPF prog-id=204 op=LOAD Jan 20 15:11:03.136000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc48fd9f20 a2=94 a3=2 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.136000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.136000 audit: BPF prog-id=204 op=UNLOAD Jan 20 15:11:03.136000 audit[4307]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc48fd9f20 a2=0 a3=2 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.136000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.136000 audit: BPF prog-id=205 op=LOAD Jan 20 15:11:03.136000 audit[4307]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc48fda020 a2=94 a3=30 items=0 ppid=4064 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.136000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 15:11:03.148000 audit: BPF prog-id=206 op=LOAD Jan 20 15:11:03.148000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc1991bfb0 a2=98 a3=0 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.148000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.149000 audit: BPF prog-id=206 op=UNLOAD Jan 20 15:11:03.149000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc1991bf80 a3=0 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.149000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.149000 audit: BPF prog-id=207 op=LOAD Jan 20 15:11:03.149000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1991bda0 a2=94 a3=54428f items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.149000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.149000 audit: BPF prog-id=207 op=UNLOAD Jan 20 15:11:03.149000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc1991bda0 a2=94 a3=54428f items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.149000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.149000 audit: BPF prog-id=208 op=LOAD Jan 20 15:11:03.149000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1991bdd0 a2=94 a3=2 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.149000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.149000 audit: BPF prog-id=208 op=UNLOAD Jan 20 15:11:03.149000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc1991bdd0 a2=0 a3=2 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.149000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.188471 containerd[1658]: time="2026-01-20T15:11:03.188245459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:03.189937 containerd[1658]: time="2026-01-20T15:11:03.189893236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 15:11:03.190097 containerd[1658]: time="2026-01-20T15:11:03.189998396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:03.190229 kubelet[2881]: E0120 15:11:03.190156 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:11:03.190229 kubelet[2881]: E0120 15:11:03.190215 2881 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:11:03.190534 kubelet[2881]: E0120 15:11:03.190353 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,
},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:03.192097 kubelet[2881]: E0120 15:11:03.191965 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:11:03.349000 audit: BPF prog-id=209 op=LOAD Jan 20 15:11:03.349000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1991bc90 a2=94 a3=1 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.349000 audit: BPF prog-id=209 op=UNLOAD Jan 20 15:11:03.349000 audit[4316]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc1991bc90 a2=94 a3=1 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.359000 audit: BPF prog-id=210 op=LOAD Jan 20 15:11:03.359000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc1991bc80 a2=94 a3=4 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.359000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.359000 audit: BPF prog-id=210 op=UNLOAD Jan 20 15:11:03.359000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc1991bc80 a2=0 a3=4 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.359000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.360000 audit: BPF prog-id=211 op=LOAD Jan 20 15:11:03.360000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc1991bae0 a2=94 a3=5 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.360000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.360000 audit: BPF prog-id=211 op=UNLOAD Jan 20 15:11:03.360000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc1991bae0 a2=0 a3=5 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.360000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.360000 audit: BPF prog-id=212 op=LOAD Jan 20 15:11:03.360000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc1991bd00 a2=94 a3=6 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.360000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.360000 audit: BPF prog-id=212 op=UNLOAD Jan 20 15:11:03.360000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc1991bd00 a2=0 a3=6 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.360000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.360000 audit: BPF prog-id=213 op=LOAD Jan 20 15:11:03.360000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc1991b4b0 a2=94 a3=88 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.360000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.361000 audit: BPF prog-id=214 op=LOAD Jan 20 15:11:03.361000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc1991b330 a2=94 a3=2 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.361000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.361000 audit: BPF prog-id=214 op=UNLOAD Jan 20 15:11:03.361000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc1991b360 a2=0 a3=7ffc1991b460 items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.361000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.361000 audit: BPF prog-id=213 op=UNLOAD Jan 20 15:11:03.361000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=ece7d10 a2=0 a3=a2ff47d29ba6b21a items=0 ppid=4064 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.361000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 15:11:03.375000 audit: BPF prog-id=205 op=UNLOAD Jan 20 15:11:03.375000 audit[4064]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00147e780 a2=0 a3=0 items=0 ppid=4038 pid=4064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.375000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 15:11:03.448000 audit[4340]: NETFILTER_CFG table=nat:119 family=2 entries=15 op=nft_register_chain pid=4340 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:03.448000 audit[4340]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe04719d60 a2=0 a3=7ffe04719d4c items=0 ppid=4064 pid=4340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.448000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:03.449000 audit[4343]: NETFILTER_CFG table=mangle:120 family=2 entries=16 op=nft_register_chain pid=4343 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:03.449000 audit[4343]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd6413c090 a2=0 a3=7ffd6413c07c items=0 ppid=4064 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.449000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:03.465000 audit[4339]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4339 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:03.465000 audit[4339]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffca5c45960 a2=0 a3=7ffca5c4594c items=0 ppid=4064 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.465000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:03.469000 audit[4341]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4341 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:03.469000 audit[4341]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe8d90de30 a2=0 a3=7ffe8d90de1c items=0 ppid=4064 pid=4341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.469000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:03.877036 kubelet[2881]: E0120 15:11:03.876929 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:11:03.911000 audit[4352]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:03.911000 audit[4352]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeaae7bc20 a2=0 a3=7ffeaae7bc0c items=0 ppid=3041 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 
15:11:03.923000 audit[4352]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:03.923000 audit[4352]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeaae7bc20 a2=0 a3=0 items=0 ppid=3041 pid=4352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:03.923000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:03.944217 systemd-networkd[1319]: cali21e26f52dfb: Gained IPv6LL Jan 20 15:11:04.879086 kubelet[2881]: E0120 15:11:04.879024 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:11:04.968144 systemd-networkd[1319]: vxlan.calico: Gained IPv6LL Jan 20 15:11:06.633329 containerd[1658]: time="2026-01-20T15:11:06.633252717Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6bf79bffbc-qt64q,Uid:746e4480-dd89-4ee6-ba05-3e214024a83b,Namespace:calico-system,Attempt:0,}" Jan 20 15:11:06.808461 systemd-networkd[1319]: cali5b59e37adb5: Link UP Jan 20 15:11:06.808979 systemd-networkd[1319]: cali5b59e37adb5: Gained carrier Jan 20 15:11:06.832638 containerd[1658]: 2026-01-20 15:11:06.696 [INFO][4355] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0 calico-kube-controllers-6bf79bffbc- calico-system 746e4480-dd89-4ee6-ba05-3e214024a83b 864 0 2026-01-20 15:10:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bf79bffbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bf79bffbc-qt64q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5b59e37adb5 [] [] }} ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Namespace="calico-system" Pod="calico-kube-controllers-6bf79bffbc-qt64q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-" Jan 20 15:11:06.832638 containerd[1658]: 2026-01-20 15:11:06.696 [INFO][4355] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Namespace="calico-system" Pod="calico-kube-controllers-6bf79bffbc-qt64q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" Jan 20 15:11:06.832638 containerd[1658]: 2026-01-20 15:11:06.735 [INFO][4371] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" 
HandleID="k8s-pod-network.4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Workload="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.735 [INFO][4371] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" HandleID="k8s-pod-network.4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Workload="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042b260), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bf79bffbc-qt64q", "timestamp":"2026-01-20 15:11:06.735590951 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.736 [INFO][4371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.736 [INFO][4371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.736 [INFO][4371] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.745 [INFO][4371] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" host="localhost" Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.754 [INFO][4371] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.765 [INFO][4371] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.780 [INFO][4371] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.784 [INFO][4371] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:06.833082 containerd[1658]: 2026-01-20 15:11:06.784 [INFO][4371] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" host="localhost" Jan 20 15:11:06.833483 containerd[1658]: 2026-01-20 15:11:06.787 [INFO][4371] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074 Jan 20 15:11:06.833483 containerd[1658]: 2026-01-20 15:11:06.793 [INFO][4371] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" host="localhost" Jan 20 15:11:06.833483 containerd[1658]: 2026-01-20 15:11:06.800 [INFO][4371] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" host="localhost" Jan 20 15:11:06.833483 containerd[1658]: 2026-01-20 15:11:06.800 [INFO][4371] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" host="localhost" Jan 20 15:11:06.833483 containerd[1658]: 2026-01-20 15:11:06.800 [INFO][4371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 15:11:06.833483 containerd[1658]: 2026-01-20 15:11:06.800 [INFO][4371] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" HandleID="k8s-pod-network.4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Workload="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" Jan 20 15:11:06.833654 containerd[1658]: 2026-01-20 15:11:06.804 [INFO][4355] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Namespace="calico-system" Pod="calico-kube-controllers-6bf79bffbc-qt64q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0", GenerateName:"calico-kube-controllers-6bf79bffbc-", Namespace:"calico-system", SelfLink:"", UID:"746e4480-dd89-4ee6-ba05-3e214024a83b", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf79bffbc", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bf79bffbc-qt64q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b59e37adb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:06.833786 containerd[1658]: 2026-01-20 15:11:06.804 [INFO][4355] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Namespace="calico-system" Pod="calico-kube-controllers-6bf79bffbc-qt64q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" Jan 20 15:11:06.833786 containerd[1658]: 2026-01-20 15:11:06.804 [INFO][4355] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b59e37adb5 ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Namespace="calico-system" Pod="calico-kube-controllers-6bf79bffbc-qt64q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" Jan 20 15:11:06.833786 containerd[1658]: 2026-01-20 15:11:06.809 [INFO][4355] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Namespace="calico-system" Pod="calico-kube-controllers-6bf79bffbc-qt64q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" Jan 20 15:11:06.833932 containerd[1658]: 
2026-01-20 15:11:06.810 [INFO][4355] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Namespace="calico-system" Pod="calico-kube-controllers-6bf79bffbc-qt64q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0", GenerateName:"calico-kube-controllers-6bf79bffbc-", Namespace:"calico-system", SelfLink:"", UID:"746e4480-dd89-4ee6-ba05-3e214024a83b", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bf79bffbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074", Pod:"calico-kube-controllers-6bf79bffbc-qt64q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5b59e37adb5", MAC:"2e:5b:85:02:aa:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:06.834072 containerd[1658]: 
2026-01-20 15:11:06.823 [INFO][4355] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" Namespace="calico-system" Pod="calico-kube-controllers-6bf79bffbc-qt64q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bf79bffbc--qt64q-eth0" Jan 20 15:11:06.864948 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 20 15:11:06.865105 kernel: audit: type=1325 audit(1768921866.858:665): table=filter:125 family=2 entries=36 op=nft_register_chain pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:06.858000 audit[4389]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:06.858000 audit[4389]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffde8d05280 a2=0 a3=7ffde8d0526c items=0 ppid=4064 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.885175 containerd[1658]: time="2026-01-20T15:11:06.883663106Z" level=info msg="connecting to shim 4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074" address="unix:///run/containerd/s/d5f85d4b1768abcfef8ef1fae757bfa086cf4bd15c4238eee49ecb01fd2a577e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:11:06.858000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:06.893459 kernel: audit: type=1300 audit(1768921866.858:665): arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffde8d05280 a2=0 a3=7ffde8d0526c items=0 ppid=4064 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.893529 kernel: audit: type=1327 audit(1768921866.858:665): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:06.930121 systemd[1]: Started cri-containerd-4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074.scope - libcontainer container 4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074. Jan 20 15:11:06.955000 audit: BPF prog-id=215 op=LOAD Jan 20 15:11:06.966911 kernel: audit: type=1334 audit(1768921866.955:666): prog-id=215 op=LOAD Jan 20 15:11:06.967140 kernel: audit: type=1334 audit(1768921866.958:667): prog-id=216 op=LOAD Jan 20 15:11:06.958000 audit: BPF prog-id=216 op=LOAD Jan 20 15:11:06.958000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.967925 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:11:06.988344 kernel: audit: type=1300 audit(1768921866.958:667): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.988451 kernel: audit: type=1327 audit(1768921866.958:667): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323261663862333863323761333734663938336465393461363930 Jan 20 
15:11:06.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323261663862333863323761333734663938336465393461363930 Jan 20 15:11:06.958000 audit: BPF prog-id=216 op=UNLOAD Jan 20 15:11:06.992016 kernel: audit: type=1334 audit(1768921866.958:668): prog-id=216 op=UNLOAD Jan 20 15:11:06.992072 kernel: audit: type=1300 audit(1768921866.958:668): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.958000 audit[4410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:07.005054 kernel: audit: type=1327 audit(1768921866.958:668): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323261663862333863323761333734663938336465393461363930 Jan 20 15:11:06.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323261663862333863323761333734663938336465393461363930 Jan 20 15:11:06.958000 audit: BPF prog-id=217 op=LOAD Jan 20 15:11:06.958000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323261663862333863323761333734663938336465393461363930 Jan 20 15:11:06.958000 audit: BPF prog-id=218 op=LOAD Jan 20 15:11:06.958000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323261663862333863323761333734663938336465393461363930 Jan 20 15:11:06.958000 audit: BPF prog-id=218 op=UNLOAD Jan 20 15:11:06.958000 audit[4410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323261663862333863323761333734663938336465393461363930 Jan 20 15:11:06.958000 audit: BPF prog-id=217 op=UNLOAD Jan 20 15:11:06.958000 audit[4410]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4398 pid=4410 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323261663862333863323761333734663938336465393461363930 Jan 20 15:11:06.958000 audit: BPF prog-id=219 op=LOAD Jan 20 15:11:06.958000 audit[4410]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4398 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:06.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323261663862333863323761333734663938336465393461363930 Jan 20 15:11:07.026032 containerd[1658]: time="2026-01-20T15:11:07.025961322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bf79bffbc-qt64q,Uid:746e4480-dd89-4ee6-ba05-3e214024a83b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4022af8b38c27a374f983de94a6908c2e9e1131af83db0274baabe049223c074\"" Jan 20 15:11:07.028595 containerd[1658]: time="2026-01-20T15:11:07.028482599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 15:11:07.105884 containerd[1658]: time="2026-01-20T15:11:07.105798423Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:07.107450 containerd[1658]: time="2026-01-20T15:11:07.107406895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 15:11:07.107554 containerd[1658]: time="2026-01-20T15:11:07.107474904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:07.107817 kubelet[2881]: E0120 15:11:07.107729 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:11:07.107817 kubelet[2881]: E0120 15:11:07.107779 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:11:07.108487 kubelet[2881]: E0120 15:11:07.107963 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztvjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bf79bffbc-qt64q_calico-system(746e4480-dd89-4ee6-ba05-3e214024a83b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:07.109492 kubelet[2881]: E0120 15:11:07.109358 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:11:07.888396 kubelet[2881]: E0120 15:11:07.887663 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:11:08.423207 systemd-networkd[1319]: cali5b59e37adb5: Gained IPv6LL Jan 20 15:11:08.636402 containerd[1658]: time="2026-01-20T15:11:08.636294898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lmlng,Uid:5fa31347-4392-4f2f-a0ac-7346e7069fc9,Namespace:calico-system,Attempt:0,}" Jan 20 15:11:08.637033 containerd[1658]: time="2026-01-20T15:11:08.636560339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jr4nz,Uid:c4e14075-1569-42bc-b38f-776a269a4fcd,Namespace:calico-system,Attempt:0,}" Jan 20 15:11:08.637033 containerd[1658]: time="2026-01-20T15:11:08.636769218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6848f96b7-l6nzr,Uid:26e4b1e4-d471-4e91-bbfc-9aa64bff08f3,Namespace:calico-apiserver,Attempt:0,}" Jan 20 15:11:08.872590 systemd-networkd[1319]: cali732238e58d6: Link UP Jan 20 15:11:08.879417 systemd-networkd[1319]: cali732238e58d6: Gained carrier Jan 20 15:11:08.891086 kubelet[2881]: E0120 15:11:08.890994 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:11:08.914045 containerd[1658]: 2026-01-20 15:11:08.726 [INFO][4464] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0 calico-apiserver-6848f96b7- calico-apiserver 26e4b1e4-d471-4e91-bbfc-9aa64bff08f3 868 0 2026-01-20 15:10:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6848f96b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6848f96b7-l6nzr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali732238e58d6 [] [] }} ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-l6nzr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-" Jan 20 15:11:08.914045 containerd[1658]: 2026-01-20 15:11:08.727 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-l6nzr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" Jan 20 15:11:08.914045 containerd[1658]: 2026-01-20 15:11:08.792 [INFO][4486] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" HandleID="k8s-pod-network.249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Workload="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.794 [INFO][4486] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" HandleID="k8s-pod-network.249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Workload="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b3ca0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6848f96b7-l6nzr", "timestamp":"2026-01-20 15:11:08.792944621 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.794 [INFO][4486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.794 [INFO][4486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.794 [INFO][4486] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.808 [INFO][4486] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" host="localhost" Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.818 [INFO][4486] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.828 [INFO][4486] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.832 [INFO][4486] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.836 [INFO][4486] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:08.914407 containerd[1658]: 2026-01-20 15:11:08.836 [INFO][4486] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" host="localhost" Jan 20 15:11:08.915229 containerd[1658]: 2026-01-20 15:11:08.839 [INFO][4486] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826 Jan 20 15:11:08.915229 containerd[1658]: 2026-01-20 15:11:08.847 [INFO][4486] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" host="localhost" Jan 20 15:11:08.915229 containerd[1658]: 2026-01-20 15:11:08.865 [INFO][4486] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" host="localhost" Jan 20 15:11:08.915229 containerd[1658]: 2026-01-20 15:11:08.865 [INFO][4486] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" host="localhost" Jan 20 15:11:08.915229 containerd[1658]: 2026-01-20 15:11:08.865 [INFO][4486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 15:11:08.915229 containerd[1658]: 2026-01-20 15:11:08.865 [INFO][4486] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" HandleID="k8s-pod-network.249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Workload="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" Jan 20 15:11:08.915811 containerd[1658]: 2026-01-20 15:11:08.869 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-l6nzr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0", GenerateName:"calico-apiserver-6848f96b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"26e4b1e4-d471-4e91-bbfc-9aa64bff08f3", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6848f96b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6848f96b7-l6nzr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali732238e58d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:08.916321 containerd[1658]: 2026-01-20 15:11:08.869 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-l6nzr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" Jan 20 15:11:08.916321 containerd[1658]: 2026-01-20 15:11:08.869 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali732238e58d6 ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-l6nzr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" Jan 20 15:11:08.916321 containerd[1658]: 2026-01-20 15:11:08.879 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-l6nzr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" Jan 20 15:11:08.916397 containerd[1658]: 2026-01-20 15:11:08.880 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-l6nzr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0", GenerateName:"calico-apiserver-6848f96b7-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"26e4b1e4-d471-4e91-bbfc-9aa64bff08f3", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6848f96b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826", Pod:"calico-apiserver-6848f96b7-l6nzr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali732238e58d6", MAC:"7e:6f:ae:ed:4a:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:08.917188 containerd[1658]: 2026-01-20 15:11:08.904 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-l6nzr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--l6nzr-eth0" Jan 20 15:11:08.930000 audit[4520]: NETFILTER_CFG table=filter:126 family=2 entries=54 op=nft_register_chain pid=4520 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:08.930000 audit[4520]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffd875acdb0 a2=0 a3=7ffd875acd9c 
items=0 ppid=4064 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:08.930000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:08.984642 containerd[1658]: time="2026-01-20T15:11:08.984577917Z" level=info msg="connecting to shim 249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826" address="unix:///run/containerd/s/d97010e555d9498e4e3aff67373cddecb473bf2ed10ec946302336295684a393" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:11:09.001510 systemd-networkd[1319]: cali44e6549eb5d: Link UP Jan 20 15:11:09.004081 systemd-networkd[1319]: cali44e6549eb5d: Gained carrier Jan 20 15:11:09.043779 containerd[1658]: 2026-01-20 15:11:08.726 [INFO][4443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--lmlng-eth0 goldmane-666569f655- calico-system 5fa31347-4392-4f2f-a0ac-7346e7069fc9 867 0 2026-01-20 15:10:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-lmlng eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali44e6549eb5d [] [] }} ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Namespace="calico-system" Pod="goldmane-666569f655-lmlng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmlng-" Jan 20 15:11:09.043779 containerd[1658]: 2026-01-20 15:11:08.726 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" 
Namespace="calico-system" Pod="goldmane-666569f655-lmlng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmlng-eth0" Jan 20 15:11:09.043779 containerd[1658]: 2026-01-20 15:11:08.801 [INFO][4488] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" HandleID="k8s-pod-network.5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Workload="localhost-k8s-goldmane--666569f655--lmlng-eth0" Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.801 [INFO][4488] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" HandleID="k8s-pod-network.5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Workload="localhost-k8s-goldmane--666569f655--lmlng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c0c10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-lmlng", "timestamp":"2026-01-20 15:11:08.801066456 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.801 [INFO][4488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.865 [INFO][4488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.866 [INFO][4488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.908 [INFO][4488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" host="localhost" Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.924 [INFO][4488] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.934 [INFO][4488] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.937 [INFO][4488] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.941 [INFO][4488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:09.044057 containerd[1658]: 2026-01-20 15:11:08.941 [INFO][4488] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" host="localhost" Jan 20 15:11:09.044288 containerd[1658]: 2026-01-20 15:11:08.943 [INFO][4488] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c Jan 20 15:11:09.044288 containerd[1658]: 2026-01-20 15:11:08.949 [INFO][4488] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" host="localhost" Jan 20 15:11:09.044288 containerd[1658]: 2026-01-20 15:11:08.975 [INFO][4488] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" host="localhost" Jan 20 15:11:09.044288 containerd[1658]: 2026-01-20 15:11:08.977 [INFO][4488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" host="localhost" Jan 20 15:11:09.044288 containerd[1658]: 2026-01-20 15:11:08.977 [INFO][4488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 15:11:09.044288 containerd[1658]: 2026-01-20 15:11:08.977 [INFO][4488] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" HandleID="k8s-pod-network.5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Workload="localhost-k8s-goldmane--666569f655--lmlng-eth0" Jan 20 15:11:09.045311 containerd[1658]: 2026-01-20 15:11:08.989 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Namespace="calico-system" Pod="goldmane-666569f655-lmlng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmlng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lmlng-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5fa31347-4392-4f2f-a0ac-7346e7069fc9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-lmlng", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali44e6549eb5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:09.045311 containerd[1658]: 2026-01-20 15:11:08.989 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Namespace="calico-system" Pod="goldmane-666569f655-lmlng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmlng-eth0" Jan 20 15:11:09.045498 containerd[1658]: 2026-01-20 15:11:08.989 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44e6549eb5d ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Namespace="calico-system" Pod="goldmane-666569f655-lmlng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmlng-eth0" Jan 20 15:11:09.045498 containerd[1658]: 2026-01-20 15:11:09.006 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Namespace="calico-system" Pod="goldmane-666569f655-lmlng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmlng-eth0" Jan 20 15:11:09.045561 containerd[1658]: 2026-01-20 15:11:09.007 [INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Namespace="calico-system" Pod="goldmane-666569f655-lmlng" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmlng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--lmlng-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5fa31347-4392-4f2f-a0ac-7346e7069fc9", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c", Pod:"goldmane-666569f655-lmlng", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali44e6549eb5d", MAC:"96:04:cf:b1:4b:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:09.045646 containerd[1658]: 2026-01-20 15:11:09.037 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" Namespace="calico-system" Pod="goldmane-666569f655-lmlng" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--lmlng-eth0" Jan 20 15:11:09.053631 systemd[1]: Started 
cri-containerd-249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826.scope - libcontainer container 249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826. Jan 20 15:11:09.080000 audit[4569]: NETFILTER_CFG table=filter:127 family=2 entries=52 op=nft_register_chain pid=4569 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:09.080000 audit[4569]: SYSCALL arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7ffd76acf9e0 a2=0 a3=7ffd76acf9cc items=0 ppid=4064 pid=4569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.080000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:09.088000 audit: BPF prog-id=220 op=LOAD Jan 20 15:11:09.088000 audit: BPF prog-id=221 op=LOAD Jan 20 15:11:09.088000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4529 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234396634313766313438336333373461383939323633353938623434 Jan 20 15:11:09.088000 audit: BPF prog-id=221 op=UNLOAD Jan 20 15:11:09.088000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4529 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.088000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234396634313766313438336333373461383939323633353938623434 Jan 20 15:11:09.089000 audit: BPF prog-id=222 op=LOAD Jan 20 15:11:09.089000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4529 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234396634313766313438336333373461383939323633353938623434 Jan 20 15:11:09.089000 audit: BPF prog-id=223 op=LOAD Jan 20 15:11:09.089000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4529 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234396634313766313438336333373461383939323633353938623434 Jan 20 15:11:09.089000 audit: BPF prog-id=223 op=UNLOAD Jan 20 15:11:09.089000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4529 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234396634313766313438336333373461383939323633353938623434 Jan 20 15:11:09.089000 audit: BPF prog-id=222 op=UNLOAD Jan 20 15:11:09.089000 audit[4541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4529 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234396634313766313438336333373461383939323633353938623434 Jan 20 15:11:09.089000 audit: BPF prog-id=224 op=LOAD Jan 20 15:11:09.089000 audit[4541]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4529 pid=4541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.089000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234396634313766313438336333373461383939323633353938623434 Jan 20 15:11:09.091715 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:11:09.119751 systemd-networkd[1319]: cali92ce71335d4: Link UP Jan 20 15:11:09.122117 
systemd-networkd[1319]: cali92ce71335d4: Gained carrier Jan 20 15:11:09.126169 containerd[1658]: time="2026-01-20T15:11:09.126054449Z" level=info msg="connecting to shim 5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c" address="unix:///run/containerd/s/26c05f5af8ecafd05be66cd1b3035493ad562d3de7db028313d8193194c814a1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:11:09.177223 containerd[1658]: 2026-01-20 15:11:08.732 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jr4nz-eth0 csi-node-driver- calico-system c4e14075-1569-42bc-b38f-776a269a4fcd 760 0 2026-01-20 15:10:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jr4nz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali92ce71335d4 [] [] }} ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" Namespace="calico-system" Pod="csi-node-driver-jr4nz" WorkloadEndpoint="localhost-k8s-csi--node--driver--jr4nz-" Jan 20 15:11:09.177223 containerd[1658]: 2026-01-20 15:11:08.733 [INFO][4450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" Namespace="calico-system" Pod="csi-node-driver-jr4nz" WorkloadEndpoint="localhost-k8s-csi--node--driver--jr4nz-eth0" Jan 20 15:11:09.177223 containerd[1658]: 2026-01-20 15:11:08.806 [INFO][4491] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" HandleID="k8s-pod-network.f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" 
Workload="localhost-k8s-csi--node--driver--jr4nz-eth0" Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:08.806 [INFO][4491] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" HandleID="k8s-pod-network.f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" Workload="localhost-k8s-csi--node--driver--jr4nz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005080c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jr4nz", "timestamp":"2026-01-20 15:11:08.806163625 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:08.806 [INFO][4491] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:08.977 [INFO][4491] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:08.977 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:09.019 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" host="localhost" Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:09.038 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:09.049 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:09.067 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:09.073 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:09.177497 containerd[1658]: 2026-01-20 15:11:09.073 [INFO][4491] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" host="localhost" Jan 20 15:11:09.177809 containerd[1658]: 2026-01-20 15:11:09.077 [INFO][4491] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f Jan 20 15:11:09.177809 containerd[1658]: 2026-01-20 15:11:09.086 [INFO][4491] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" host="localhost" Jan 20 15:11:09.177809 containerd[1658]: 2026-01-20 15:11:09.108 [INFO][4491] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" host="localhost" Jan 20 15:11:09.177809 containerd[1658]: 2026-01-20 15:11:09.109 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" host="localhost" Jan 20 15:11:09.177809 containerd[1658]: 2026-01-20 15:11:09.110 [INFO][4491] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 15:11:09.177809 containerd[1658]: 2026-01-20 15:11:09.110 [INFO][4491] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" HandleID="k8s-pod-network.f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" Workload="localhost-k8s-csi--node--driver--jr4nz-eth0" Jan 20 15:11:09.178379 containerd[1658]: 2026-01-20 15:11:09.114 [INFO][4450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" Namespace="calico-system" Pod="csi-node-driver-jr4nz" WorkloadEndpoint="localhost-k8s-csi--node--driver--jr4nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jr4nz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4e14075-1569-42bc-b38f-776a269a4fcd", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jr4nz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92ce71335d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:09.178501 containerd[1658]: 2026-01-20 15:11:09.114 [INFO][4450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" Namespace="calico-system" Pod="csi-node-driver-jr4nz" WorkloadEndpoint="localhost-k8s-csi--node--driver--jr4nz-eth0" Jan 20 15:11:09.178501 containerd[1658]: 2026-01-20 15:11:09.114 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92ce71335d4 ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" Namespace="calico-system" Pod="csi-node-driver-jr4nz" WorkloadEndpoint="localhost-k8s-csi--node--driver--jr4nz-eth0" Jan 20 15:11:09.178501 containerd[1658]: 2026-01-20 15:11:09.125 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" Namespace="calico-system" Pod="csi-node-driver-jr4nz" WorkloadEndpoint="localhost-k8s-csi--node--driver--jr4nz-eth0" Jan 20 15:11:09.178611 containerd[1658]: 2026-01-20 15:11:09.128 [INFO][4450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" 
Namespace="calico-system" Pod="csi-node-driver-jr4nz" WorkloadEndpoint="localhost-k8s-csi--node--driver--jr4nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jr4nz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4e14075-1569-42bc-b38f-776a269a4fcd", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f", Pod:"csi-node-driver-jr4nz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali92ce71335d4", MAC:"72:f4:1c:d9:50:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:09.178771 containerd[1658]: 2026-01-20 15:11:09.171 [INFO][4450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" Namespace="calico-system" Pod="csi-node-driver-jr4nz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jr4nz-eth0" Jan 20 15:11:09.181653 containerd[1658]: time="2026-01-20T15:11:09.181610258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6848f96b7-l6nzr,Uid:26e4b1e4-d471-4e91-bbfc-9aa64bff08f3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"249f417f1483c374a899263598b4415c85c1895af6bc89b8ebf3bfa839cae826\"" Jan 20 15:11:09.186435 containerd[1658]: time="2026-01-20T15:11:09.186387985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:11:09.197348 systemd[1]: Started cri-containerd-5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c.scope - libcontainer container 5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c. Jan 20 15:11:09.208000 audit[4618]: NETFILTER_CFG table=filter:128 family=2 entries=48 op=nft_register_chain pid=4618 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:09.208000 audit[4618]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffe32ec4f70 a2=0 a3=7ffe32ec4f5c items=0 ppid=4064 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.208000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:09.217963 containerd[1658]: time="2026-01-20T15:11:09.217900971Z" level=info msg="connecting to shim f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f" address="unix:///run/containerd/s/e73d221ecd8363ec1d1cda771546554368cfe0d9536f28962f17cfdc3a2c2467" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:11:09.224000 audit: BPF prog-id=225 op=LOAD Jan 20 15:11:09.225000 audit: BPF prog-id=226 op=LOAD Jan 20 15:11:09.225000 audit[4597]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4580 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313930636163303365316631656130663262353030303135373238 Jan 20 15:11:09.225000 audit: BPF prog-id=226 op=UNLOAD Jan 20 15:11:09.225000 audit[4597]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4580 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.225000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313930636163303365316631656130663262353030303135373238 Jan 20 15:11:09.226000 audit: BPF prog-id=227 op=LOAD Jan 20 15:11:09.226000 audit[4597]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4580 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313930636163303365316631656130663262353030303135373238 Jan 20 15:11:09.226000 audit: BPF prog-id=228 op=LOAD Jan 20 
15:11:09.226000 audit[4597]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4580 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313930636163303365316631656130663262353030303135373238 Jan 20 15:11:09.226000 audit: BPF prog-id=228 op=UNLOAD Jan 20 15:11:09.226000 audit[4597]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4580 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313930636163303365316631656130663262353030303135373238 Jan 20 15:11:09.226000 audit: BPF prog-id=227 op=UNLOAD Jan 20 15:11:09.226000 audit[4597]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4580 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313930636163303365316631656130663262353030303135373238 Jan 20 15:11:09.226000 
audit: BPF prog-id=229 op=LOAD Jan 20 15:11:09.226000 audit[4597]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4580 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564313930636163303365316631656130663262353030303135373238 Jan 20 15:11:09.231682 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:11:09.259265 containerd[1658]: time="2026-01-20T15:11:09.259079867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:09.266130 systemd[1]: Started cri-containerd-f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f.scope - libcontainer container f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f. 
Jan 20 15:11:09.276389 containerd[1658]: time="2026-01-20T15:11:09.275326091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:11:09.276389 containerd[1658]: time="2026-01-20T15:11:09.275635939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:09.281676 kubelet[2881]: E0120 15:11:09.279790 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:09.281676 kubelet[2881]: E0120 15:11:09.281226 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:09.281676 kubelet[2881]: E0120 15:11:09.281576 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9hzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-l6nzr_calico-apiserver(26e4b1e4-d471-4e91-bbfc-9aa64bff08f3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:09.283552 kubelet[2881]: E0120 15:11:09.283397 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:11:09.305425 containerd[1658]: time="2026-01-20T15:11:09.305314788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lmlng,Uid:5fa31347-4392-4f2f-a0ac-7346e7069fc9,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d190cac03e1f1ea0f2b50001572883e61f28395638b83a59f90e7ce6acabb9c\"" Jan 20 15:11:09.310043 containerd[1658]: time="2026-01-20T15:11:09.310023434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 15:11:09.314000 audit: BPF prog-id=230 op=LOAD Jan 20 15:11:09.315000 audit: BPF prog-id=231 op=LOAD Jan 20 15:11:09.315000 audit[4643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4632 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639663063316538666535353339633862386261663135313566343364 Jan 20 15:11:09.316000 audit: BPF prog-id=231 op=UNLOAD Jan 20 15:11:09.316000 audit[4643]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4632 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639663063316538666535353339633862386261663135313566343364 Jan 20 15:11:09.316000 audit: BPF prog-id=232 op=LOAD Jan 20 15:11:09.316000 audit[4643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4632 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639663063316538666535353339633862386261663135313566343364 Jan 20 15:11:09.317000 audit: BPF prog-id=233 op=LOAD Jan 20 15:11:09.317000 audit[4643]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4632 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639663063316538666535353339633862386261663135313566343364 Jan 20 15:11:09.317000 audit: BPF prog-id=233 op=UNLOAD 
Jan 20 15:11:09.317000 audit[4643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4632 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639663063316538666535353339633862386261663135313566343364 Jan 20 15:11:09.317000 audit: BPF prog-id=232 op=UNLOAD Jan 20 15:11:09.317000 audit[4643]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4632 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639663063316538666535353339633862386261663135313566343364 Jan 20 15:11:09.317000 audit: BPF prog-id=234 op=LOAD Jan 20 15:11:09.317000 audit[4643]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4632 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639663063316538666535353339633862386261663135313566343364 Jan 20 15:11:09.319683 
systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:11:09.347528 containerd[1658]: time="2026-01-20T15:11:09.347459672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jr4nz,Uid:c4e14075-1569-42bc-b38f-776a269a4fcd,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9f0c1e8fe5539c8b8baf1515f43d3b2162c5f4e88b4a2c3120e9a823c1d8e9f\"" Jan 20 15:11:09.393501 containerd[1658]: time="2026-01-20T15:11:09.393243576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:09.395748 containerd[1658]: time="2026-01-20T15:11:09.395577791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 15:11:09.395748 containerd[1658]: time="2026-01-20T15:11:09.395658303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:09.396356 kubelet[2881]: E0120 15:11:09.396284 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:11:09.396445 kubelet[2881]: E0120 15:11:09.396367 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:11:09.396817 kubelet[2881]: E0120 15:11:09.396634 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2mr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lmlng_calico-system(5fa31347-4392-4f2f-a0ac-7346e7069fc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:09.397983 containerd[1658]: time="2026-01-20T15:11:09.397426882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 15:11:09.398051 kubelet[2881]: E0120 15:11:09.397793 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:11:09.466052 containerd[1658]: time="2026-01-20T15:11:09.465943745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:09.468398 containerd[1658]: time="2026-01-20T15:11:09.468312942Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 15:11:09.468555 containerd[1658]: time="2026-01-20T15:11:09.468473862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:09.469252 kubelet[2881]: E0120 15:11:09.469102 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:11:09.469252 kubelet[2881]: E0120 15:11:09.469172 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:11:09.469472 kubelet[2881]: E0120 15:11:09.469383 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 15:11:09.472421 containerd[1658]: time="2026-01-20T15:11:09.472331290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 15:11:09.559592 containerd[1658]: time="2026-01-20T15:11:09.559325392Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:09.574392 containerd[1658]: time="2026-01-20T15:11:09.574283976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 15:11:09.574392 containerd[1658]: time="2026-01-20T15:11:09.574344359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:09.574744 kubelet[2881]: E0120 15:11:09.574679 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:11:09.574838 kubelet[2881]: E0120 15:11:09.574760 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:11:09.575095 kubelet[2881]: E0120 15:11:09.574977 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:09.576386 kubelet[2881]: E0120 15:11:09.576299 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:11:09.636364 kubelet[2881]: E0120 15:11:09.636011 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:09.636364 kubelet[2881]: E0120 15:11:09.636095 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:09.638944 containerd[1658]: time="2026-01-20T15:11:09.638620797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqlpn,Uid:eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3,Namespace:kube-system,Attempt:0,}" Jan 20 15:11:09.638944 containerd[1658]: time="2026-01-20T15:11:09.638734174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7s8xj,Uid:62a9c2f3-dfd8-41e2-bf5d-65b847256fb1,Namespace:kube-system,Attempt:0,}" Jan 20 15:11:09.871401 systemd-networkd[1319]: calid4930c133fe: Link UP Jan 20 15:11:09.873544 systemd-networkd[1319]: calid4930c133fe: Gained carrier Jan 20 15:11:09.899117 
systemd-networkd[1319]: cali732238e58d6: Gained IPv6LL Jan 20 15:11:09.900210 containerd[1658]: 2026-01-20 15:11:09.735 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0 coredns-668d6bf9bc- kube-system 62a9c2f3-dfd8-41e2-bf5d-65b847256fb1 869 0 2026-01-20 15:10:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7s8xj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid4930c133fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7s8xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7s8xj-" Jan 20 15:11:09.900210 containerd[1658]: 2026-01-20 15:11:09.736 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7s8xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" Jan 20 15:11:09.900210 containerd[1658]: 2026-01-20 15:11:09.792 [INFO][4704] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" HandleID="k8s-pod-network.61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Workload="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.792 [INFO][4704] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" HandleID="k8s-pod-network.61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" 
Workload="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000cd4e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7s8xj", "timestamp":"2026-01-20 15:11:09.792323505 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.792 [INFO][4704] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.792 [INFO][4704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.792 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.805 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" host="localhost" Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.818 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.826 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.829 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.833 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:09.901242 containerd[1658]: 2026-01-20 15:11:09.834 [INFO][4704] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" host="localhost" Jan 20 15:11:09.901585 containerd[1658]: 2026-01-20 15:11:09.836 [INFO][4704] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8 Jan 20 15:11:09.901585 containerd[1658]: 2026-01-20 15:11:09.847 [INFO][4704] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" host="localhost" Jan 20 15:11:09.901585 containerd[1658]: 2026-01-20 15:11:09.860 [INFO][4704] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" host="localhost" Jan 20 15:11:09.901585 containerd[1658]: 2026-01-20 15:11:09.860 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" host="localhost" Jan 20 15:11:09.901585 containerd[1658]: 2026-01-20 15:11:09.860 [INFO][4704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 15:11:09.901585 containerd[1658]: 2026-01-20 15:11:09.860 [INFO][4704] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" HandleID="k8s-pod-network.61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Workload="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" Jan 20 15:11:09.901812 containerd[1658]: 2026-01-20 15:11:09.866 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7s8xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"62a9c2f3-dfd8-41e2-bf5d-65b847256fb1", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7s8xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4930c133fe", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:09.901990 containerd[1658]: 2026-01-20 15:11:09.867 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7s8xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" Jan 20 15:11:09.901990 containerd[1658]: 2026-01-20 15:11:09.867 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4930c133fe ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7s8xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" Jan 20 15:11:09.901990 containerd[1658]: 2026-01-20 15:11:09.877 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7s8xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" Jan 20 15:11:09.902100 containerd[1658]: 2026-01-20 15:11:09.878 [INFO][4678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7s8xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"62a9c2f3-dfd8-41e2-bf5d-65b847256fb1", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8", Pod:"coredns-668d6bf9bc-7s8xj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid4930c133fe", MAC:"9e:01:3a:aa:5c:8c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:09.902100 containerd[1658]: 2026-01-20 15:11:09.894 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7s8xj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7s8xj-eth0" Jan 20 15:11:09.910105 kubelet[2881]: E0120 15:11:09.909966 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:11:09.916228 kubelet[2881]: E0120 15:11:09.916113 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:11:09.920807 kubelet[2881]: E0120 15:11:09.920645 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:11:09.944000 audit[4736]: NETFILTER_CFG table=filter:129 family=2 entries=58 op=nft_register_chain pid=4736 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:09.944000 audit[4736]: SYSCALL arch=c000003e syscall=46 success=yes exit=27304 a0=3 a1=7ffccca73b00 a2=0 a3=7ffccca73aec items=0 ppid=4064 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:09.944000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:09.989941 containerd[1658]: time="2026-01-20T15:11:09.987355701Z" level=info msg="connecting to shim 61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8" address="unix:///run/containerd/s/9c3181a48cc3b33ca73aaf6140bb46bcbecb896ba16c74cd95d9df5840075b1b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:11:10.021000 audit[4762]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:10.021000 audit[4762]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffced74d340 a2=0 a3=7ffced74d32c items=0 ppid=3041 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.021000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:10.025000 audit[4762]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:10.025000 audit[4762]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffced74d340 a2=0 a3=0 items=0 ppid=3041 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:10.049589 systemd[1]: Started cri-containerd-61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8.scope - libcontainer container 61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8. 
Jan 20 15:11:10.049000 audit[4774]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:10.049000 audit[4774]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc1ceff4d0 a2=0 a3=7ffc1ceff4bc items=0 ppid=3041 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:10.055000 audit[4774]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:10.055000 audit[4774]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc1ceff4d0 a2=0 a3=0 items=0 ppid=3041 pid=4774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:10.064289 systemd-networkd[1319]: cali02f6cbb6e15: Link UP Jan 20 15:11:10.066222 systemd-networkd[1319]: cali02f6cbb6e15: Gained carrier Jan 20 15:11:10.074000 audit: BPF prog-id=235 op=LOAD Jan 20 15:11:10.077000 audit: BPF prog-id=236 op=LOAD Jan 20 15:11:10.077000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.077000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616533323861633838663362386562356237623364613639643939 Jan 20 15:11:10.077000 audit: BPF prog-id=236 op=UNLOAD Jan 20 15:11:10.077000 audit[4758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616533323861633838663362386562356237623364613639643939 Jan 20 15:11:10.078000 audit: BPF prog-id=237 op=LOAD Jan 20 15:11:10.078000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616533323861633838663362386562356237623364613639643939 Jan 20 15:11:10.078000 audit: BPF prog-id=238 op=LOAD Jan 20 15:11:10.078000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 15:11:10.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616533323861633838663362386562356237623364613639643939 Jan 20 15:11:10.078000 audit: BPF prog-id=238 op=UNLOAD Jan 20 15:11:10.078000 audit[4758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616533323861633838663362386562356237623364613639643939 Jan 20 15:11:10.078000 audit: BPF prog-id=237 op=UNLOAD Jan 20 15:11:10.078000 audit[4758]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.078000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616533323861633838663362386562356237623364613639643939 Jan 20 15:11:10.079000 audit: BPF prog-id=239 op=LOAD Jan 20 15:11:10.079000 audit[4758]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4746 pid=4758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.079000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631616533323861633838663362386562356237623364613639643939 Jan 20 15:11:10.082038 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.750 [INFO][4676] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0 coredns-668d6bf9bc- kube-system eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3 861 0 2026-01-20 15:10:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rqlpn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali02f6cbb6e15 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Namespace="kube-system" Pod="coredns-668d6bf9bc-rqlpn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rqlpn-" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.750 [INFO][4676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Namespace="kube-system" Pod="coredns-668d6bf9bc-rqlpn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.813 [INFO][4710] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" 
HandleID="k8s-pod-network.be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Workload="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.814 [INFO][4710] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" HandleID="k8s-pod-network.be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Workload="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rqlpn", "timestamp":"2026-01-20 15:11:09.813298356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.814 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.860 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.861 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.910 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" host="localhost" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.938 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.979 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:09.991 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:10.010 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:10.010 [INFO][4710] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" host="localhost" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:10.024 [INFO][4710] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:10.034 [INFO][4710] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" host="localhost" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:10.051 [INFO][4710] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" host="localhost" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:10.051 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" host="localhost" Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:10.052 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 15:11:10.099060 containerd[1658]: 2026-01-20 15:11:10.052 [INFO][4710] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" HandleID="k8s-pod-network.be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Workload="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" Jan 20 15:11:10.100070 containerd[1658]: 2026-01-20 15:11:10.057 [INFO][4676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Namespace="kube-system" Pod="coredns-668d6bf9bc-rqlpn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rqlpn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02f6cbb6e15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:10.100070 containerd[1658]: 2026-01-20 15:11:10.058 [INFO][4676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Namespace="kube-system" Pod="coredns-668d6bf9bc-rqlpn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" Jan 20 15:11:10.100070 containerd[1658]: 2026-01-20 15:11:10.058 [INFO][4676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02f6cbb6e15 ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Namespace="kube-system" Pod="coredns-668d6bf9bc-rqlpn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" Jan 20 15:11:10.100070 containerd[1658]: 2026-01-20 15:11:10.067 [INFO][4676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Namespace="kube-system" Pod="coredns-668d6bf9bc-rqlpn" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" Jan 20 15:11:10.100070 containerd[1658]: 2026-01-20 15:11:10.069 [INFO][4676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Namespace="kube-system" Pod="coredns-668d6bf9bc-rqlpn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c", Pod:"coredns-668d6bf9bc-rqlpn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali02f6cbb6e15", MAC:"5e:97:69:65:7c:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:10.100070 containerd[1658]: 2026-01-20 15:11:10.094 [INFO][4676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" Namespace="kube-system" Pod="coredns-668d6bf9bc-rqlpn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rqlpn-eth0" Jan 20 15:11:10.149788 containerd[1658]: time="2026-01-20T15:11:10.149532774Z" level=info msg="connecting to shim be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c" address="unix:///run/containerd/s/91fa5d4e8f8f67fc3ad563cbd679fb29145535c8ad3e8f65c89640d2ffad394c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:11:10.149000 audit[4802]: NETFILTER_CFG table=filter:134 family=2 entries=52 op=nft_register_chain pid=4802 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:10.149000 audit[4802]: SYSCALL arch=c000003e syscall=46 success=yes exit=23908 a0=3 a1=7ffec91ccd10 a2=0 a3=7ffec91cccfc items=0 ppid=4064 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.149000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:10.177348 containerd[1658]: time="2026-01-20T15:11:10.177304333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7s8xj,Uid:62a9c2f3-dfd8-41e2-bf5d-65b847256fb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8\"" Jan 
20 15:11:10.179547 kubelet[2881]: E0120 15:11:10.179515 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:10.184239 containerd[1658]: time="2026-01-20T15:11:10.184007791Z" level=info msg="CreateContainer within sandbox \"61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 15:11:10.207550 containerd[1658]: time="2026-01-20T15:11:10.207504601Z" level=info msg="Container 55f6c06008985295925dce8abc578c77ac15ba70628866f9763af79da0deebc4: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:11:10.214279 systemd[1]: Started cri-containerd-be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c.scope - libcontainer container be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c. Jan 20 15:11:10.217125 containerd[1658]: time="2026-01-20T15:11:10.217075709Z" level=info msg="CreateContainer within sandbox \"61ae328ac88f3b8eb5b7b3da69d99ab9f0e20ddcf44d23f9ce3cf1f9bf3a84b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55f6c06008985295925dce8abc578c77ac15ba70628866f9763af79da0deebc4\"" Jan 20 15:11:10.218614 containerd[1658]: time="2026-01-20T15:11:10.218445195Z" level=info msg="StartContainer for \"55f6c06008985295925dce8abc578c77ac15ba70628866f9763af79da0deebc4\"" Jan 20 15:11:10.220186 containerd[1658]: time="2026-01-20T15:11:10.220162920Z" level=info msg="connecting to shim 55f6c06008985295925dce8abc578c77ac15ba70628866f9763af79da0deebc4" address="unix:///run/containerd/s/9c3181a48cc3b33ca73aaf6140bb46bcbecb896ba16c74cd95d9df5840075b1b" protocol=ttrpc version=3 Jan 20 15:11:10.243000 audit: BPF prog-id=240 op=LOAD Jan 20 15:11:10.244000 audit: BPF prog-id=241 op=LOAD Jan 20 15:11:10.244000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4803 pid=4820 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265363232316639306263383536343931303431323335343462643531 Jan 20 15:11:10.245000 audit: BPF prog-id=241 op=UNLOAD Jan 20 15:11:10.245000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4803 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265363232316639306263383536343931303431323335343462643531 Jan 20 15:11:10.245000 audit: BPF prog-id=242 op=LOAD Jan 20 15:11:10.245000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4803 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265363232316639306263383536343931303431323335343462643531 Jan 20 15:11:10.245000 audit: BPF prog-id=243 op=LOAD Jan 20 15:11:10.245000 audit[4820]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c00017a218 a2=98 a3=0 items=0 ppid=4803 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265363232316639306263383536343931303431323335343462643531 Jan 20 15:11:10.246000 audit: BPF prog-id=243 op=UNLOAD Jan 20 15:11:10.246000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4803 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265363232316639306263383536343931303431323335343462643531 Jan 20 15:11:10.246000 audit: BPF prog-id=242 op=UNLOAD Jan 20 15:11:10.246000 audit[4820]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4803 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265363232316639306263383536343931303431323335343462643531 Jan 20 15:11:10.247000 audit: BPF prog-id=244 op=LOAD Jan 20 15:11:10.247000 audit[4820]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4803 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265363232316639306263383536343931303431323335343462643531 Jan 20 15:11:10.260263 systemd[1]: Started cri-containerd-55f6c06008985295925dce8abc578c77ac15ba70628866f9763af79da0deebc4.scope - libcontainer container 55f6c06008985295925dce8abc578c77ac15ba70628866f9763af79da0deebc4. Jan 20 15:11:10.262154 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:11:10.280066 systemd-networkd[1319]: cali44e6549eb5d: Gained IPv6LL Jan 20 15:11:10.289000 audit: BPF prog-id=245 op=LOAD Jan 20 15:11:10.290000 audit: BPF prog-id=246 op=LOAD Jan 20 15:11:10.290000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4746 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663663303630303839383532393539323564636538616263353738 Jan 20 15:11:10.290000 audit: BPF prog-id=246 op=UNLOAD Jan 20 15:11:10.290000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663663303630303839383532393539323564636538616263353738 Jan 20 15:11:10.291000 audit: BPF prog-id=247 op=LOAD Jan 20 15:11:10.291000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4746 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663663303630303839383532393539323564636538616263353738 Jan 20 15:11:10.291000 audit: BPF prog-id=248 op=LOAD Jan 20 15:11:10.291000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4746 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663663303630303839383532393539323564636538616263353738 Jan 20 15:11:10.291000 audit: BPF prog-id=248 op=UNLOAD Jan 20 15:11:10.291000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4834 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663663303630303839383532393539323564636538616263353738 Jan 20 15:11:10.292000 audit: BPF prog-id=247 op=UNLOAD Jan 20 15:11:10.292000 audit[4834]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4746 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663663303630303839383532393539323564636538616263353738 Jan 20 15:11:10.292000 audit: BPF prog-id=249 op=LOAD Jan 20 15:11:10.292000 audit[4834]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4746 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535663663303630303839383532393539323564636538616263353738 Jan 20 15:11:10.333144 containerd[1658]: time="2026-01-20T15:11:10.333097579Z" level=info msg="StartContainer for 
\"55f6c06008985295925dce8abc578c77ac15ba70628866f9763af79da0deebc4\" returns successfully" Jan 20 15:11:10.336262 containerd[1658]: time="2026-01-20T15:11:10.335823426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rqlpn,Uid:eaf6daf4-3dcb-4cb3-bf6c-352fdf3b26d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c\"" Jan 20 15:11:10.340330 kubelet[2881]: E0120 15:11:10.340220 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:10.342585 containerd[1658]: time="2026-01-20T15:11:10.342481079Z" level=info msg="CreateContainer within sandbox \"be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 15:11:10.384927 containerd[1658]: time="2026-01-20T15:11:10.384672521Z" level=info msg="Container febe5a1efddade14cf0d31bcdf0e9dbae62b7b768c44cc24ef68501cbe14fd64: CDI devices from CRI Config.CDIDevices: []" Jan 20 15:11:10.394891 containerd[1658]: time="2026-01-20T15:11:10.394527715Z" level=info msg="CreateContainer within sandbox \"be6221f90bc85649104123544bd513989eebf7cb1628bf7faf446f01dce0822c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"febe5a1efddade14cf0d31bcdf0e9dbae62b7b768c44cc24ef68501cbe14fd64\"" Jan 20 15:11:10.396804 containerd[1658]: time="2026-01-20T15:11:10.396286846Z" level=info msg="StartContainer for \"febe5a1efddade14cf0d31bcdf0e9dbae62b7b768c44cc24ef68501cbe14fd64\"" Jan 20 15:11:10.399505 containerd[1658]: time="2026-01-20T15:11:10.397824395Z" level=info msg="connecting to shim febe5a1efddade14cf0d31bcdf0e9dbae62b7b768c44cc24ef68501cbe14fd64" address="unix:///run/containerd/s/91fa5d4e8f8f67fc3ad563cbd679fb29145535c8ad3e8f65c89640d2ffad394c" protocol=ttrpc version=3 Jan 20 15:11:10.437242 systemd[1]: Started 
cri-containerd-febe5a1efddade14cf0d31bcdf0e9dbae62b7b768c44cc24ef68501cbe14fd64.scope - libcontainer container febe5a1efddade14cf0d31bcdf0e9dbae62b7b768c44cc24ef68501cbe14fd64. Jan 20 15:11:10.488000 audit: BPF prog-id=250 op=LOAD Jan 20 15:11:10.489000 audit: BPF prog-id=251 op=LOAD Jan 20 15:11:10.489000 audit[4876]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4803 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665626535613165666464616465313463663064333162636466306539 Jan 20 15:11:10.490000 audit: BPF prog-id=251 op=UNLOAD Jan 20 15:11:10.490000 audit[4876]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4803 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665626535613165666464616465313463663064333162636466306539 Jan 20 15:11:10.490000 audit: BPF prog-id=252 op=LOAD Jan 20 15:11:10.490000 audit[4876]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4803 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.490000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665626535613165666464616465313463663064333162636466306539 Jan 20 15:11:10.491000 audit: BPF prog-id=253 op=LOAD Jan 20 15:11:10.491000 audit[4876]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4803 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665626535613165666464616465313463663064333162636466306539 Jan 20 15:11:10.491000 audit: BPF prog-id=253 op=UNLOAD Jan 20 15:11:10.491000 audit[4876]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4803 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665626535613165666464616465313463663064333162636466306539 Jan 20 15:11:10.491000 audit: BPF prog-id=252 op=UNLOAD Jan 20 15:11:10.491000 audit[4876]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4803 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 15:11:10.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665626535613165666464616465313463663064333162636466306539 Jan 20 15:11:10.491000 audit: BPF prog-id=254 op=LOAD Jan 20 15:11:10.491000 audit[4876]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4803 pid=4876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665626535613165666464616465313463663064333162636466306539 Jan 20 15:11:10.535775 systemd-networkd[1319]: cali92ce71335d4: Gained IPv6LL Jan 20 15:11:10.550958 containerd[1658]: time="2026-01-20T15:11:10.550804112Z" level=info msg="StartContainer for \"febe5a1efddade14cf0d31bcdf0e9dbae62b7b768c44cc24ef68501cbe14fd64\" returns successfully" Jan 20 15:11:10.633406 containerd[1658]: time="2026-01-20T15:11:10.633342201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6848f96b7-bggk4,Uid:5b2bf4f6-7ff7-4a4a-b602-713112aeec36,Namespace:calico-apiserver,Attempt:0,}" Jan 20 15:11:10.884618 systemd-networkd[1319]: calic0d35f51829: Link UP Jan 20 15:11:10.886343 systemd-networkd[1319]: calic0d35f51829: Gained carrier Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.742 [INFO][4916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0 calico-apiserver-6848f96b7- calico-apiserver 
5b2bf4f6-7ff7-4a4a-b602-713112aeec36 870 0 2026-01-20 15:10:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6848f96b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6848f96b7-bggk4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic0d35f51829 [] [] }} ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-bggk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--bggk4-" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.743 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-bggk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.812 [INFO][4929] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" HandleID="k8s-pod-network.89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Workload="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.812 [INFO][4929] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" HandleID="k8s-pod-network.89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Workload="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-6848f96b7-bggk4", "timestamp":"2026-01-20 15:11:10.812446319 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.812 [INFO][4929] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.812 [INFO][4929] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.812 [INFO][4929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.824 [INFO][4929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" host="localhost" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.831 [INFO][4929] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.838 [INFO][4929] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.841 [INFO][4929] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.844 [INFO][4929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.844 [INFO][4929] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" host="localhost" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.846 [INFO][4929] ipam/ipam.go 
1780: Creating new handle: k8s-pod-network.89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.852 [INFO][4929] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" host="localhost" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.867 [INFO][4929] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" host="localhost" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.869 [INFO][4929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" host="localhost" Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.869 [INFO][4929] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 15:11:10.901785 containerd[1658]: 2026-01-20 15:11:10.869 [INFO][4929] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" HandleID="k8s-pod-network.89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Workload="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" Jan 20 15:11:10.903223 containerd[1658]: 2026-01-20 15:11:10.881 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-bggk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0", GenerateName:"calico-apiserver-6848f96b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b2bf4f6-7ff7-4a4a-b602-713112aeec36", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6848f96b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6848f96b7-bggk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic0d35f51829", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:10.903223 containerd[1658]: 2026-01-20 15:11:10.882 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-bggk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" Jan 20 15:11:10.903223 containerd[1658]: 2026-01-20 15:11:10.882 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0d35f51829 ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-bggk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" Jan 20 15:11:10.903223 containerd[1658]: 2026-01-20 15:11:10.885 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-bggk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" Jan 20 15:11:10.903223 containerd[1658]: 2026-01-20 15:11:10.886 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-bggk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0", GenerateName:"calico-apiserver-6848f96b7-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"5b2bf4f6-7ff7-4a4a-b602-713112aeec36", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 15, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6848f96b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e", Pod:"calico-apiserver-6848f96b7-bggk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic0d35f51829", MAC:"0a:03:c9:1a:f8:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 15:11:10.903223 containerd[1658]: 2026-01-20 15:11:10.897 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" Namespace="calico-apiserver" Pod="calico-apiserver-6848f96b7-bggk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6848f96b7--bggk4-eth0" Jan 20 15:11:10.941910 kubelet[2881]: E0120 15:11:10.941102 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:10.947364 containerd[1658]: 
time="2026-01-20T15:11:10.947320485Z" level=info msg="connecting to shim 89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e" address="unix:///run/containerd/s/aa9fba292c51f305808c1166e2dd14f88452b58b905f5d38f98b5add40fee1e1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 15:11:10.961203 kubelet[2881]: E0120 15:11:10.958671 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:10.963133 kubelet[2881]: E0120 15:11:10.962515 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:11:10.972153 kubelet[2881]: E0120 15:11:10.972116 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:11:10.975302 kubelet[2881]: E0120 15:11:10.975241 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:11:10.976000 audit[4959]: NETFILTER_CFG table=filter:135 family=2 entries=61 op=nft_register_chain pid=4959 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 15:11:10.976000 audit[4959]: SYSCALL arch=c000003e syscall=46 success=yes exit=29016 a0=3 a1=7ffcdad0e1c0 a2=0 a3=7ffcdad0e1ac items=0 ppid=4064 pid=4959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:10.976000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 15:11:10.992460 kubelet[2881]: I0120 15:11:10.992254 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rqlpn" podStartSLOduration=39.992232366 podStartE2EDuration="39.992232366s" podCreationTimestamp="2026-01-20 15:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:11:10.990259234 +0000 UTC m=+44.518689107" watchObservedRunningTime="2026-01-20 15:11:10.992232366 +0000 UTC m=+44.520662259" Jan 20 15:11:11.015099 systemd[1]: Started 
cri-containerd-89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e.scope - libcontainer container 89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e. Jan 20 15:11:11.032000 audit[4981]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4981 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:11.032000 audit[4981]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff424df410 a2=0 a3=7fff424df3fc items=0 ppid=3041 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:11.038000 audit[4981]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4981 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:11.038000 audit[4981]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff424df410 a2=0 a3=0 items=0 ppid=3041 pid=4981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:11.043000 audit: BPF prog-id=255 op=LOAD Jan 20 15:11:11.044000 audit: BPF prog-id=256 op=LOAD Jan 20 15:11:11.044000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4952 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.044000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839353334666638383032663831373239313330353632336136393331 Jan 20 15:11:11.044000 audit: BPF prog-id=256 op=UNLOAD Jan 20 15:11:11.044000 audit[4966]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4952 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839353334666638383032663831373239313330353632336136393331 Jan 20 15:11:11.044000 audit: BPF prog-id=257 op=LOAD Jan 20 15:11:11.044000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4952 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839353334666638383032663831373239313330353632336136393331 Jan 20 15:11:11.044000 audit: BPF prog-id=258 op=LOAD Jan 20 15:11:11.044000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4952 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839353334666638383032663831373239313330353632336136393331 Jan 20 15:11:11.044000 audit: BPF prog-id=258 op=UNLOAD Jan 20 15:11:11.044000 audit[4966]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4952 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839353334666638383032663831373239313330353632336136393331 Jan 20 15:11:11.044000 audit: BPF prog-id=257 op=UNLOAD Jan 20 15:11:11.044000 audit[4966]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4952 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839353334666638383032663831373239313330353632336136393331 Jan 20 15:11:11.044000 audit: BPF prog-id=259 op=LOAD Jan 20 15:11:11.044000 audit[4966]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4952 pid=4966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839353334666638383032663831373239313330353632336136393331 Jan 20 15:11:11.047367 kubelet[2881]: I0120 15:11:11.046443 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7s8xj" podStartSLOduration=40.046422278 podStartE2EDuration="40.046422278s" podCreationTimestamp="2026-01-20 15:10:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 15:11:11.042404581 +0000 UTC m=+44.570834454" watchObservedRunningTime="2026-01-20 15:11:11.046422278 +0000 UTC m=+44.574852612" Jan 20 15:11:11.066823 systemd-resolved[1287]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 15:11:11.160520 containerd[1658]: time="2026-01-20T15:11:11.160265021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6848f96b7-bggk4,Uid:5b2bf4f6-7ff7-4a4a-b602-713112aeec36,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"89534ff8802f817291305623a69314be0e66c11603a1e594a2d705a5ce30ca8e\"" Jan 20 15:11:11.163612 containerd[1658]: time="2026-01-20T15:11:11.163581098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:11:11.176160 systemd-networkd[1319]: cali02f6cbb6e15: Gained IPv6LL Jan 20 15:11:11.176952 systemd-networkd[1319]: calid4930c133fe: Gained IPv6LL Jan 20 15:11:11.240334 containerd[1658]: time="2026-01-20T15:11:11.240152225Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:11.242775 containerd[1658]: time="2026-01-20T15:11:11.242641491Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:11:11.242932 containerd[1658]: time="2026-01-20T15:11:11.242783785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:11.243095 kubelet[2881]: E0120 15:11:11.242994 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:11.243095 kubelet[2881]: E0120 15:11:11.243063 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:11.243355 kubelet[2881]: E0120 15:11:11.243226 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9ccp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-bggk4_calico-apiserver(5b2bf4f6-7ff7-4a4a-b602-713112aeec36): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:11.245044 kubelet[2881]: E0120 15:11:11.244985 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:11:11.957206 kubelet[2881]: E0120 15:11:11.957028 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:11.958553 kubelet[2881]: E0120 15:11:11.957985 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:11.958913 kubelet[2881]: E0120 15:11:11.958818 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:11:11.998000 audit[4995]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:12.002219 kernel: kauditd_printk_skb: 227 callbacks suppressed Jan 20 
15:11:12.002361 kernel: audit: type=1325 audit(1768921871.998:750): table=filter:138 family=2 entries=17 op=nft_register_rule pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:12.007155 systemd-networkd[1319]: calic0d35f51829: Gained IPv6LL Jan 20 15:11:11.998000 audit[4995]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed4d16730 a2=0 a3=7ffed4d1671c items=0 ppid=3041 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:11.998000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:12.010973 kernel: audit: type=1300 audit(1768921871.998:750): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed4d16730 a2=0 a3=7ffed4d1671c items=0 ppid=3041 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:12.011040 kernel: audit: type=1327 audit(1768921871.998:750): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:12.041000 audit[4995]: NETFILTER_CFG table=nat:139 family=2 entries=47 op=nft_register_chain pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:12.041000 audit[4995]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffed4d16730 a2=0 a3=7ffed4d1671c items=0 ppid=3041 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:12.067447 kernel: audit: type=1325 audit(1768921872.041:751): table=nat:139 
family=2 entries=47 op=nft_register_chain pid=4995 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:11:12.067575 kernel: audit: type=1300 audit(1768921872.041:751): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffed4d16730 a2=0 a3=7ffed4d1671c items=0 ppid=3041 pid=4995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:11:12.067624 kernel: audit: type=1327 audit(1768921872.041:751): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:12.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:11:12.959452 kubelet[2881]: E0120 15:11:12.959296 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:12.959452 kubelet[2881]: E0120 15:11:12.959301 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:12.961170 kubelet[2881]: E0120 15:11:12.960775 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:11:16.634562 containerd[1658]: time="2026-01-20T15:11:16.634431772Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 15:11:16.715380 containerd[1658]: time="2026-01-20T15:11:16.715327428Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:16.717132 containerd[1658]: time="2026-01-20T15:11:16.717033135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 15:11:16.717132 containerd[1658]: time="2026-01-20T15:11:16.717102625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:16.717837 kubelet[2881]: E0120 15:11:16.717476 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:11:16.717837 kubelet[2881]: E0120 15:11:16.717544 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:11:16.717837 kubelet[2881]: E0120 15:11:16.717771 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a8dee0b7dc7f4ac08b9f27fb940bf054,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:16.720144 containerd[1658]: time="2026-01-20T15:11:16.720120608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 15:11:16.809581 containerd[1658]: 
time="2026-01-20T15:11:16.809419636Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:16.811348 containerd[1658]: time="2026-01-20T15:11:16.811218436Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 15:11:16.811462 containerd[1658]: time="2026-01-20T15:11:16.811352512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:16.811954 kubelet[2881]: E0120 15:11:16.811765 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:11:16.812310 kubelet[2881]: E0120 15:11:16.811962 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:11:16.812310 kubelet[2881]: E0120 15:11:16.812091 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:16.813354 kubelet[2881]: E0120 15:11:16.813292 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:11:21.633427 containerd[1658]: time="2026-01-20T15:11:21.633094814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 15:11:21.743913 containerd[1658]: time="2026-01-20T15:11:21.743784457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:21.745632 containerd[1658]: time="2026-01-20T15:11:21.745534773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 15:11:21.745632 containerd[1658]: time="2026-01-20T15:11:21.745579567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:21.745906 kubelet[2881]: E0120 15:11:21.745773 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:11:21.745906 kubelet[2881]: E0120 15:11:21.745817 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:11:21.746430 containerd[1658]: time="2026-01-20T15:11:21.746157756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 15:11:21.747101 kubelet[2881]: E0120 15:11:21.746958 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztvjm,ReadOnly:true,MountPath
:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bf79bffbc-qt64q_calico-system(746e4480-dd89-4ee6-ba05-3e214024a83b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:21.748289 kubelet[2881]: E0120 15:11:21.748257 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:11:21.848929 containerd[1658]: time="2026-01-20T15:11:21.848738929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:21.850676 containerd[1658]: time="2026-01-20T15:11:21.850575296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 15:11:21.850793 containerd[1658]: time="2026-01-20T15:11:21.850683205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:21.851047 kubelet[2881]: E0120 15:11:21.850946 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:11:21.851047 kubelet[2881]: E0120 15:11:21.851033 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:11:21.851241 kubelet[2881]: E0120 15:11:21.851167 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 15:11:21.856821 containerd[1658]: time="2026-01-20T15:11:21.856452344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 15:11:21.955981 containerd[1658]: time="2026-01-20T15:11:21.955607283Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:21.959578 containerd[1658]: time="2026-01-20T15:11:21.959368398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 15:11:21.959578 containerd[1658]: time="2026-01-20T15:11:21.959455139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:21.960469 kubelet[2881]: E0120 15:11:21.959826 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:11:21.960469 kubelet[2881]: E0120 15:11:21.960368 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:11:21.960984 kubelet[2881]: E0120 15:11:21.960611 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:21.962354 kubelet[2881]: E0120 15:11:21.962010 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:11:22.633749 containerd[1658]: time="2026-01-20T15:11:22.633157197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 15:11:22.728571 containerd[1658]: time="2026-01-20T15:11:22.728507030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:22.730573 containerd[1658]: time="2026-01-20T15:11:22.730415409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 15:11:22.730573 containerd[1658]: time="2026-01-20T15:11:22.730544848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:22.730818 kubelet[2881]: E0120 15:11:22.730780 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:11:22.730971 kubelet[2881]: E0120 15:11:22.730820 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:11:22.731315 containerd[1658]: time="2026-01-20T15:11:22.731214146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:11:22.731669 kubelet[2881]: E0120 15:11:22.731402 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2mr4,ReadOnly:true,
MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lmlng_calico-system(5fa31347-4392-4f2f-a0ac-7346e7069fc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:22.732783 kubelet[2881]: E0120 15:11:22.732660 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:11:22.811262 containerd[1658]: time="2026-01-20T15:11:22.811166356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:22.813047 containerd[1658]: time="2026-01-20T15:11:22.812961756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:11:22.813216 containerd[1658]: time="2026-01-20T15:11:22.813065845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:22.813327 kubelet[2881]: E0120 15:11:22.813276 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:22.813327 kubelet[2881]: E0120 15:11:22.813321 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:22.813670 kubelet[2881]: E0120 15:11:22.813502 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9hzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-l6nzr_calico-apiserver(26e4b1e4-d471-4e91-bbfc-9aa64bff08f3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:22.815125 kubelet[2881]: E0120 15:11:22.814918 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:11:28.633339 containerd[1658]: time="2026-01-20T15:11:28.633203713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:11:28.724831 containerd[1658]: time="2026-01-20T15:11:28.724646620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:28.726417 containerd[1658]: time="2026-01-20T15:11:28.726359662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:11:28.726495 containerd[1658]: time="2026-01-20T15:11:28.726443404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:28.726808 kubelet[2881]: E0120 15:11:28.726765 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:28.726808 kubelet[2881]: E0120 15:11:28.726808 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:28.727335 kubelet[2881]: E0120 15:11:28.726975 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9ccp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-bggk4_calico-apiserver(5b2bf4f6-7ff7-4a4a-b602-713112aeec36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:28.728270 kubelet[2881]: E0120 15:11:28.728192 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:11:31.634215 kubelet[2881]: E0120 15:11:31.633820 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:11:32.714945 kubelet[2881]: E0120 15:11:32.714399 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:34.632916 kubelet[2881]: E0120 15:11:34.632779 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:11:36.633964 kubelet[2881]: E0120 15:11:36.633818 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:11:36.635774 kubelet[2881]: E0120 15:11:36.635648 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:11:36.635774 kubelet[2881]: E0120 15:11:36.635676 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:11:39.634330 kubelet[2881]: E0120 15:11:39.634223 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:40.658561 kubelet[2881]: E0120 15:11:40.658434 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:11:45.632330 kubelet[2881]: E0120 15:11:45.632155 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:45.633313 kubelet[2881]: E0120 15:11:45.632373 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:46.636403 containerd[1658]: time="2026-01-20T15:11:46.636289077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 15:11:46.738323 containerd[1658]: time="2026-01-20T15:11:46.738125314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:46.740255 containerd[1658]: time="2026-01-20T15:11:46.740088881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 15:11:46.740255 containerd[1658]: time="2026-01-20T15:11:46.740124184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:46.740704 kubelet[2881]: E0120 15:11:46.740607 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:11:46.740704 kubelet[2881]: E0120 15:11:46.740695 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:11:46.741558 kubelet[2881]: E0120 15:11:46.741114 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a8dee0b7dc7f4ac08b9f27fb940bf054,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolic
y:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:46.742385 containerd[1658]: time="2026-01-20T15:11:46.742244455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 15:11:46.817533 containerd[1658]: time="2026-01-20T15:11:46.817429560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:46.819406 containerd[1658]: time="2026-01-20T15:11:46.819258423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 15:11:46.819406 containerd[1658]: time="2026-01-20T15:11:46.819371113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:46.819752 kubelet[2881]: E0120 15:11:46.819686 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:11:46.819964 kubelet[2881]: E0120 15:11:46.819923 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:11:46.820286 kubelet[2881]: E0120 15:11:46.820215 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztvjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bf79bffbc-qt64q_calico-system(746e4480-dd89-4ee6-ba05-3e214024a83b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:46.820803 containerd[1658]: time="2026-01-20T15:11:46.820623451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 15:11:46.822028 kubelet[2881]: E0120 15:11:46.821996 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:11:46.897058 containerd[1658]: time="2026-01-20T15:11:46.896827237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
Jan 20 15:11:46.898587 containerd[1658]: time="2026-01-20T15:11:46.898492135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 15:11:46.898587 containerd[1658]: time="2026-01-20T15:11:46.898538770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:46.898901 kubelet[2881]: E0120 15:11:46.898794 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:11:46.898973 kubelet[2881]: E0120 15:11:46.898900 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:11:46.899137 kubelet[2881]: E0120 15:11:46.899040 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:46.900492 kubelet[2881]: E0120 15:11:46.900375 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:11:47.634398 containerd[1658]: time="2026-01-20T15:11:47.634304115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:11:47.734927 containerd[1658]: time="2026-01-20T15:11:47.734407299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:47.737497 containerd[1658]: time="2026-01-20T15:11:47.737402802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:11:47.737611 containerd[1658]: time="2026-01-20T15:11:47.737548022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:47.739064 kubelet[2881]: E0120 15:11:47.739019 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:47.739258 kubelet[2881]: E0120 15:11:47.739230 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:47.740831 containerd[1658]: time="2026-01-20T15:11:47.740463487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 15:11:47.740995 kubelet[2881]: E0120 15:11:47.740770 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9hzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-l6nzr_calico-apiserver(26e4b1e4-d471-4e91-bbfc-9aa64bff08f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:47.742075 kubelet[2881]: E0120 15:11:47.742008 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:11:47.820665 containerd[1658]: time="2026-01-20T15:11:47.820527944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:47.822675 containerd[1658]: time="2026-01-20T15:11:47.822527162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 15:11:47.822675 containerd[1658]: time="2026-01-20T15:11:47.822627919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:47.822999 kubelet[2881]: E0120 15:11:47.822922 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:11:47.822999 kubelet[2881]: E0120 15:11:47.822984 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:11:47.823408 kubelet[2881]: E0120 15:11:47.823295 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2mr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lmlng_calico-system(5fa31347-4392-4f2f-a0ac-7346e7069fc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:47.824055 containerd[1658]: time="2026-01-20T15:11:47.823633927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 15:11:47.824670 kubelet[2881]: E0120 15:11:47.824636 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:11:47.971992 containerd[1658]: time="2026-01-20T15:11:47.971541784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:47.973833 containerd[1658]: time="2026-01-20T15:11:47.973694663Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 15:11:47.973997 containerd[1658]: time="2026-01-20T15:11:47.973953496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:47.974231 kubelet[2881]: E0120 15:11:47.974166 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:11:47.974231 kubelet[2881]: E0120 15:11:47.974224 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:11:47.974640 kubelet[2881]: E0120 15:11:47.974375 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 15:11:47.977174 containerd[1658]: time="2026-01-20T15:11:47.977037814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 15:11:48.057517 containerd[1658]: time="2026-01-20T15:11:48.057246110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:48.069064 containerd[1658]: time="2026-01-20T15:11:48.068643467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 15:11:48.069064 containerd[1658]: time="2026-01-20T15:11:48.068912418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:48.069680 kubelet[2881]: E0120 15:11:48.069295 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:11:48.069838 kubelet[2881]: E0120 15:11:48.069699 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:11:48.071546 kubelet[2881]: E0120 15:11:48.070564 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:48.072363 kubelet[2881]: E0120 15:11:48.072218 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:11:54.635610 kubelet[2881]: E0120 15:11:54.635515 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:11:54.638123 containerd[1658]: time="2026-01-20T15:11:54.637922750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:11:54.708900 containerd[1658]: time="2026-01-20T15:11:54.708696894Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:11:54.710768 containerd[1658]: time="2026-01-20T15:11:54.710640275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:11:54.711020 containerd[1658]: time="2026-01-20T15:11:54.710689026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:11:54.711118 kubelet[2881]: E0120 15:11:54.711051 
2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:54.711184 kubelet[2881]: E0120 15:11:54.711116 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:11:54.711347 kubelet[2881]: E0120 15:11:54.711265 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9ccp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-bggk4_calico-apiserver(5b2bf4f6-7ff7-4a4a-b602-713112aeec36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:11:54.712775 kubelet[2881]: E0120 15:11:54.712702 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:11:59.633456 kubelet[2881]: E0120 15:11:59.633389 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:11:59.634791 kubelet[2881]: E0120 15:11:59.634638 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:12:01.633150 kubelet[2881]: E0120 15:12:01.633008 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:12:02.637115 kubelet[2881]: E0120 15:12:02.637036 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:12:03.635173 kubelet[2881]: E0120 15:12:03.635049 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:12:08.634238 kubelet[2881]: E0120 15:12:08.633922 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 
15:12:09.632224 kubelet[2881]: E0120 15:12:09.632126 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:12:10.634897 kubelet[2881]: E0120 15:12:10.634739 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:12:11.634695 kubelet[2881]: E0120 15:12:11.634529 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:12:13.634965 kubelet[2881]: E0120 15:12:13.634558 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:12:13.636697 kubelet[2881]: E0120 15:12:13.634796 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:12:13.636697 kubelet[2881]: E0120 15:12:13.634147 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:12:15.136107 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:55710.service - OpenSSH per-connection server daemon (10.0.0.1:55710). Jan 20 15:12:15.147903 kernel: audit: type=1130 audit(1768921935.135:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.116:22-10.0.0.1:55710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 15:12:15.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.116:22-10.0.0.1:55710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:15.279000 audit[5094]: USER_ACCT pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.281177 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 55710 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:12:15.285812 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:12:15.295551 systemd-logind[1631]: New session 9 of user core. Jan 20 15:12:15.282000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.308163 kernel: audit: type=1101 audit(1768921935.279:753): pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.308246 kernel: audit: type=1103 audit(1768921935.282:754): pid=5094 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.315953 kernel: audit: type=1006 audit(1768921935.283:755): pid=5094 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 20 15:12:15.283000 audit[5094]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2bff8c30 a2=3 a3=0 items=0 ppid=1 pid=5094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:15.328129 kernel: audit: type=1300 audit(1768921935.283:755): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2bff8c30 a2=3 a3=0 items=0 ppid=1 pid=5094 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:15.328301 kernel: audit: type=1327 audit(1768921935.283:755): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:15.283000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:15.338391 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 20 15:12:15.345000 audit[5094]: USER_START pid=5094 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.348000 audit[5099]: CRED_ACQ pid=5099 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.389294 kernel: audit: type=1105 audit(1768921935.345:756): pid=5094 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.389414 kernel: audit: type=1103 audit(1768921935.348:757): pid=5099 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.588377 sshd[5099]: Connection closed by 10.0.0.1 port 55710 Jan 20 15:12:15.587488 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Jan 20 15:12:15.589000 audit[5094]: USER_END pid=5094 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.594689 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:55710.service: Deactivated successfully. 
Jan 20 15:12:15.597370 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 15:12:15.600154 systemd-logind[1631]: Session 9 logged out. Waiting for processes to exit. Jan 20 15:12:15.601686 systemd-logind[1631]: Removed session 9. Jan 20 15:12:15.589000 audit[5094]: CRED_DISP pid=5094 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.613472 kernel: audit: type=1106 audit(1768921935.589:758): pid=5094 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.613790 kernel: audit: type=1104 audit(1768921935.589:759): pid=5094 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:15.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.116:22-10.0.0.1:55710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:18.634513 kubelet[2881]: E0120 15:12:18.633804 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:12:20.610453 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:55722.service - OpenSSH per-connection server daemon (10.0.0.1:55722). Jan 20 15:12:20.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.116:22-10.0.0.1:55722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:20.614977 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:12:20.615087 kernel: audit: type=1130 audit(1768921940.609:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.116:22-10.0.0.1:55722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:20.714544 kernel: audit: type=1101 audit(1768921940.700:762): pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.700000 audit[5114]: USER_ACCT pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.705285 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:12:20.715124 sshd[5114]: Accepted publickey for core from 10.0.0.1 port 55722 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:12:20.702000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.717001 systemd-logind[1631]: New session 10 of user core. 
Jan 20 15:12:20.736220 kernel: audit: type=1103 audit(1768921940.702:763): pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.736325 kernel: audit: type=1006 audit(1768921940.702:764): pid=5114 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 20 15:12:20.702000 audit[5114]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe96fa6920 a2=3 a3=0 items=0 ppid=1 pid=5114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:20.750911 kernel: audit: type=1300 audit(1768921940.702:764): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe96fa6920 a2=3 a3=0 items=0 ppid=1 pid=5114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:20.751181 kernel: audit: type=1327 audit(1768921940.702:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:20.702000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:20.761377 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 20 15:12:20.780000 audit[5114]: USER_START pid=5114 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.783000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.808716 kernel: audit: type=1105 audit(1768921940.780:765): pid=5114 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.808936 kernel: audit: type=1103 audit(1768921940.783:766): pid=5118 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.923956 sshd[5118]: Connection closed by 10.0.0.1 port 55722 Jan 20 15:12:20.925137 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Jan 20 15:12:20.927000 audit[5114]: USER_END pid=5114 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.933123 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:55722.service: Deactivated successfully. 
Jan 20 15:12:20.936713 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 15:12:20.939390 systemd-logind[1631]: Session 10 logged out. Waiting for processes to exit. Jan 20 15:12:20.944048 systemd-logind[1631]: Removed session 10. Jan 20 15:12:20.927000 audit[5114]: CRED_DISP pid=5114 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.978331 kernel: audit: type=1106 audit(1768921940.927:767): pid=5114 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.978433 kernel: audit: type=1104 audit(1768921940.927:768): pid=5114 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:20.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.116:22-10.0.0.1:55722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:22.636240 kubelet[2881]: E0120 15:12:22.636174 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:12:22.637546 kubelet[2881]: E0120 15:12:22.636726 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:12:24.637488 kubelet[2881]: E0120 15:12:24.637409 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:12:24.639235 kubelet[2881]: E0120 15:12:24.638318 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:12:25.937589 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:55492.service - OpenSSH per-connection server daemon (10.0.0.1:55492). Jan 20 15:12:25.944046 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:12:25.944121 kernel: audit: type=1130 audit(1768921945.936:770): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.116:22-10.0.0.1:55492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:25.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.116:22-10.0.0.1:55492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:26.033000 audit[5138]: USER_ACCT pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.035084 sshd[5138]: Accepted publickey for core from 10.0.0.1 port 55492 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:12:26.038014 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:12:26.048405 kernel: audit: type=1101 audit(1768921946.033:771): pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.048474 kernel: audit: type=1103 audit(1768921946.035:772): pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.035000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.054622 systemd-logind[1631]: New session 11 of user core. 
Jan 20 15:12:26.089912 kernel: audit: type=1006 audit(1768921946.035:773): pid=5138 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 15:12:26.090001 kernel: audit: type=1300 audit(1768921946.035:773): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdb3ab6c0 a2=3 a3=0 items=0 ppid=1 pid=5138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:26.035000 audit[5138]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffdb3ab6c0 a2=3 a3=0 items=0 ppid=1 pid=5138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:26.101206 kernel: audit: type=1327 audit(1768921946.035:773): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:26.035000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:26.107940 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 15:12:26.116000 audit[5138]: USER_START pid=5138 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.132942 kernel: audit: type=1105 audit(1768921946.116:774): pid=5138 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.123000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.152985 kernel: audit: type=1103 audit(1768921946.123:775): pid=5142 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.296292 sshd[5142]: Connection closed by 10.0.0.1 port 55492 Jan 20 15:12:26.298805 sshd-session[5138]: pam_unix(sshd:session): session closed for user core Jan 20 15:12:26.299000 audit[5138]: USER_END pid=5138 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.304701 systemd-logind[1631]: Session 11 logged out. Waiting for processes to exit. 
Jan 20 15:12:26.304716 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:55492.service: Deactivated successfully. Jan 20 15:12:26.308972 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 15:12:26.313306 systemd-logind[1631]: Removed session 11. Jan 20 15:12:26.299000 audit[5138]: CRED_DISP pid=5138 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.328618 kernel: audit: type=1106 audit(1768921946.299:776): pid=5138 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.328712 kernel: audit: type=1104 audit(1768921946.299:777): pid=5138 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:26.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.116:22-10.0.0.1:55492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:26.634370 kubelet[2881]: E0120 15:12:26.634228 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:12:31.313211 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:55496.service - OpenSSH per-connection server daemon (10.0.0.1:55496). Jan 20 15:12:31.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.116:22-10.0.0.1:55496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:31.316531 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:12:31.316607 kernel: audit: type=1130 audit(1768921951.312:779): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.116:22-10.0.0.1:55496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:31.411000 audit[5160]: USER_ACCT pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.415505 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:12:31.419477 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 55496 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:12:31.426547 systemd-logind[1631]: New session 12 of user core. Jan 20 15:12:31.411000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.442064 kernel: audit: type=1101 audit(1768921951.411:780): pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.442145 kernel: audit: type=1103 audit(1768921951.411:781): pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.449462 kernel: audit: type=1006 audit(1768921951.411:782): pid=5160 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 15:12:31.449530 kernel: audit: type=1300 audit(1768921951.411:782): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6af0a0a0 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:31.411000 audit[5160]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6af0a0a0 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:31.411000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:31.476400 kernel: audit: type=1327 audit(1768921951.411:782): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:31.478319 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 15:12:31.483000 audit[5160]: USER_START pid=5160 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.486000 audit[5164]: CRED_ACQ pid=5164 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.510810 kernel: audit: type=1105 audit(1768921951.483:783): pid=5160 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.511109 kernel: audit: type=1103 audit(1768921951.486:784): pid=5164 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.626222 sshd[5164]: Connection closed by 10.0.0.1 port 55496 Jan 20 15:12:31.626009 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Jan 20 15:12:31.626000 audit[5160]: USER_END pid=5160 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.642147 containerd[1658]: time="2026-01-20T15:12:31.636527901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 15:12:31.631721 systemd-logind[1631]: Session 12 logged out. Waiting for processes to exit. Jan 20 15:12:31.632546 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:55496.service: Deactivated successfully. Jan 20 15:12:31.643015 kernel: audit: type=1106 audit(1768921951.626:785): pid=5160 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.635518 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 15:12:31.641181 systemd-logind[1631]: Removed session 12. 
Jan 20 15:12:31.626000 audit[5160]: CRED_DISP pid=5160 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.116:22-10.0.0.1:55496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:31.661126 kernel: audit: type=1104 audit(1768921951.626:786): pid=5160 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:31.756554 containerd[1658]: time="2026-01-20T15:12:31.756476830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:12:31.765176 containerd[1658]: time="2026-01-20T15:12:31.765096322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 15:12:31.765176 containerd[1658]: time="2026-01-20T15:12:31.765204964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 15:12:31.765528 kubelet[2881]: E0120 15:12:31.765382 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:12:31.765528 kubelet[2881]: E0120 15:12:31.765427 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:12:31.766200 kubelet[2881]: E0120 15:12:31.765545 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2mr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lmlng_calico-system(5fa31347-4392-4f2f-a0ac-7346e7069fc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 15:12:31.766975 kubelet[2881]: E0120 15:12:31.766825 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:12:34.632472 kubelet[2881]: E0120 15:12:34.632344 2881 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:12:36.645769 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:51194.service - OpenSSH per-connection server daemon (10.0.0.1:51194). Jan 20 15:12:36.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.116:22-10.0.0.1:51194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:36.649950 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:12:36.650211 kernel: audit: type=1130 audit(1768921956.645:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.116:22-10.0.0.1:51194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:36.775000 audit[5227]: USER_ACCT pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:36.776458 sshd[5227]: Accepted publickey for core from 10.0.0.1 port 51194 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:12:36.786000 audit[5227]: CRED_ACQ pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:36.789052 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:12:36.798326 kernel: audit: type=1101 audit(1768921956.775:789): pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:36.798412 kernel: audit: type=1103 audit(1768921956.786:790): pid=5227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:36.798974 kernel: audit: type=1006 audit(1768921956.786:791): pid=5227 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 15:12:36.786000 audit[5227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4eed4620 a2=3 a3=0 items=0 ppid=1 pid=5227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:36.813019 systemd-logind[1631]: New session 13 of user core. Jan 20 15:12:36.816516 kernel: audit: type=1300 audit(1768921956.786:791): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4eed4620 a2=3 a3=0 items=0 ppid=1 pid=5227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:36.816564 kernel: audit: type=1327 audit(1768921956.786:791): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:36.786000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:36.832096 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 15:12:36.835000 audit[5227]: USER_START pid=5227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:36.837000 audit[5231]: CRED_ACQ pid=5231 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:36.875057 kernel: audit: type=1105 audit(1768921956.835:792): pid=5227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:36.875177 kernel: audit: type=1103 audit(1768921956.837:793): pid=5231 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:37.013647 sshd[5231]: Connection closed by 10.0.0.1 port 51194 Jan 20 15:12:37.015274 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Jan 20 15:12:37.016000 audit[5227]: USER_END pid=5227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:37.016000 audit[5227]: CRED_DISP pid=5227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:37.033399 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:51194.service: Deactivated successfully. Jan 20 15:12:37.041382 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 15:12:37.045602 kernel: audit: type=1106 audit(1768921957.016:794): pid=5227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:37.045677 kernel: audit: type=1104 audit(1768921957.016:795): pid=5227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:37.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.116:22-10.0.0.1:51194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:37.044973 systemd-logind[1631]: Session 13 logged out. Waiting for processes to exit. Jan 20 15:12:37.047558 systemd-logind[1631]: Removed session 13. 
Jan 20 15:12:37.636830 containerd[1658]: time="2026-01-20T15:12:37.636683505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 15:12:37.720169 containerd[1658]: time="2026-01-20T15:12:37.720050247Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:12:37.722200 containerd[1658]: time="2026-01-20T15:12:37.722084083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 15:12:37.722200 containerd[1658]: time="2026-01-20T15:12:37.722178109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 15:12:37.723022 kubelet[2881]: E0120 15:12:37.722726 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:12:37.723580 kubelet[2881]: E0120 15:12:37.723038 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:12:37.723745 containerd[1658]: time="2026-01-20T15:12:37.723445098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:12:37.724261 kubelet[2881]: E0120 15:12:37.724062 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 15:12:37.791152 containerd[1658]: time="2026-01-20T15:12:37.791100088Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:12:37.793701 containerd[1658]: time="2026-01-20T15:12:37.793495534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:12:37.793923 containerd[1658]: time="2026-01-20T15:12:37.793710455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:12:37.794297 kubelet[2881]: E0120 15:12:37.794184 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:12:37.795076 kubelet[2881]: E0120 15:12:37.794450 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:12:37.795739 kubelet[2881]: E0120 15:12:37.795506 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9ccp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-bggk4_calico-apiserver(5b2bf4f6-7ff7-4a4a-b602-713112aeec36): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:12:37.797199 kubelet[2881]: E0120 15:12:37.797048 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:12:37.797498 containerd[1658]: time="2026-01-20T15:12:37.797427225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 15:12:37.881149 containerd[1658]: time="2026-01-20T15:12:37.881002081Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:12:37.883215 containerd[1658]: time="2026-01-20T15:12:37.883145986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 15:12:37.883570 containerd[1658]: time="2026-01-20T15:12:37.883377458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 15:12:37.884280 kubelet[2881]: E0120 15:12:37.883980 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:12:37.884280 kubelet[2881]: E0120 15:12:37.884043 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:12:37.886275 kubelet[2881]: E0120 15:12:37.886161 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a8dee0b7dc7f4ac08b9f27fb940bf054,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 15:12:37.887067 containerd[1658]: time="2026-01-20T15:12:37.886600042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 15:12:37.978527 containerd[1658]: time="2026-01-20T15:12:37.978389887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:12:37.980956 containerd[1658]: time="2026-01-20T15:12:37.980626733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 15:12:37.981212 containerd[1658]: time="2026-01-20T15:12:37.980743978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 15:12:37.981337 kubelet[2881]: E0120 15:12:37.981196 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:12:37.981337 kubelet[2881]: E0120 15:12:37.981255 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:12:37.981908 kubelet[2881]: E0120 15:12:37.981658 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 15:12:37.982083 containerd[1658]: time="2026-01-20T15:12:37.981734091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 15:12:37.983108 kubelet[2881]: E0120 15:12:37.982968 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:12:38.062999 containerd[1658]: time="2026-01-20T15:12:38.062600921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:12:38.066046 containerd[1658]: time="2026-01-20T15:12:38.065992694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 15:12:38.074633 containerd[1658]: time="2026-01-20T15:12:38.066342144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 15:12:38.075592 kubelet[2881]: E0120 15:12:38.067765 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:12:38.075592 kubelet[2881]: E0120 15:12:38.068332 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:12:38.075592 kubelet[2881]: E0120 15:12:38.069532 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContex
t{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 15:12:38.079482 kubelet[2881]: E0120 15:12:38.078294 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:12:38.639071 containerd[1658]: time="2026-01-20T15:12:38.639009167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 15:12:38.722598 containerd[1658]: time="2026-01-20T15:12:38.722470823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:12:38.725548 containerd[1658]: 
time="2026-01-20T15:12:38.725422206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 15:12:38.725663 containerd[1658]: time="2026-01-20T15:12:38.725558571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 15:12:38.726101 kubelet[2881]: E0120 15:12:38.726004 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:12:38.726101 kubelet[2881]: E0120 15:12:38.726074 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:12:38.726579 kubelet[2881]: E0120 15:12:38.726245 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztvjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bf79bffbc-qt64q_calico-system(746e4480-dd89-4ee6-ba05-3e214024a83b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 15:12:38.727558 kubelet[2881]: E0120 15:12:38.727534 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:12:40.637378 containerd[1658]: time="2026-01-20T15:12:40.637071501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:12:40.705345 containerd[1658]: time="2026-01-20T15:12:40.705234112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
15:12:40.707098 containerd[1658]: time="2026-01-20T15:12:40.707032642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:12:40.707243 containerd[1658]: time="2026-01-20T15:12:40.707102602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:12:40.707549 kubelet[2881]: E0120 15:12:40.707477 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:12:40.708168 kubelet[2881]: E0120 15:12:40.707551 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:12:40.708168 kubelet[2881]: E0120 15:12:40.707961 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9hzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-l6nzr_calico-apiserver(26e4b1e4-d471-4e91-bbfc-9aa64bff08f3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:12:40.709417 kubelet[2881]: E0120 15:12:40.709334 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:12:42.032687 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:51206.service - OpenSSH per-connection server daemon (10.0.0.1:51206). Jan 20 15:12:42.039976 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:12:42.040042 kernel: audit: type=1130 audit(1768921962.032:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.116:22-10.0.0.1:51206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:42.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.116:22-10.0.0.1:51206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:42.118000 audit[5245]: USER_ACCT pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.119443 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 51206 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:12:42.123065 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:12:42.120000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.133305 systemd-logind[1631]: New session 14 of user core. Jan 20 15:12:42.145149 kernel: audit: type=1101 audit(1768921962.118:798): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.145230 kernel: audit: type=1103 audit(1768921962.120:799): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.145396 kernel: audit: type=1006 audit(1768921962.120:800): pid=5245 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 20 15:12:42.120000 audit[5245]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd061d8660 a2=3 a3=0 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:42.170943 kernel: audit: type=1300 audit(1768921962.120:800): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd061d8660 a2=3 a3=0 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:42.171121 kernel: audit: type=1327 audit(1768921962.120:800): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:42.120000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:42.178980 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 15:12:42.185000 audit[5245]: USER_START pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.188000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.219938 kernel: audit: type=1105 audit(1768921962.185:801): pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.220072 kernel: audit: type=1103 audit(1768921962.188:802): pid=5249 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.350049 sshd[5249]: Connection closed by 10.0.0.1 port 51206 Jan 20 15:12:42.351132 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Jan 20 15:12:42.352000 audit[5245]: USER_END pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.376214 systemd-logind[1631]: Session 14 logged out. Waiting for processes to exit. Jan 20 15:12:42.376501 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:51206.service: Deactivated successfully. Jan 20 15:12:42.379586 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 15:12:42.381947 kernel: audit: type=1106 audit(1768921962.352:803): pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.352000 audit[5245]: CRED_DISP pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.386153 systemd-logind[1631]: Removed session 14. 
Jan 20 15:12:42.394040 kernel: audit: type=1104 audit(1768921962.352:804): pid=5245 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:42.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.116:22-10.0.0.1:51206 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:44.637496 kubelet[2881]: E0120 15:12:44.635577 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:12:45.633515 kubelet[2881]: E0120 15:12:45.633399 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:12:47.386218 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:35942.service - OpenSSH per-connection server daemon (10.0.0.1:35942). Jan 20 15:12:47.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.116:22-10.0.0.1:35942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:47.389890 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:12:47.389961 kernel: audit: type=1130 audit(1768921967.385:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.116:22-10.0.0.1:35942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:47.497639 sshd[5263]: Accepted publickey for core from 10.0.0.1 port 35942 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:12:47.496000 audit[5263]: USER_ACCT pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.502566 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:12:47.510406 systemd-logind[1631]: New session 15 of user core. 
Jan 20 15:12:47.511952 kernel: audit: type=1101 audit(1768921967.496:807): pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.499000 audit[5263]: CRED_ACQ pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.526911 kernel: audit: type=1103 audit(1768921967.499:808): pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.537975 kernel: audit: type=1006 audit(1768921967.499:809): pid=5263 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 20 15:12:47.538258 kernel: audit: type=1300 audit(1768921967.499:809): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1156ef20 a2=3 a3=0 items=0 ppid=1 pid=5263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:47.499000 audit[5263]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1156ef20 a2=3 a3=0 items=0 ppid=1 pid=5263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:47.538316 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 15:12:47.499000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:47.577735 kernel: audit: type=1327 audit(1768921967.499:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:47.548000 audit[5263]: USER_START pid=5263 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.551000 audit[5267]: CRED_ACQ pid=5267 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.616483 kernel: audit: type=1105 audit(1768921967.548:810): pid=5263 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.616656 kernel: audit: type=1103 audit(1768921967.551:811): pid=5267 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.803280 sshd[5267]: Connection closed by 10.0.0.1 port 35942 Jan 20 15:12:47.803596 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Jan 20 15:12:47.808000 audit[5263]: USER_END pid=5263 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.814998 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:35942.service: Deactivated successfully. Jan 20 15:12:47.819676 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 15:12:47.827651 systemd-logind[1631]: Session 15 logged out. Waiting for processes to exit. Jan 20 15:12:47.808000 audit[5263]: CRED_DISP pid=5263 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.830096 systemd-logind[1631]: Removed session 15. Jan 20 15:12:47.844270 kernel: audit: type=1106 audit(1768921967.808:812): pid=5263 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.844394 kernel: audit: type=1104 audit(1768921967.808:813): pid=5263 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:47.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.116:22-10.0.0.1:35942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:48.640166 kubelet[2881]: E0120 15:12:48.640009 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:12:49.635515 kubelet[2881]: E0120 15:12:49.635387 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:12:49.636997 kubelet[2881]: E0120 15:12:49.636782 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:12:50.634997 kubelet[2881]: E0120 15:12:50.634670 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:12:51.633796 kubelet[2881]: E0120 15:12:51.633709 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:12:52.826343 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:40524.service - OpenSSH per-connection server daemon (10.0.0.1:40524). 
Jan 20 15:12:52.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.116:22-10.0.0.1:40524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:52.830355 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:12:52.830450 kernel: audit: type=1130 audit(1768921972.826:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.116:22-10.0.0.1:40524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:52.949000 audit[5282]: USER_ACCT pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:52.952150 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 40524 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:12:52.954195 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:12:52.951000 audit[5282]: CRED_ACQ pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:52.971370 systemd-logind[1631]: New session 16 of user core. 
Jan 20 15:12:52.984684 kernel: audit: type=1101 audit(1768921972.949:816): pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:52.984771 kernel: audit: type=1103 audit(1768921972.951:817): pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:52.984916 kernel: audit: type=1006 audit(1768921972.951:818): pid=5282 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 15:12:52.951000 audit[5282]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3f6ecf30 a2=3 a3=0 items=0 ppid=1 pid=5282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:53.008909 kernel: audit: type=1300 audit(1768921972.951:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff3f6ecf30 a2=3 a3=0 items=0 ppid=1 pid=5282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:53.009033 kernel: audit: type=1327 audit(1768921972.951:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:52.951000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:53.013311 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 15:12:53.021000 audit[5282]: USER_START pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:53.039916 kernel: audit: type=1105 audit(1768921973.021:819): pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:53.024000 audit[5286]: CRED_ACQ pid=5286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:53.061684 kernel: audit: type=1103 audit(1768921973.024:820): pid=5286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:53.152135 sshd[5286]: Connection closed by 10.0.0.1 port 40524 Jan 20 15:12:53.153318 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Jan 20 15:12:53.159000 audit[5282]: USER_END pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:53.165624 systemd-logind[1631]: Session 16 logged out. Waiting for processes to exit. 
Jan 20 15:12:53.167043 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:40524.service: Deactivated successfully. Jan 20 15:12:53.170997 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 15:12:53.176118 systemd-logind[1631]: Removed session 16. Jan 20 15:12:53.160000 audit[5282]: CRED_DISP pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:53.187575 kernel: audit: type=1106 audit(1768921973.159:821): pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:53.187736 kernel: audit: type=1104 audit(1768921973.160:822): pid=5282 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:53.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.116:22-10.0.0.1:40524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:12:56.634063 kubelet[2881]: E0120 15:12:56.632985 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:12:57.632299 kubelet[2881]: E0120 15:12:57.632194 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:12:58.174257 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:40530.service - OpenSSH per-connection server daemon (10.0.0.1:40530). Jan 20 15:12:58.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.116:22-10.0.0.1:40530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:58.177890 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:12:58.177942 kernel: audit: type=1130 audit(1768921978.173:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.116:22-10.0.0.1:40530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:12:58.259000 audit[5301]: USER_ACCT pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.264366 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:12:58.270514 sshd[5301]: Accepted publickey for core from 10.0.0.1 port 40530 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:12:58.277019 systemd-logind[1631]: New session 17 of user core. 
Jan 20 15:12:58.261000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.290502 kernel: audit: type=1101 audit(1768921978.259:825): pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.290592 kernel: audit: type=1103 audit(1768921978.261:826): pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.298977 kernel: audit: type=1006 audit(1768921978.261:827): pid=5301 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 15:12:58.261000 audit[5301]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1426ae70 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:58.312943 kernel: audit: type=1300 audit(1768921978.261:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1426ae70 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:12:58.261000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:58.317934 kernel: audit: type=1327 audit(1768921978.261:827): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:12:58.321233 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 15:12:58.324000 audit[5301]: USER_START pid=5301 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.327000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.368808 kernel: audit: type=1105 audit(1768921978.324:828): pid=5301 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.369732 kernel: audit: type=1103 audit(1768921978.327:829): pid=5305 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.474267 sshd[5305]: Connection closed by 10.0.0.1 port 40530 Jan 20 15:12:58.475120 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Jan 20 15:12:58.476000 audit[5301]: USER_END pid=5301 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 20 15:12:58.482722 systemd-logind[1631]: Session 17 logged out. Waiting for processes to exit. Jan 20 15:12:58.483087 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:40530.service: Deactivated successfully. Jan 20 15:12:58.486995 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 15:12:58.489905 systemd-logind[1631]: Removed session 17. Jan 20 15:12:58.477000 audit[5301]: CRED_DISP pid=5301 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.491977 kernel: audit: type=1106 audit(1768921978.476:830): pid=5301 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.492017 kernel: audit: type=1104 audit(1768921978.477:831): pid=5301 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:12:58.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.116:22-10.0.0.1:40530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:00.637738 kubelet[2881]: E0120 15:13:00.637689 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:13:00.638488 kubelet[2881]: E0120 15:13:00.638369 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:13:01.633499 kubelet[2881]: E0120 15:13:01.633079 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:13:01.633499 kubelet[2881]: E0120 15:13:01.633160 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:13:03.498788 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:03.498979 kernel: audit: type=1130 audit(1768921983.494:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.116:22-10.0.0.1:35708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:03.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.116:22-10.0.0.1:35708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:03.495391 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:35708.service - OpenSSH per-connection server daemon (10.0.0.1:35708). 
Jan 20 15:13:03.582000 audit[5347]: USER_ACCT pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.583706 sshd[5347]: Accepted publickey for core from 10.0.0.1 port 35708 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:03.587387 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:03.584000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.605702 systemd-logind[1631]: New session 18 of user core. Jan 20 15:13:03.611393 kernel: audit: type=1101 audit(1768921983.582:834): pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.611487 kernel: audit: type=1103 audit(1768921983.584:835): pid=5347 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.584000 audit[5347]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3397a750 a2=3 a3=0 items=0 ppid=1 pid=5347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:03.621225 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 15:13:03.635745 kernel: audit: type=1006 audit(1768921983.584:836): pid=5347 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 20 15:13:03.635952 kernel: audit: type=1300 audit(1768921983.584:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3397a750 a2=3 a3=0 items=0 ppid=1 pid=5347 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:03.584000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:03.642919 kernel: audit: type=1327 audit(1768921983.584:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:03.625000 audit[5347]: USER_START pid=5347 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.645096 kubelet[2881]: E0120 15:13:03.643351 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:13:03.628000 audit[5351]: CRED_ACQ pid=5351 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.680536 kernel: audit: type=1105 audit(1768921983.625:837): pid=5347 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.680709 kernel: audit: type=1103 audit(1768921983.628:838): pid=5351 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.792041 sshd[5351]: Connection closed by 10.0.0.1 port 35708 Jan 20 15:13:03.791802 sshd-session[5347]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:03.793000 audit[5347]: USER_END pid=5347 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.799757 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:35708.service: Deactivated successfully. Jan 20 15:13:03.800238 systemd-logind[1631]: Session 18 logged out. Waiting for processes to exit. Jan 20 15:13:03.804748 systemd[1]: session-18.scope: Deactivated successfully. 
Jan 20 15:13:03.808937 kernel: audit: type=1106 audit(1768921983.793:839): pid=5347 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.813108 systemd-logind[1631]: Removed session 18. Jan 20 15:13:03.793000 audit[5347]: CRED_DISP pid=5347 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:03.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.116:22-10.0.0.1:35708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:03.825947 kernel: audit: type=1104 audit(1768921983.793:840): pid=5347 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:04.633740 kubelet[2881]: E0120 15:13:04.633638 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:13:08.810489 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:35724.service - OpenSSH per-connection server daemon 
(10.0.0.1:35724). Jan 20 15:13:08.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.116:22-10.0.0.1:35724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:08.813586 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:08.813706 kernel: audit: type=1130 audit(1768921988.809:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.116:22-10.0.0.1:35724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:08.894000 audit[5365]: USER_ACCT pid=5365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:08.895766 sshd[5365]: Accepted publickey for core from 10.0.0.1 port 35724 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:08.900438 sshd-session[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:08.897000 audit[5365]: CRED_ACQ pid=5365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:08.909599 systemd-logind[1631]: New session 19 of user core. 
Jan 20 15:13:08.918264 kernel: audit: type=1101 audit(1768921988.894:843): pid=5365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:08.918956 kernel: audit: type=1103 audit(1768921988.897:844): pid=5365 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:08.919011 kernel: audit: type=1006 audit(1768921988.897:845): pid=5365 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 20 15:13:08.924902 kernel: audit: type=1300 audit(1768921988.897:845): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb3817e00 a2=3 a3=0 items=0 ppid=1 pid=5365 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:08.897000 audit[5365]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb3817e00 a2=3 a3=0 items=0 ppid=1 pid=5365 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:08.897000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:08.937927 kernel: audit: type=1327 audit(1768921988.897:845): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:08.939327 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 15:13:08.942000 audit[5365]: USER_START pid=5365 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:08.965952 kernel: audit: type=1105 audit(1768921988.942:846): pid=5365 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:08.966374 kernel: audit: type=1103 audit(1768921988.944:847): pid=5369 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:08.944000 audit[5369]: CRED_ACQ pid=5369 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.239059 sshd[5369]: Connection closed by 10.0.0.1 port 35724 Jan 20 15:13:09.240298 sshd-session[5365]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:09.241000 audit[5365]: USER_END pid=5365 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.242000 audit[5365]: CRED_DISP pid=5365 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.270152 kernel: audit: type=1106 audit(1768921989.241:848): pid=5365 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.270306 kernel: audit: type=1104 audit(1768921989.242:849): pid=5365 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.275681 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:35724.service: Deactivated successfully. Jan 20 15:13:09.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.116:22-10.0.0.1:35724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:09.281324 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 15:13:09.283297 systemd-logind[1631]: Session 19 logged out. Waiting for processes to exit. Jan 20 15:13:09.289723 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:35740.service - OpenSSH per-connection server daemon (10.0.0.1:35740). Jan 20 15:13:09.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.116:22-10.0.0.1:35740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:09.291797 systemd-logind[1631]: Removed session 19. 
Jan 20 15:13:09.356000 audit[5383]: USER_ACCT pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.359318 sshd[5383]: Accepted publickey for core from 10.0.0.1 port 35740 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:09.358000 audit[5383]: CRED_ACQ pid=5383 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.358000 audit[5383]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc30864ee0 a2=3 a3=0 items=0 ppid=1 pid=5383 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:09.358000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:09.361087 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:09.368414 systemd-logind[1631]: New session 20 of user core. Jan 20 15:13:09.373128 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 20 15:13:09.382000 audit[5383]: USER_START pid=5383 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.385000 audit[5387]: CRED_ACQ pid=5387 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.565216 sshd[5387]: Connection closed by 10.0.0.1 port 35740 Jan 20 15:13:09.565278 sshd-session[5383]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:09.568000 audit[5383]: USER_END pid=5383 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.569000 audit[5383]: CRED_DISP pid=5383 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.584746 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:35740.service: Deactivated successfully. Jan 20 15:13:09.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.116:22-10.0.0.1:35740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:09.591155 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 15:13:09.592801 systemd-logind[1631]: Session 20 logged out. Waiting for processes to exit. 
Jan 20 15:13:09.597699 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:35746.service - OpenSSH per-connection server daemon (10.0.0.1:35746). Jan 20 15:13:09.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.116:22-10.0.0.1:35746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:09.599683 systemd-logind[1631]: Removed session 20. Jan 20 15:13:09.694000 audit[5399]: USER_ACCT pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.698218 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 35746 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:09.697000 audit[5399]: CRED_ACQ pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.697000 audit[5399]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd07f27ad0 a2=3 a3=0 items=0 ppid=1 pid=5399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:09.697000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:09.700475 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:09.711482 systemd-logind[1631]: New session 21 of user core. Jan 20 15:13:09.729256 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 15:13:09.733000 audit[5399]: USER_START pid=5399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.735000 audit[5403]: CRED_ACQ pid=5403 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.857980 sshd[5403]: Connection closed by 10.0.0.1 port 35746 Jan 20 15:13:09.858667 sshd-session[5399]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:09.860000 audit[5399]: USER_END pid=5399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.861000 audit[5399]: CRED_DISP pid=5399 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:09.866210 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:35746.service: Deactivated successfully. Jan 20 15:13:09.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.116:22-10.0.0.1:35746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:09.869654 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 15:13:09.873126 systemd-logind[1631]: Session 21 logged out. Waiting for processes to exit. 
Jan 20 15:13:09.875451 systemd-logind[1631]: Removed session 21. Jan 20 15:13:12.634265 kubelet[2881]: E0120 15:13:12.634090 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:13:12.636630 kubelet[2881]: E0120 15:13:12.636305 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:13:13.633983 kubelet[2881]: E0120 15:13:13.633792 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:13:14.634699 kubelet[2881]: E0120 15:13:14.632419 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:13:14.885483 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:42212.service - OpenSSH per-connection server daemon (10.0.0.1:42212). Jan 20 15:13:14.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.116:22-10.0.0.1:42212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:14.893531 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 15:13:14.893607 kernel: audit: type=1130 audit(1768921994.884:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.116:22-10.0.0.1:42212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:14.986774 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 42212 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:14.983000 audit[5416]: USER_ACCT pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:14.992026 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:15.013590 kernel: audit: type=1101 audit(1768921994.983:870): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.013712 kernel: audit: type=1103 audit(1768921994.986:871): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:14.986000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.018725 systemd-logind[1631]: New session 22 of user core. 
Jan 20 15:13:14.986000 audit[5416]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedb310720 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:15.034149 kernel: audit: type=1006 audit(1768921994.986:872): pid=5416 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 15:13:15.034311 kernel: audit: type=1300 audit(1768921994.986:872): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedb310720 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:15.034343 kernel: audit: type=1327 audit(1768921994.986:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:14.986000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:15.044329 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 15:13:15.049000 audit[5416]: USER_START pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.081884 kernel: audit: type=1105 audit(1768921995.049:873): pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.054000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.093958 kernel: audit: type=1103 audit(1768921995.054:874): pid=5420 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.210127 sshd[5420]: Connection closed by 10.0.0.1 port 42212 Jan 20 15:13:15.211189 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:15.219000 audit[5416]: USER_END pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.224992 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:42212.service: Deactivated successfully. 
Jan 20 15:13:15.228514 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 15:13:15.230722 systemd-logind[1631]: Session 22 logged out. Waiting for processes to exit. Jan 20 15:13:15.233444 systemd-logind[1631]: Removed session 22. Jan 20 15:13:15.219000 audit[5416]: CRED_DISP pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.244918 kernel: audit: type=1106 audit(1768921995.219:875): pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.244991 kernel: audit: type=1104 audit(1768921995.219:876): pid=5416 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:15.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.116:22-10.0.0.1:42212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:15.634117 kubelet[2881]: E0120 15:13:15.634009 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:13:15.635433 kubelet[2881]: E0120 15:13:15.635339 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:13:19.632313 kubelet[2881]: E0120 15:13:19.632206 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:13:19.633561 kubelet[2881]: E0120 15:13:19.633214 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:13:20.231053 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:42218.service - OpenSSH per-connection server daemon (10.0.0.1:42218). Jan 20 15:13:20.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.116:22-10.0.0.1:42218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:20.236897 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:20.236996 kernel: audit: type=1130 audit(1768922000.232:878): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.116:22-10.0.0.1:42218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:20.397169 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 42218 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:20.395000 audit[5434]: USER_ACCT pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.401401 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:20.408932 systemd-logind[1631]: New session 23 of user core. 
Jan 20 15:13:20.398000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.420716 kernel: audit: type=1101 audit(1768922000.395:879): pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.420900 kernel: audit: type=1103 audit(1768922000.398:880): pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.420966 kernel: audit: type=1006 audit(1768922000.399:881): pid=5434 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 15:13:20.399000 audit[5434]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5359b930 a2=3 a3=0 items=0 ppid=1 pid=5434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:20.439592 kernel: audit: type=1300 audit(1768922000.399:881): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5359b930 a2=3 a3=0 items=0 ppid=1 pid=5434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:20.399000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:20.444537 kernel: audit: type=1327 audit(1768922000.399:881): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:20.445334 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 15:13:20.449000 audit[5434]: USER_START pid=5434 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.452000 audit[5438]: CRED_ACQ pid=5438 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.490379 kernel: audit: type=1105 audit(1768922000.449:882): pid=5434 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.490509 kernel: audit: type=1103 audit(1768922000.452:883): pid=5438 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.623504 sshd[5438]: Connection closed by 10.0.0.1 port 42218 Jan 20 15:13:20.624181 sshd-session[5434]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:20.625000 audit[5434]: USER_END pid=5434 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 20 15:13:20.631237 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:42218.service: Deactivated successfully. Jan 20 15:13:20.638136 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 15:13:20.640921 kernel: audit: type=1106 audit(1768922000.625:884): pid=5434 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.640994 kernel: audit: type=1104 audit(1768922000.625:885): pid=5434 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.625000 audit[5434]: CRED_DISP pid=5434 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:20.642738 systemd-logind[1631]: Session 23 logged out. Waiting for processes to exit. Jan 20 15:13:20.645037 systemd-logind[1631]: Removed session 23. Jan 20 15:13:20.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.116:22-10.0.0.1:42218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:23.631521 kubelet[2881]: E0120 15:13:23.631425 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:13:24.634476 kubelet[2881]: E0120 15:13:24.634272 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:13:25.638112 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:44400.service - OpenSSH per-connection server daemon (10.0.0.1:44400). Jan 20 15:13:25.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.116:22-10.0.0.1:44400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:25.642916 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:25.642993 kernel: audit: type=1130 audit(1768922005.637:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.116:22-10.0.0.1:44400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:25.717942 sshd[5451]: Accepted publickey for core from 10.0.0.1 port 44400 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:25.716000 audit[5451]: USER_ACCT pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.721761 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:25.719000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.732983 systemd-logind[1631]: New session 24 of user core. Jan 20 15:13:25.739833 kernel: audit: type=1101 audit(1768922005.716:888): pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.739973 kernel: audit: type=1103 audit(1768922005.719:889): pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.740011 kernel: audit: type=1006 audit(1768922005.719:890): pid=5451 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 15:13:25.719000 audit[5451]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa03608f0 a2=3 a3=0 items=0 ppid=1 pid=5451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:25.776659 kernel: audit: type=1300 audit(1768922005.719:890): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa03608f0 a2=3 a3=0 items=0 ppid=1 pid=5451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:25.776938 kernel: audit: type=1327 audit(1768922005.719:890): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:25.719000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:25.778286 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 15:13:25.783000 audit[5451]: USER_START pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.786000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.810563 kernel: audit: type=1105 audit(1768922005.783:891): pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.810705 kernel: audit: type=1103 audit(1768922005.786:892): pid=5455 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.910775 sshd[5455]: Connection closed by 10.0.0.1 port 44400 Jan 20 15:13:25.913106 sshd-session[5451]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:25.914000 audit[5451]: USER_END pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.920194 systemd-logind[1631]: Session 24 logged out. Waiting for processes to exit. Jan 20 15:13:25.921313 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:44400.service: Deactivated successfully. Jan 20 15:13:25.925517 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 15:13:25.928461 systemd-logind[1631]: Removed session 24. 
Jan 20 15:13:25.914000 audit[5451]: CRED_DISP pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.942727 kernel: audit: type=1106 audit(1768922005.914:893): pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.942915 kernel: audit: type=1104 audit(1768922005.914:894): pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:25.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.116:22-10.0.0.1:44400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:26.636209 kubelet[2881]: E0120 15:13:26.636039 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:13:27.633444 kubelet[2881]: E0120 15:13:27.633358 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:13:28.633706 kubelet[2881]: E0120 15:13:28.633573 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:13:29.634046 kubelet[2881]: E0120 15:13:29.633838 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:13:30.632924 kubelet[2881]: E0120 15:13:30.632265 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:13:30.935211 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:44404.service - OpenSSH per-connection server daemon (10.0.0.1:44404). Jan 20 15:13:30.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.116:22-10.0.0.1:44404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:30.938281 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:30.938343 kernel: audit: type=1130 audit(1768922010.934:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.116:22-10.0.0.1:44404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:31.018000 audit[5471]: USER_ACCT pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.021340 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 44404 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:31.025504 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:31.022000 audit[5471]: CRED_ACQ pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.034999 systemd-logind[1631]: New session 25 of user core. Jan 20 15:13:31.046793 kernel: audit: type=1101 audit(1768922011.018:897): pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.046937 kernel: audit: type=1103 audit(1768922011.022:898): pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.046962 kernel: audit: type=1006 audit(1768922011.023:899): pid=5471 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 20 15:13:31.023000 audit[5471]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe92afa050 a2=3 a3=0 items=0 ppid=1 pid=5471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:31.073229 kernel: audit: type=1300 audit(1768922011.023:899): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe92afa050 a2=3 a3=0 items=0 ppid=1 pid=5471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:31.073304 kernel: audit: type=1327 audit(1768922011.023:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:31.023000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:31.080297 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 15:13:31.084000 audit[5471]: USER_START pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.087000 audit[5475]: CRED_ACQ pid=5475 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.113951 kernel: audit: type=1105 audit(1768922011.084:900): pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.114042 kernel: audit: type=1103 audit(1768922011.087:901): pid=5475 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.228147 sshd[5475]: Connection closed by 10.0.0.1 port 44404 Jan 20 15:13:31.228514 sshd-session[5471]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:31.230000 audit[5471]: USER_END pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.244929 kernel: audit: type=1106 audit(1768922011.230:902): pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.245035 kernel: audit: type=1104 audit(1768922011.238:903): pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.238000 audit[5471]: CRED_DISP pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:31.249249 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:44404.service: Deactivated successfully. Jan 20 15:13:31.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.116:22-10.0.0.1:44404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:31.256318 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 15:13:31.266279 systemd-logind[1631]: Session 25 logged out. Waiting for processes to exit. Jan 20 15:13:31.269126 systemd-logind[1631]: Removed session 25. Jan 20 15:13:31.634480 kubelet[2881]: E0120 15:13:31.634408 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:13:36.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.116:22-10.0.0.1:44016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:36.245491 systemd[1]: Started sshd@24-10.0.0.116:22-10.0.0.1:44016.service - OpenSSH per-connection server daemon (10.0.0.1:44016). Jan 20 15:13:36.248190 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:36.248297 kernel: audit: type=1130 audit(1768922016.244:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.116:22-10.0.0.1:44016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:36.329000 audit[5519]: USER_ACCT pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.330583 sshd[5519]: Accepted publickey for core from 10.0.0.1 port 44016 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:36.333595 sshd-session[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:36.339435 systemd-logind[1631]: New session 26 of user core. Jan 20 15:13:36.340977 kernel: audit: type=1101 audit(1768922016.329:906): pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.331000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.361574 kernel: audit: type=1103 audit(1768922016.331:907): pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.361752 kernel: audit: type=1006 audit(1768922016.331:908): pid=5519 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 20 15:13:36.361835 kernel: audit: type=1300 audit(1768922016.331:908): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0a6136d0 a2=3 a3=0 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:36.331000 audit[5519]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0a6136d0 a2=3 a3=0 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:36.362159 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 15:13:36.331000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:36.373000 audit[5519]: USER_START pid=5519 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.392928 kernel: audit: type=1327 audit(1768922016.331:908): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:36.393158 kernel: audit: type=1105 audit(1768922016.373:909): pid=5519 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.377000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.404734 kernel: audit: type=1103 audit(1768922016.377:910): pid=5523 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.546289 sshd[5523]: Connection closed by 10.0.0.1 port 44016 Jan 20 15:13:36.546938 sshd-session[5519]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:36.548000 audit[5519]: USER_END pid=5519 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.552701 systemd[1]: sshd@24-10.0.0.116:22-10.0.0.1:44016.service: Deactivated successfully. Jan 20 15:13:36.570648 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 15:13:36.574128 systemd-logind[1631]: Session 26 logged out. Waiting for processes to exit. Jan 20 15:13:36.548000 audit[5519]: CRED_DISP pid=5519 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.579175 systemd-logind[1631]: Removed session 26. 
Jan 20 15:13:36.588894 kernel: audit: type=1106 audit(1768922016.548:911): pid=5519 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.588989 kernel: audit: type=1104 audit(1768922016.548:912): pid=5519 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:36.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.116:22-10.0.0.1:44016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:36.636824 kubelet[2881]: E0120 15:13:36.636675 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:13:38.634532 kubelet[2881]: E0120 15:13:38.633944 2881 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:13:40.634032 kubelet[2881]: E0120 15:13:40.633941 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:13:41.575626 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:41.575766 kernel: audit: type=1130 audit(1768922021.564:914): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.116:22-10.0.0.1:44018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:41.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.116:22-10.0.0.1:44018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:41.565452 systemd[1]: Started sshd@25-10.0.0.116:22-10.0.0.1:44018.service - OpenSSH per-connection server daemon (10.0.0.1:44018). Jan 20 15:13:41.633386 kubelet[2881]: E0120 15:13:41.633302 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:13:41.656000 audit[5537]: USER_ACCT pid=5537 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.662141 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 44018 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:41.666189 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:41.663000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.674350 systemd-logind[1631]: New session 27 of user core. 
Jan 20 15:13:41.676085 kernel: audit: type=1101 audit(1768922021.656:915): pid=5537 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.676124 kernel: audit: type=1103 audit(1768922021.663:916): pid=5537 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.676146 kernel: audit: type=1006 audit(1768922021.663:917): pid=5537 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 20 15:13:41.681587 kernel: audit: type=1300 audit(1768922021.663:917): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1b8c4b70 a2=3 a3=0 items=0 ppid=1 pid=5537 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:41.663000 audit[5537]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1b8c4b70 a2=3 a3=0 items=0 ppid=1 pid=5537 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:41.694768 kernel: audit: type=1327 audit(1768922021.663:917): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:41.663000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:41.698330 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 20 15:13:41.702000 audit[5537]: USER_START pid=5537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.705000 audit[5541]: CRED_ACQ pid=5541 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.725172 kernel: audit: type=1105 audit(1768922021.702:918): pid=5537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.725294 kernel: audit: type=1103 audit(1768922021.705:919): pid=5541 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.822345 sshd[5541]: Connection closed by 10.0.0.1 port 44018 Jan 20 15:13:41.823724 sshd-session[5537]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:41.825000 audit[5537]: USER_END pid=5537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.830398 systemd[1]: sshd@25-10.0.0.116:22-10.0.0.1:44018.service: Deactivated successfully. 
Jan 20 15:13:41.835136 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 15:13:41.839460 systemd-logind[1631]: Session 27 logged out. Waiting for processes to exit. Jan 20 15:13:41.841334 systemd-logind[1631]: Removed session 27. Jan 20 15:13:41.825000 audit[5537]: CRED_DISP pid=5537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.850699 kernel: audit: type=1106 audit(1768922021.825:920): pid=5537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.850774 kernel: audit: type=1104 audit(1768922021.825:921): pid=5537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:41.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.116:22-10.0.0.1:44018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:13:42.639878 kubelet[2881]: E0120 15:13:42.638780 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:13:45.633522 kubelet[2881]: E0120 15:13:45.633028 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:13:46.850292 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:46.850471 kernel: audit: type=1130 audit(1768922026.844:923): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.116:22-10.0.0.1:48158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:46.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.116:22-10.0.0.1:48158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:46.845272 systemd[1]: Started sshd@26-10.0.0.116:22-10.0.0.1:48158.service - OpenSSH per-connection server daemon (10.0.0.1:48158). 
Jan 20 15:13:46.973000 audit[5561]: USER_ACCT pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:46.975173 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 48158 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:46.979374 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:46.976000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:46.988332 systemd-logind[1631]: New session 28 of user core. Jan 20 15:13:46.997961 kernel: audit: type=1101 audit(1768922026.973:924): pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:46.998055 kernel: audit: type=1103 audit(1768922026.976:925): pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:46.998091 kernel: audit: type=1006 audit(1768922026.976:926): pid=5561 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 20 15:13:47.005353 kernel: audit: type=1300 audit(1768922026.976:926): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedeb94ca0 a2=3 a3=0 items=0 ppid=1 pid=5561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:46.976000 audit[5561]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffedeb94ca0 a2=3 a3=0 items=0 ppid=1 pid=5561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:47.017207 kernel: audit: type=1327 audit(1768922026.976:926): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:46.976000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:47.023470 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 15:13:47.031000 audit[5561]: USER_START pid=5561 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:47.046966 kernel: audit: type=1105 audit(1768922027.031:927): pid=5561 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:47.047000 audit[5565]: CRED_ACQ pid=5565 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:47.075385 kernel: audit: type=1103 audit(1768922027.047:928): pid=5565 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:47.232010 sshd[5565]: Connection closed by 10.0.0.1 port 48158 Jan 20 15:13:47.233223 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:47.235000 audit[5561]: USER_END pid=5561 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:47.241273 systemd[1]: sshd@26-10.0.0.116:22-10.0.0.1:48158.service: Deactivated successfully. Jan 20 15:13:47.241629 systemd-logind[1631]: Session 28 logged out. Waiting for processes to exit. Jan 20 15:13:47.249190 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 15:13:47.253902 kernel: audit: type=1106 audit(1768922027.235:929): pid=5561 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:47.260147 systemd-logind[1631]: Removed session 28. 
Jan 20 15:13:47.235000 audit[5561]: CRED_DISP pid=5561 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:47.280896 kernel: audit: type=1104 audit(1768922027.235:930): pid=5561 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:47.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.116:22-10.0.0.1:48158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:49.637750 kubelet[2881]: E0120 15:13:49.637126 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:13:51.632087 kubelet[2881]: E0120 15:13:51.631976 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 20 15:13:52.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.116:22-10.0.0.1:48164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:52.255130 systemd[1]: Started sshd@27-10.0.0.116:22-10.0.0.1:48164.service - OpenSSH per-connection server daemon (10.0.0.1:48164). Jan 20 15:13:52.264999 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:52.265065 kernel: audit: type=1130 audit(1768922032.252:932): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.116:22-10.0.0.1:48164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:52.419000 audit[5580]: USER_ACCT pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.423716 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:52.425586 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 48164 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:52.434710 systemd-logind[1631]: New session 29 of user core. 
Jan 20 15:13:52.421000 audit[5580]: CRED_ACQ pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.449378 kernel: audit: type=1101 audit(1768922032.419:933): pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.449443 kernel: audit: type=1103 audit(1768922032.421:934): pid=5580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.449504 kernel: audit: type=1006 audit(1768922032.421:935): pid=5580 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 20 15:13:52.421000 audit[5580]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbac685e0 a2=3 a3=0 items=0 ppid=1 pid=5580 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:52.490735 kernel: audit: type=1300 audit(1768922032.421:935): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbac685e0 a2=3 a3=0 items=0 ppid=1 pid=5580 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:52.492256 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 20 15:13:52.421000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:52.497928 kernel: audit: type=1327 audit(1768922032.421:935): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:52.498000 audit[5580]: USER_START pid=5580 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.516912 kernel: audit: type=1105 audit(1768922032.498:936): pid=5580 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.502000 audit[5584]: CRED_ACQ pid=5584 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.537940 kernel: audit: type=1103 audit(1768922032.502:937): pid=5584 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.635551 kubelet[2881]: E0120 15:13:52.635364 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:13:52.650056 sshd[5584]: Connection closed by 10.0.0.1 port 48164 Jan 20 15:13:52.650473 sshd-session[5580]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:52.651000 audit[5580]: USER_END pid=5580 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.651000 audit[5580]: CRED_DISP pid=5580 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.694245 systemd-logind[1631]: Session 29 logged out. Waiting for processes to exit. Jan 20 15:13:52.696714 systemd[1]: sshd@27-10.0.0.116:22-10.0.0.1:48164.service: Deactivated successfully. 
Jan 20 15:13:52.700662 kernel: audit: type=1106 audit(1768922032.651:938): pid=5580 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.700722 kernel: audit: type=1104 audit(1768922032.651:939): pid=5580 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:52.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.116:22-10.0.0.1:48164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:52.701026 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 15:13:52.704432 systemd-logind[1631]: Removed session 29. 
Jan 20 15:13:54.638989 containerd[1658]: time="2026-01-20T15:13:54.637377404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 15:13:54.736818 containerd[1658]: time="2026-01-20T15:13:54.736648844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:13:54.739124 containerd[1658]: time="2026-01-20T15:13:54.738933981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 15:13:54.739124 containerd[1658]: time="2026-01-20T15:13:54.739060167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 15:13:54.739342 kubelet[2881]: E0120 15:13:54.739265 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:13:54.739342 kubelet[2881]: E0120 15:13:54.739323 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 15:13:54.741531 kubelet[2881]: E0120 15:13:54.739491 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2mr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lmlng_calico-system(5fa31347-4392-4f2f-a0ac-7346e7069fc9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 15:13:54.741531 kubelet[2881]: E0120 15:13:54.741132 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:13:55.634204 kubelet[2881]: E0120 15:13:55.633992 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:13:55.635832 kubelet[2881]: E0120 15:13:55.635716 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:13:56.634918 kubelet[2881]: E0120 15:13:56.633427 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:13:57.635660 kubelet[2881]: E0120 15:13:57.635193 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" 
podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:13:57.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.116:22-10.0.0.1:57376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:57.679555 systemd[1]: Started sshd@28-10.0.0.116:22-10.0.0.1:57376.service - OpenSSH per-connection server daemon (10.0.0.1:57376). Jan 20 15:13:57.681821 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:13:57.681920 kernel: audit: type=1130 audit(1768922037.678:941): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.116:22-10.0.0.1:57376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:13:57.787000 audit[5598]: USER_ACCT pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:57.791598 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:13:57.799356 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 57376 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:13:57.788000 audit[5598]: CRED_ACQ pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:57.802957 systemd-logind[1631]: New session 30 of user core. 
Jan 20 15:13:57.814155 kernel: audit: type=1101 audit(1768922037.787:942): pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:57.814252 kernel: audit: type=1103 audit(1768922037.788:943): pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:57.814271 kernel: audit: type=1006 audit(1768922037.789:944): pid=5598 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 20 15:13:57.820340 kernel: audit: type=1300 audit(1768922037.789:944): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc99c51790 a2=3 a3=0 items=0 ppid=1 pid=5598 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:57.789000 audit[5598]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc99c51790 a2=3 a3=0 items=0 ppid=1 pid=5598 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:13:57.833271 kernel: audit: type=1327 audit(1768922037.789:944): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:57.789000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:13:57.838298 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 20 15:13:57.845000 audit[5598]: USER_START pid=5598 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:57.849000 audit[5603]: CRED_ACQ pid=5603 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:57.875618 kernel: audit: type=1105 audit(1768922037.845:945): pid=5598 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:57.875735 kernel: audit: type=1103 audit(1768922037.849:946): pid=5603 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:58.008000 audit[5598]: USER_END pid=5598 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:58.010148 sshd[5603]: Connection closed by 10.0.0.1 port 57376 Jan 20 15:13:58.007354 sshd-session[5598]: pam_unix(sshd:session): session closed for user core Jan 20 15:13:58.015561 systemd[1]: sshd@28-10.0.0.116:22-10.0.0.1:57376.service: Deactivated successfully. 
Jan 20 15:13:58.020112 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 15:13:58.008000 audit[5598]: CRED_DISP pid=5598 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:58.023581 systemd-logind[1631]: Session 30 logged out. Waiting for processes to exit. Jan 20 15:13:58.029568 systemd-logind[1631]: Removed session 30. Jan 20 15:13:58.034504 kernel: audit: type=1106 audit(1768922038.008:947): pid=5598 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:58.034670 kernel: audit: type=1104 audit(1768922038.008:948): pid=5598 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:13:58.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.116:22-10.0.0.1:57376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:14:02.638377 containerd[1658]: time="2026-01-20T15:14:02.638308478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 15:14:02.775373 containerd[1658]: time="2026-01-20T15:14:02.775120657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:14:02.776889 containerd[1658]: time="2026-01-20T15:14:02.776750295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 15:14:02.777018 containerd[1658]: time="2026-01-20T15:14:02.776899741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 15:14:02.777575 kubelet[2881]: E0120 15:14:02.777449 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:14:02.777980 kubelet[2881]: E0120 15:14:02.777574 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 15:14:02.777980 kubelet[2881]: E0120 15:14:02.777694 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 20 15:14:02.780603 containerd[1658]: time="2026-01-20T15:14:02.780536326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 15:14:02.844427 containerd[1658]: time="2026-01-20T15:14:02.844369600Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:14:02.846039 containerd[1658]: time="2026-01-20T15:14:02.845897189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 15:14:02.846039 containerd[1658]: time="2026-01-20T15:14:02.845934551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 15:14:02.846377 kubelet[2881]: E0120 15:14:02.846310 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:14:02.846449 kubelet[2881]: E0120 15:14:02.846386 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 15:14:02.846656 kubelet[2881]: E0120 15:14:02.846544 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hdjx4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jr4nz_calico-system(c4e14075-1569-42bc-b38f-776a269a4fcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 15:14:02.848298 kubelet[2881]: E0120 15:14:02.848212 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:14:03.021600 systemd[1]: Started sshd@29-10.0.0.116:22-10.0.0.1:60480.service - OpenSSH per-connection server daemon (10.0.0.1:60480). Jan 20 15:14:03.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.116:22-10.0.0.1:60480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:03.024835 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:14:03.024954 kernel: audit: type=1130 audit(1768922043.021:950): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.116:22-10.0.0.1:60480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:14:03.099000 audit[5644]: USER_ACCT pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.101078 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 60480 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:14:03.104261 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:14:03.118699 kernel: audit: type=1101 audit(1768922043.099:951): pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.118913 kernel: audit: type=1103 audit(1768922043.101:952): pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.101000 audit[5644]: CRED_ACQ pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.114968 systemd-logind[1631]: New session 31 of user core. 
Jan 20 15:14:03.134310 kernel: audit: type=1006 audit(1768922043.102:953): pid=5644 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 20 15:14:03.134429 kernel: audit: type=1300 audit(1768922043.102:953): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccd7501f0 a2=3 a3=0 items=0 ppid=1 pid=5644 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:03.102000 audit[5644]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccd7501f0 a2=3 a3=0 items=0 ppid=1 pid=5644 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:03.102000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:03.138170 kernel: audit: type=1327 audit(1768922043.102:953): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:03.139320 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 20 15:14:03.143000 audit[5644]: USER_START pid=5644 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.146000 audit[5648]: CRED_ACQ pid=5648 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.166212 kernel: audit: type=1105 audit(1768922043.143:954): pid=5644 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.166320 kernel: audit: type=1103 audit(1768922043.146:955): pid=5648 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.255533 sshd[5648]: Connection closed by 10.0.0.1 port 60480 Jan 20 15:14:03.256089 sshd-session[5644]: pam_unix(sshd:session): session closed for user core Jan 20 15:14:03.259000 audit[5644]: USER_END pid=5644 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.279243 kernel: audit: type=1106 audit(1768922043.259:956): pid=5644 uid=0 auid=500 ses=31 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.279377 kernel: audit: type=1104 audit(1768922043.259:957): pid=5644 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.259000 audit[5644]: CRED_DISP pid=5644 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.286297 systemd[1]: sshd@29-10.0.0.116:22-10.0.0.1:60480.service: Deactivated successfully. Jan 20 15:14:03.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.116:22-10.0.0.1:60480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:03.289417 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 15:14:03.291314 systemd-logind[1631]: Session 31 logged out. Waiting for processes to exit. Jan 20 15:14:03.295834 systemd[1]: Started sshd@30-10.0.0.116:22-10.0.0.1:60484.service - OpenSSH per-connection server daemon (10.0.0.1:60484). Jan 20 15:14:03.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.116:22-10.0.0.1:60484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:03.297248 systemd-logind[1631]: Removed session 31. 
Jan 20 15:14:03.366000 audit[5662]: USER_ACCT pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.368258 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 60484 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:14:03.368000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.368000 audit[5662]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffee7688bf0 a2=3 a3=0 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:03.368000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:03.370618 sshd-session[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:14:03.379519 systemd-logind[1631]: New session 32 of user core. Jan 20 15:14:03.388146 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 20 15:14:03.399000 audit[5662]: USER_START pid=5662 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.404000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.809210 sshd[5668]: Connection closed by 10.0.0.1 port 60484 Jan 20 15:14:03.809591 sshd-session[5662]: pam_unix(sshd:session): session closed for user core Jan 20 15:14:03.813000 audit[5662]: USER_END pid=5662 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.813000 audit[5662]: CRED_DISP pid=5662 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.821758 systemd[1]: sshd@30-10.0.0.116:22-10.0.0.1:60484.service: Deactivated successfully. Jan 20 15:14:03.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.116:22-10.0.0.1:60484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:03.825279 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 15:14:03.828183 systemd-logind[1631]: Session 32 logged out. Waiting for processes to exit. 
Jan 20 15:14:03.832176 systemd-logind[1631]: Removed session 32. Jan 20 15:14:03.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.116:22-10.0.0.1:60492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:03.835821 systemd[1]: Started sshd@31-10.0.0.116:22-10.0.0.1:60492.service - OpenSSH per-connection server daemon (10.0.0.1:60492). Jan 20 15:14:03.949000 audit[5680]: USER_ACCT pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.950664 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 60492 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:14:03.951000 audit[5680]: CRED_ACQ pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.951000 audit[5680]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffce868260 a2=3 a3=0 items=0 ppid=1 pid=5680 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:03.951000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:03.953941 sshd-session[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:14:03.972378 systemd-logind[1631]: New session 33 of user core. Jan 20 15:14:03.989151 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 20 15:14:03.993000 audit[5680]: USER_START pid=5680 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:03.997000 audit[5684]: CRED_ACQ pid=5684 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:04.795280 sshd[5684]: Connection closed by 10.0.0.1 port 60492 Jan 20 15:14:04.796062 sshd-session[5680]: pam_unix(sshd:session): session closed for user core Jan 20 15:14:04.803000 audit[5680]: USER_END pid=5680 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:04.803000 audit[5680]: CRED_DISP pid=5680 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:04.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.116:22-10.0.0.1:60494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:04.809252 systemd[1]: Started sshd@32-10.0.0.116:22-10.0.0.1:60494.service - OpenSSH per-connection server daemon (10.0.0.1:60494). Jan 20 15:14:04.813133 systemd[1]: sshd@31-10.0.0.116:22-10.0.0.1:60492.service: Deactivated successfully. 
Jan 20 15:14:04.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.116:22-10.0.0.1:60492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:04.818157 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 15:14:04.822964 systemd-logind[1631]: Session 33 logged out. Waiting for processes to exit. Jan 20 15:14:04.824682 systemd-logind[1631]: Removed session 33. Jan 20 15:14:04.889000 audit[5702]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:14:04.889000 audit[5702]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffa0584a10 a2=0 a3=7fffa05849fc items=0 ppid=3041 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:04.889000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:14:04.897000 audit[5696]: USER_ACCT pid=5696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:04.899187 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 60494 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:14:04.900000 audit[5696]: CRED_ACQ pid=5696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:04.900000 audit[5696]: SYSCALL arch=c000003e syscall=1 
success=yes exit=3 a0=8 a1=7ffeab3e39f0 a2=3 a3=0 items=0 ppid=1 pid=5696 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:04.900000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:04.900000 audit[5702]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:14:04.900000 audit[5702]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffa0584a10 a2=0 a3=7fffa05849fc items=0 ppid=3041 pid=5702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:04.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:14:04.902531 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:14:04.912650 systemd-logind[1631]: New session 34 of user core. Jan 20 15:14:04.921144 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 20 15:14:04.922000 audit[5705]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:14:04.922000 audit[5705]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff16474cc0 a2=0 a3=7fff16474cac items=0 ppid=3041 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:04.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:14:04.927000 audit[5696]: USER_START pid=5696 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:04.928000 audit[5705]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5705 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:14:04.928000 audit[5705]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff16474cc0 a2=0 a3=0 items=0 ppid=3041 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:04.928000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:14:04.931000 audit[5706]: CRED_ACQ pid=5706 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jan 20 15:14:05.242942 sshd[5706]: Connection closed by 10.0.0.1 port 60494 Jan 20 15:14:05.243383 sshd-session[5696]: pam_unix(sshd:session): session closed for user core Jan 20 15:14:05.245000 audit[5696]: USER_END pid=5696 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:05.245000 audit[5696]: CRED_DISP pid=5696 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:05.257527 systemd[1]: sshd@32-10.0.0.116:22-10.0.0.1:60494.service: Deactivated successfully. Jan 20 15:14:05.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.116:22-10.0.0.1:60494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:05.275725 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 15:14:05.278686 systemd-logind[1631]: Session 34 logged out. Waiting for processes to exit. Jan 20 15:14:05.284367 systemd[1]: Started sshd@33-10.0.0.116:22-10.0.0.1:60498.service - OpenSSH per-connection server daemon (10.0.0.1:60498). Jan 20 15:14:05.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.116:22-10.0.0.1:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:05.286066 systemd-logind[1631]: Removed session 34. 
Jan 20 15:14:05.347000 audit[5717]: USER_ACCT pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:05.348630 sshd[5717]: Accepted publickey for core from 10.0.0.1 port 60498 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:14:05.349000 audit[5717]: CRED_ACQ pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:05.349000 audit[5717]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc574fda50 a2=3 a3=0 items=0 ppid=1 pid=5717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:05.349000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:05.352055 sshd-session[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:14:05.371672 systemd-logind[1631]: New session 35 of user core. Jan 20 15:14:05.385190 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 20 15:14:05.389000 audit[5717]: USER_START pid=5717 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:05.392000 audit[5721]: CRED_ACQ pid=5721 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:05.517095 sshd[5721]: Connection closed by 10.0.0.1 port 60498 Jan 20 15:14:05.519164 sshd-session[5717]: pam_unix(sshd:session): session closed for user core Jan 20 15:14:05.520000 audit[5717]: USER_END pid=5717 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:05.521000 audit[5717]: CRED_DISP pid=5717 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:05.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.116:22-10.0.0.1:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:05.526106 systemd[1]: sshd@33-10.0.0.116:22-10.0.0.1:60498.service: Deactivated successfully. Jan 20 15:14:05.530753 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 15:14:05.535150 systemd-logind[1631]: Session 35 logged out. Waiting for processes to exit. 
Jan 20 15:14:05.537207 systemd-logind[1631]: Removed session 35. Jan 20 15:14:06.636642 containerd[1658]: time="2026-01-20T15:14:06.634201533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 15:14:06.732622 containerd[1658]: time="2026-01-20T15:14:06.732491551Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:14:06.734662 containerd[1658]: time="2026-01-20T15:14:06.734556251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 15:14:06.734662 containerd[1658]: time="2026-01-20T15:14:06.734674432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 15:14:06.735142 kubelet[2881]: E0120 15:14:06.735044 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:14:06.735716 kubelet[2881]: E0120 15:14:06.735149 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 15:14:06.735716 kubelet[2881]: E0120 15:14:06.735530 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztvjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bf79bffbc-qt64q_calico-system(746e4480-dd89-4ee6-ba05-3e214024a83b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 15:14:06.736282 containerd[1658]: time="2026-01-20T15:14:06.736210286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:14:06.737925 kubelet[2881]: E0120 15:14:06.737679 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b" Jan 20 15:14:06.841132 containerd[1658]: time="2026-01-20T15:14:06.840997535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
15:14:06.842488 containerd[1658]: time="2026-01-20T15:14:06.842433804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:14:06.842734 containerd[1658]: time="2026-01-20T15:14:06.842488230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:14:06.843052 kubelet[2881]: E0120 15:14:06.842963 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:14:06.843119 kubelet[2881]: E0120 15:14:06.843060 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:14:06.843319 kubelet[2881]: E0120 15:14:06.843224 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9ccp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-bggk4_calico-apiserver(5b2bf4f6-7ff7-4a4a-b602-713112aeec36): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:14:06.845042 kubelet[2881]: E0120 15:14:06.844963 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:14:07.638493 kubelet[2881]: E0120 15:14:07.638389 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9" Jan 20 15:14:08.634661 containerd[1658]: time="2026-01-20T15:14:08.634341552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 15:14:08.710345 containerd[1658]: time="2026-01-20T15:14:08.710248168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:14:08.712485 containerd[1658]: time="2026-01-20T15:14:08.712405823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 15:14:08.712485 containerd[1658]: time="2026-01-20T15:14:08.712443516Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 15:14:08.713256 kubelet[2881]: E0120 15:14:08.713023 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:14:08.713256 kubelet[2881]: E0120 15:14:08.713118 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 15:14:08.713747 kubelet[2881]: E0120 15:14:08.713314 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9hzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6848f96b7-l6nzr_calico-apiserver(26e4b1e4-d471-4e91-bbfc-9aa64bff08f3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 15:14:08.715282 kubelet[2881]: E0120 15:14:08.715206 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:14:09.634121 containerd[1658]: time="2026-01-20T15:14:09.633976018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 15:14:09.713964 containerd[1658]: time="2026-01-20T15:14:09.713893468Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:14:09.715634 containerd[1658]: time="2026-01-20T15:14:09.715490352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 15:14:09.715634 containerd[1658]: time="2026-01-20T15:14:09.715529666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 15:14:09.716094 kubelet[2881]: E0120 15:14:09.716000 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:14:09.716094 kubelet[2881]: E0120 15:14:09.716080 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 15:14:09.716533 kubelet[2881]: E0120 15:14:09.716206 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a8dee0b7dc7f4ac08b9f27fb940bf054,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 15:14:09.719416 containerd[1658]: time="2026-01-20T15:14:09.719009156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 15:14:09.796146 containerd[1658]: time="2026-01-20T15:14:09.796060188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 15:14:09.797908 containerd[1658]: time="2026-01-20T15:14:09.797685302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 15:14:09.798259 kubelet[2881]: E0120 15:14:09.798204 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:14:09.798259 kubelet[2881]: E0120 15:14:09.798257 2881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 15:14:09.798420 kubelet[2881]: E0120 15:14:09.798356 2881 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j2bst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd6f47785-wdj86_calico-system(314fd9f9-2d2b-4b58-a692-6f702aedf12f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 15:14:09.798609 containerd[1658]: time="2026-01-20T15:14:09.797740289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 15:14:09.799942 kubelet[2881]: E0120 15:14:09.799755 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f" Jan 20 15:14:10.539051 systemd[1]: Started sshd@34-10.0.0.116:22-10.0.0.1:60510.service - OpenSSH per-connection server daemon (10.0.0.1:60510). Jan 20 15:14:10.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.116:22-10.0.0.1:60510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:10.542160 kernel: kauditd_printk_skb: 57 callbacks suppressed Jan 20 15:14:10.542268 kernel: audit: type=1130 audit(1768922050.538:999): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.116:22-10.0.0.1:60510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:14:10.620000 audit[5755]: USER_ACCT pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.621494 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 60510 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:14:10.626035 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:14:10.632939 kernel: audit: type=1101 audit(1768922050.620:1000): pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.623000 audit[5755]: CRED_ACQ pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.642948 systemd-logind[1631]: New session 36 of user core. 
Jan 20 15:14:10.647006 kernel: audit: type=1103 audit(1768922050.623:1001): pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.647168 kernel: audit: type=1006 audit(1768922050.623:1002): pid=5755 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 20 15:14:10.623000 audit[5755]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9c40c560 a2=3 a3=0 items=0 ppid=1 pid=5755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:10.671323 kernel: audit: type=1300 audit(1768922050.623:1002): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9c40c560 a2=3 a3=0 items=0 ppid=1 pid=5755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:10.671626 kernel: audit: type=1327 audit(1768922050.623:1002): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:10.623000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:10.676319 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 20 15:14:10.679000 audit[5755]: USER_START pid=5755 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.683000 audit[5759]: CRED_ACQ pid=5759 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.703972 kernel: audit: type=1105 audit(1768922050.679:1003): pid=5755 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.704041 kernel: audit: type=1103 audit(1768922050.683:1004): pid=5759 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.792606 sshd[5759]: Connection closed by 10.0.0.1 port 60510 Jan 20 15:14:10.793118 sshd-session[5755]: pam_unix(sshd:session): session closed for user core Jan 20 15:14:10.794000 audit[5755]: USER_END pid=5755 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.799450 systemd[1]: sshd@34-10.0.0.116:22-10.0.0.1:60510.service: Deactivated successfully. 
Jan 20 15:14:10.803544 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 15:14:10.807584 systemd-logind[1631]: Session 36 logged out. Waiting for processes to exit. Jan 20 15:14:10.809391 systemd-logind[1631]: Removed session 36. Jan 20 15:14:10.794000 audit[5755]: CRED_DISP pid=5755 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.821193 kernel: audit: type=1106 audit(1768922050.794:1005): pid=5755 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.821283 kernel: audit: type=1104 audit(1768922050.794:1006): pid=5755 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:10.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.116:22-10.0.0.1:60510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:14:14.632962 kubelet[2881]: E0120 15:14:14.632811 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:14:15.641911 kubelet[2881]: E0120 15:14:15.641203 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd" Jan 20 15:14:15.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.116:22-10.0.0.1:55130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:15.814125 systemd[1]: Started sshd@35-10.0.0.116:22-10.0.0.1:55130.service - OpenSSH per-connection server daemon (10.0.0.1:55130). Jan 20 15:14:15.817101 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 15:14:15.817179 kernel: audit: type=1130 audit(1768922055.813:1008): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.116:22-10.0.0.1:55130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:14:15.911000 audit[5772]: USER_ACCT pid=5772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:15.913167 sshd[5772]: Accepted publickey for core from 10.0.0.1 port 55130 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:14:15.918343 sshd-session[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:14:15.924010 kernel: audit: type=1101 audit(1768922055.911:1009): pid=5772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:15.924071 kernel: audit: type=1103 audit(1768922055.915:1010): pid=5772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:15.915000 audit[5772]: CRED_ACQ pid=5772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:15.931012 systemd-logind[1631]: New session 37 of user core. 
Jan 20 15:14:15.941716 kernel: audit: type=1006 audit(1768922055.915:1011): pid=5772 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 20 15:14:15.941930 kernel: audit: type=1300 audit(1768922055.915:1011): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcc1e1100 a2=3 a3=0 items=0 ppid=1 pid=5772 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:15.915000 audit[5772]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffcc1e1100 a2=3 a3=0 items=0 ppid=1 pid=5772 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:15.915000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:15.973465 kernel: audit: type=1327 audit(1768922055.915:1011): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 15:14:15.980184 systemd[1]: Started session-37.scope - Session 37 of User core. 
Jan 20 15:14:15.984000 audit[5772]: USER_START pid=5772 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:15.988000 audit[5776]: CRED_ACQ pid=5776 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:16.013938 kernel: audit: type=1105 audit(1768922055.984:1012): pid=5772 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:16.014038 kernel: audit: type=1103 audit(1768922055.988:1013): pid=5776 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:16.105426 sshd[5776]: Connection closed by 10.0.0.1 port 55130 Jan 20 15:14:16.105934 sshd-session[5772]: pam_unix(sshd:session): session closed for user core Jan 20 15:14:16.107000 audit[5772]: USER_END pid=5772 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:16.111438 systemd-logind[1631]: Session 37 logged out. Waiting for processes to exit. 
Jan 20 15:14:16.113416 systemd[1]: sshd@35-10.0.0.116:22-10.0.0.1:55130.service: Deactivated successfully. Jan 20 15:14:16.116714 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 15:14:16.107000 audit[5772]: CRED_DISP pid=5772 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:16.120087 systemd-logind[1631]: Removed session 37. Jan 20 15:14:16.127617 kernel: audit: type=1106 audit(1768922056.107:1014): pid=5772 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:16.127702 kernel: audit: type=1104 audit(1768922056.107:1015): pid=5772 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:16.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.116:22-10.0.0.1:55130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 15:14:17.532000 audit[5790]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:14:17.532000 audit[5790]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7f8f4490 a2=0 a3=7ffc7f8f447c items=0 ppid=3041 pid=5790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:17.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:14:17.544000 audit[5790]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 15:14:17.544000 audit[5790]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc7f8f4490 a2=0 a3=7ffc7f8f447c items=0 ppid=3041 pid=5790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 15:14:17.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 15:14:18.633901 kubelet[2881]: E0120 15:14:18.632472 2881 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 15:14:18.638180 kubelet[2881]: E0120 15:14:18.638006 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36" Jan 20 15:14:19.635012 kubelet[2881]: E0120 15:14:19.634943 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-l6nzr" podUID="26e4b1e4-d471-4e91-bbfc-9aa64bff08f3" Jan 20 15:14:21.126893 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 15:14:21.127040 kernel: audit: type=1130 audit(1768922061.122:1019): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.116:22-10.0.0.1:55132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:21.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.116:22-10.0.0.1:55132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 15:14:21.123240 systemd[1]: Started sshd@36-10.0.0.116:22-10.0.0.1:55132.service - OpenSSH per-connection server daemon (10.0.0.1:55132). 
Jan 20 15:14:21.210000 audit[5792]: USER_ACCT pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:21.215028 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 55132 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I Jan 20 15:14:21.222509 sshd-session[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 15:14:21.227929 kernel: audit: type=1101 audit(1768922061.210:1020): pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:21.219000 audit[5792]: CRED_ACQ pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:21.241027 kernel: audit: type=1103 audit(1768922061.219:1021): pid=5792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 15:14:21.238053 systemd-logind[1631]: New session 38 of user core. 
Jan 20 15:14:21.279084 kernel: audit: type=1006 audit(1768922061.220:1022): pid=5792 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1
Jan 20 15:14:21.279209 kernel: audit: type=1300 audit(1768922061.220:1022): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe82279410 a2=3 a3=0 items=0 ppid=1 pid=5792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 15:14:21.220000 audit[5792]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe82279410 a2=3 a3=0 items=0 ppid=1 pid=5792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 15:14:21.250207 systemd[1]: Started session-38.scope - Session 38 of User core.
Jan 20 15:14:21.220000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 15:14:21.262000 audit[5792]: USER_START pid=5792 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:21.301666 kernel: audit: type=1327 audit(1768922061.220:1022): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 15:14:21.303008 kernel: audit: type=1105 audit(1768922061.262:1023): pid=5792 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:21.303065 kernel: audit: type=1103 audit(1768922061.278:1024): pid=5799 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:21.278000 audit[5799]: CRED_ACQ pid=5799 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:21.421256 sshd[5799]: Connection closed by 10.0.0.1 port 55132
Jan 20 15:14:21.422278 sshd-session[5792]: pam_unix(sshd:session): session closed for user core
Jan 20 15:14:21.424000 audit[5792]: USER_END pid=5792 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:21.432021 systemd-logind[1631]: Session 38 logged out. Waiting for processes to exit.
Jan 20 15:14:21.433676 systemd[1]: sshd@36-10.0.0.116:22-10.0.0.1:55132.service: Deactivated successfully.
Jan 20 15:14:21.436913 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 15:14:21.439403 systemd-logind[1631]: Removed session 38.
Jan 20 15:14:21.425000 audit[5792]: CRED_DISP pid=5792 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:21.454542 kernel: audit: type=1106 audit(1768922061.424:1025): pid=5792 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:21.454664 kernel: audit: type=1104 audit(1768922061.425:1026): pid=5792 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:21.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.116:22-10.0.0.1:55132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:14:21.632603 kubelet[2881]: E0120 15:14:21.632545 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bf79bffbc-qt64q" podUID="746e4480-dd89-4ee6-ba05-3e214024a83b"
Jan 20 15:14:21.634816 kubelet[2881]: E0120 15:14:21.634732 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f"
Jan 20 15:14:22.632832 kubelet[2881]: E0120 15:14:22.632633 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lmlng" podUID="5fa31347-4392-4f2f-a0ac-7346e7069fc9"
Jan 20 15:14:26.440327 systemd[1]: Started sshd@37-10.0.0.116:22-10.0.0.1:41732.service - OpenSSH per-connection server daemon (10.0.0.1:41732).
Jan 20 15:14:26.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.116:22-10.0.0.1:41732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:14:26.442459 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 15:14:26.442547 kernel: audit: type=1130 audit(1768922066.439:1028): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.116:22-10.0.0.1:41732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:14:26.517000 audit[5815]: USER_ACCT pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.518580 sshd[5815]: Accepted publickey for core from 10.0.0.1 port 41732 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:14:26.521172 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:14:26.528253 kernel: audit: type=1101 audit(1768922066.517:1029): pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.518000 audit[5815]: CRED_ACQ pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.530000 systemd-logind[1631]: New session 39 of user core.
Jan 20 15:14:26.537932 kernel: audit: type=1103 audit(1768922066.518:1030): pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.538008 kernel: audit: type=1006 audit(1768922066.518:1031): pid=5815 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1
Jan 20 15:14:26.518000 audit[5815]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa1c2e9d0 a2=3 a3=0 items=0 ppid=1 pid=5815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 15:14:26.545564 systemd[1]: Started session-39.scope - Session 39 of User core.
Jan 20 15:14:26.554987 kernel: audit: type=1300 audit(1768922066.518:1031): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa1c2e9d0 a2=3 a3=0 items=0 ppid=1 pid=5815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 15:14:26.555289 kernel: audit: type=1327 audit(1768922066.518:1031): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 15:14:26.518000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 15:14:26.558958 kernel: audit: type=1105 audit(1768922066.550:1032): pid=5815 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.550000 audit[5815]: USER_START pid=5815 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.553000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.577925 kernel: audit: type=1103 audit(1768922066.553:1033): pid=5819 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.639746 kubelet[2881]: E0120 15:14:26.639649 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jr4nz" podUID="c4e14075-1569-42bc-b38f-776a269a4fcd"
Jan 20 15:14:26.651325 sshd[5819]: Connection closed by 10.0.0.1 port 41732
Jan 20 15:14:26.651689 sshd-session[5815]: pam_unix(sshd:session): session closed for user core
Jan 20 15:14:26.654000 audit[5815]: USER_END pid=5815 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.660640 systemd[1]: sshd@37-10.0.0.116:22-10.0.0.1:41732.service: Deactivated successfully.
Jan 20 15:14:26.663318 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 15:14:26.664527 systemd-logind[1631]: Session 39 logged out. Waiting for processes to exit.
Jan 20 15:14:26.666262 systemd-logind[1631]: Removed session 39.
Jan 20 15:14:26.654000 audit[5815]: CRED_DISP pid=5815 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.678493 kernel: audit: type=1106 audit(1768922066.654:1034): pid=5815 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.678670 kernel: audit: type=1104 audit(1768922066.654:1035): pid=5815 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:26.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.116:22-10.0.0.1:41732 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:14:31.633384 kubelet[2881]: E0120 15:14:31.633289 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6848f96b7-bggk4" podUID="5b2bf4f6-7ff7-4a4a-b602-713112aeec36"
Jan 20 15:14:31.670515 systemd[1]: Started sshd@38-10.0.0.116:22-10.0.0.1:41744.service - OpenSSH per-connection server daemon (10.0.0.1:41744).
Jan 20 15:14:31.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.116:22-10.0.0.1:41744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:14:31.674154 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 20 15:14:31.674230 kernel: audit: type=1130 audit(1768922071.669:1037): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.116:22-10.0.0.1:41744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:14:31.751000 audit[5836]: USER_ACCT pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:31.752364 sshd[5836]: Accepted publickey for core from 10.0.0.1 port 41744 ssh2: RSA SHA256:CHg9qdQh9zEeIc2UiyDRuRMIax/ZShJjltjZVpTjR3I
Jan 20 15:14:31.759311 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 15:14:31.779658 kernel: audit: type=1101 audit(1768922071.751:1038): pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:31.779816 kernel: audit: type=1103 audit(1768922071.753:1039): pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:31.753000 audit[5836]: CRED_ACQ pid=5836 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:31.773669 systemd-logind[1631]: New session 40 of user core.
Jan 20 15:14:31.800192 kernel: audit: type=1006 audit(1768922071.753:1040): pid=5836 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1
Jan 20 15:14:31.800342 kernel: audit: type=1300 audit(1768922071.753:1040): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde89cfb80 a2=3 a3=0 items=0 ppid=1 pid=5836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 15:14:31.753000 audit[5836]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde89cfb80 a2=3 a3=0 items=0 ppid=1 pid=5836 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 15:14:31.804816 kernel: audit: type=1327 audit(1768922071.753:1040): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 15:14:31.753000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 15:14:31.806310 systemd[1]: Started session-40.scope - Session 40 of User core.
Jan 20 15:14:31.829983 kernel: audit: type=1105 audit(1768922071.814:1041): pid=5836 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:31.814000 audit[5836]: USER_START pid=5836 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:31.830000 audit[5840]: CRED_ACQ pid=5840 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:31.843953 kernel: audit: type=1103 audit(1768922071.830:1042): pid=5840 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:31.993245 sshd[5840]: Connection closed by 10.0.0.1 port 41744
Jan 20 15:14:31.993203 sshd-session[5836]: pam_unix(sshd:session): session closed for user core
Jan 20 15:14:31.994000 audit[5836]: USER_END pid=5836 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:31.999722 systemd-logind[1631]: Session 40 logged out. Waiting for processes to exit.
Jan 20 15:14:32.002612 systemd[1]: sshd@38-10.0.0.116:22-10.0.0.1:41744.service: Deactivated successfully.
Jan 20 15:14:32.006654 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 15:14:32.010376 systemd-logind[1631]: Removed session 40.
Jan 20 15:14:31.995000 audit[5836]: CRED_DISP pid=5836 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:32.021651 kernel: audit: type=1106 audit(1768922071.994:1043): pid=5836 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:32.021743 kernel: audit: type=1104 audit(1768922071.995:1044): pid=5836 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 15:14:32.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.116:22-10.0.0.1:41744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 15:14:33.633671 kubelet[2881]: E0120 15:14:33.633388 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd6f47785-wdj86" podUID="314fd9f9-2d2b-4b58-a692-6f702aedf12f"
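An aside for anyone reading the audit trail above: the PROCTITLE records encode the process title as a hex string (here, 737368642D...765D). A minimal sketch of decoding it, using only the Python standard library; the helper name `decode_proctitle` is our own, not part of any audit tooling:

```python
def decode_proctitle(hex_title: str) -> str:
    """Decode the hex-encoded proctitle field of an audit PROCTITLE record.

    In full records, NUL bytes separate argv elements, so we render
    them as spaces for readability.
    """
    return bytes.fromhex(hex_title).replace(b"\x00", b" ").decode("utf-8", errors="replace")

# Hex string copied verbatim from the PROCTITLE entries in the log above.
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
# → sshd-session: core [priv]
```

This confirms the repeated PROCTITLE records all belong to the privileged `sshd-session` processes handling the `core` user's logins.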