Jan 20 01:47:42.908062 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:31:13 -00 2026
Jan 20 01:47:42.908115 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62ba7e41f0b11e3efb63e9a03ee9de0b370deb0ea547dd39e8d3060b03ecf9e8
Jan 20 01:47:42.908136 kernel: BIOS-provided physical RAM map:
Jan 20 01:47:42.908151 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 01:47:42.908159 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 01:47:42.908168 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 01:47:42.908178 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 01:47:42.908187 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 01:47:42.908196 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 01:47:42.908204 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 01:47:42.908214 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 20 01:47:42.908226 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 01:47:42.908235 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 01:47:42.908244 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 01:47:42.908255 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 01:47:42.908264 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 01:47:42.908276 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 01:47:42.908285 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 01:47:42.908294 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 01:47:42.908304 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 01:47:42.908313 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 01:47:42.908323 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 01:47:42.908384 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 01:47:42.908398 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:47:42.908407 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 01:47:42.908417 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 01:47:42.908430 kernel: NX (Execute Disable) protection: active
Jan 20 01:47:42.908440 kernel: APIC: Static calls initialized
Jan 20 01:47:42.908450 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 20 01:47:42.908460 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 20 01:47:42.908469 kernel: extended physical RAM map:
Jan 20 01:47:42.908478 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 20 01:47:42.908488 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 20 01:47:42.908497 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 20 01:47:42.908506 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 20 01:47:42.908516 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 20 01:47:42.908526 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 20 01:47:42.908539 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 20 01:47:42.908549 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 20 01:47:42.908559 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 20 01:47:42.908574 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 20 01:47:42.911279 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 20 01:47:42.911295 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 20 01:47:42.911306 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 20 01:47:42.911316 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 20 01:47:42.911327 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 20 01:47:42.911381 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 20 01:47:42.911391 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 20 01:47:42.911401 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 20 01:47:42.911412 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 20 01:47:42.911427 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 20 01:47:42.911437 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 20 01:47:42.911446 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 20 01:47:42.911456 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 20 01:47:42.911466 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 20 01:47:42.911476 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:47:42.911486 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 20 01:47:42.911499 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 20 01:47:42.911509 kernel: efi: EFI v2.7 by EDK II
Jan 20 01:47:42.911519 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 20 01:47:42.911528 kernel: random: crng init done
Jan 20 01:47:42.911543 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 20 01:47:42.911553 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 20 01:47:42.911563 kernel: secureboot: Secure boot disabled
Jan 20 01:47:42.911573 kernel: SMBIOS 2.8 present.
Jan 20 01:47:42.911621 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 20 01:47:42.911633 kernel: DMI: Memory slots populated: 1/1
Jan 20 01:47:42.911642 kernel: Hypervisor detected: KVM
Jan 20 01:47:42.911653 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 01:47:42.911663 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 01:47:42.911673 kernel: kvm-clock: using sched offset of 26340674413 cycles
Jan 20 01:47:42.911684 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 01:47:42.911699 kernel: tsc: Detected 2445.426 MHz processor
Jan 20 01:47:42.911710 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 01:47:42.911720 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 01:47:42.911731 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 20 01:47:42.911742 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 20 01:47:42.911752 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 01:47:42.911763 kernel: Using GB pages for direct mapping
Jan 20 01:47:42.911777 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:47:42.911787 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 20 01:47:42.911798 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 20 01:47:42.911810 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:47:42.911823 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:47:42.911833 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 20 01:47:42.911844 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:47:42.911859 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:47:42.911869 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:47:42.911880 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:47:42.911891 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 20 01:47:42.911903 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 20 01:47:42.911914 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 20 01:47:42.911925 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 20 01:47:42.911939 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 20 01:47:42.911951 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 20 01:47:42.911962 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 20 01:47:42.911973 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 20 01:47:42.911984 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 20 01:47:42.911994 kernel: No NUMA configuration found
Jan 20 01:47:42.912007 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 20 01:47:42.912017 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 20 01:47:42.912032 kernel: Zone ranges:
Jan 20 01:47:42.912044 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 01:47:42.912056 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 20 01:47:42.912067 kernel: Normal empty
Jan 20 01:47:42.912078 kernel: Device empty
Jan 20 01:47:42.912089 kernel: Movable zone start for each node
Jan 20 01:47:42.912100 kernel: Early memory node ranges
Jan 20 01:47:42.912114 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 20 01:47:42.912125 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 20 01:47:42.912136 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 20 01:47:42.912147 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 20 01:47:42.912157 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 20 01:47:42.912168 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 20 01:47:42.912178 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 20 01:47:42.912189 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 20 01:47:42.912203 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 20 01:47:42.912214 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:47:42.912235 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 20 01:47:42.912249 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 20 01:47:42.912260 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:47:42.912272 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 20 01:47:42.912283 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 20 01:47:42.912294 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 20 01:47:42.912305 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 20 01:47:42.912319 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 20 01:47:42.912330 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 01:47:42.912393 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 01:47:42.912404 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 01:47:42.912419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 01:47:42.912430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 01:47:42.912441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 01:47:42.912452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 01:47:42.912463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 01:47:42.912474 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 01:47:42.912485 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 20 01:47:42.912499 kernel: TSC deadline timer available
Jan 20 01:47:42.912510 kernel: CPU topo: Max. logical packages: 1
Jan 20 01:47:42.912521 kernel: CPU topo: Max. logical dies: 1
Jan 20 01:47:42.912532 kernel: CPU topo: Max. dies per package: 1
Jan 20 01:47:42.912542 kernel: CPU topo: Max. threads per core: 1
Jan 20 01:47:42.912553 kernel: CPU topo: Num. cores per package: 4
Jan 20 01:47:42.912564 kernel: CPU topo: Num. threads per package: 4
Jan 20 01:47:42.912578 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 20 01:47:42.912630 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 01:47:42.912642 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 20 01:47:42.912652 kernel: kvm-guest: setup PV sched yield
Jan 20 01:47:42.912664 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 20 01:47:42.912675 kernel: Booting paravirtualized kernel on KVM
Jan 20 01:47:42.912687 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 01:47:42.912699 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 20 01:47:42.912716 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 20 01:47:42.912728 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 20 01:47:42.912739 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 20 01:47:42.912750 kernel: kvm-guest: PV spinlocks enabled
Jan 20 01:47:42.912762 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 01:47:42.912775 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62ba7e41f0b11e3efb63e9a03ee9de0b370deb0ea547dd39e8d3060b03ecf9e8
Jan 20 01:47:42.912789 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 01:47:42.912800 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:47:42.912811 kernel: Fallback order for Node 0: 0
Jan 20 01:47:42.912822 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 20 01:47:42.912833 kernel: Policy zone: DMA32
Jan 20 01:47:42.912845 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:47:42.912856 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 20 01:47:42.912870 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 01:47:42.912882 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 01:47:42.912894 kernel: Dynamic Preempt: voluntary
Jan 20 01:47:42.912904 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:47:42.912923 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:47:42.912935 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 20 01:47:42.912947 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:47:42.912961 kernel: Rude variant of Tasks RCU enabled.
Jan 20 01:47:42.912972 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:47:42.912982 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:47:42.912993 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 20 01:47:42.913004 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:47:42.913015 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:47:42.913027 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 20 01:47:42.913040 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 20 01:47:42.913051 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:47:42.913062 kernel: Console: colour dummy device 80x25
Jan 20 01:47:42.913073 kernel: printk: legacy console [ttyS0] enabled
Jan 20 01:47:42.913084 kernel: ACPI: Core revision 20240827
Jan 20 01:47:42.913095 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 20 01:47:42.913106 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 01:47:42.913117 kernel: x2apic enabled
Jan 20 01:47:42.913131 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 01:47:42.913142 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 20 01:47:42.913153 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 20 01:47:42.913165 kernel: kvm-guest: setup PV IPIs
Jan 20 01:47:42.913176 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 20 01:47:42.913187 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 01:47:42.913199 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 20 01:47:42.913213 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 01:47:42.913224 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 01:47:42.913235 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 01:47:42.913246 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 01:47:42.913257 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 01:47:42.913268 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 01:47:42.913279 kernel: Speculative Store Bypass: Vulnerable
Jan 20 01:47:42.913293 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 01:47:42.913305 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 01:47:42.913316 kernel: active return thunk: srso_alias_return_thunk
Jan 20 01:47:42.913327 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 01:47:42.913391 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 20 01:47:42.913403 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 20 01:47:42.913418 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 01:47:42.913429 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 01:47:42.913440 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 01:47:42.913452 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 01:47:42.913463 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 01:47:42.913474 kernel: Freeing SMP alternatives memory: 32K
Jan 20 01:47:42.913485 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:47:42.913498 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 01:47:42.913509 kernel: landlock: Up and running.
Jan 20 01:47:42.913520 kernel: SELinux: Initializing.
Jan 20 01:47:42.913531 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:47:42.913543 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:47:42.913553 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 20 01:47:42.913565 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 20 01:47:42.913579 kernel: signal: max sigframe size: 1776
Jan 20 01:47:42.925749 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:47:42.925767 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:47:42.925779 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 20 01:47:42.925790 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 01:47:42.925801 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:47:42.925812 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 01:47:42.925823 kernel: .... node #0, CPUs: #1 #2 #3
Jan 20 01:47:42.925843 kernel: smp: Brought up 1 node, 4 CPUs
Jan 20 01:47:42.925856 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 20 01:47:42.925871 kernel: Memory: 2441096K/2565800K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15436K init, 2604K bss, 118764K reserved, 0K cma-reserved)
Jan 20 01:47:42.925883 kernel: devtmpfs: initialized
Jan 20 01:47:42.925893 kernel: x86/mm: Memory block size: 128MB
Jan 20 01:47:42.925908 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 20 01:47:42.925921 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 20 01:47:42.925938 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 20 01:47:42.925950 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 20 01:47:42.925961 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 20 01:47:42.925972 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 20 01:47:42.925984 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:47:42.925995 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 20 01:47:42.926009 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:47:42.926021 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:47:42.926032 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:47:42.926044 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:47:42.926054 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 01:47:42.926065 kernel: audit: type=2000 audit(1768873629.315:1): state=initialized audit_enabled=0 res=1
Jan 20 01:47:42.926076 kernel: cpuidle: using governor menu
Jan 20 01:47:42.926087 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:47:42.926101 kernel: dca service started, version 1.12.1
Jan 20 01:47:42.926112 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 20 01:47:42.926124 kernel: PCI: Using configuration type 1 for base access
Jan 20 01:47:42.926135 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 01:47:42.926146 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:47:42.926157 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:47:42.926168 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:47:42.926182 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:47:42.926193 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:47:42.926204 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:47:42.926215 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:47:42.926226 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:47:42.926237 kernel: ACPI: Interpreter enabled
Jan 20 01:47:42.926248 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 20 01:47:42.926261 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 01:47:42.926274 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 01:47:42.926285 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 01:47:42.926295 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 01:47:42.926306 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 01:47:42.931867 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 01:47:42.932145 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 20 01:47:42.932467 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 20 01:47:42.932487 kernel: PCI host bridge to bus 0000:00
Jan 20 01:47:42.932766 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 01:47:42.932977 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 01:47:42.933182 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 01:47:42.933459 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 20 01:47:42.933729 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 20 01:47:42.946027 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 20 01:47:42.946270 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 01:47:42.946657 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 01:47:42.946918 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 20 01:47:42.947147 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 20 01:47:42.947428 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 20 01:47:42.955824 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 20 01:47:42.956111 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 01:47:42.956397 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 21484 usecs
Jan 20 01:47:42.962578 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 01:47:42.962876 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 20 01:47:42.963109 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 20 01:47:42.963401 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 20 01:47:42.969024 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 01:47:42.969280 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 20 01:47:42.973291 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 20 01:47:42.977987 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 20 01:47:42.995753 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 01:47:42.996016 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 20 01:47:42.996255 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 20 01:47:42.996560 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 20 01:47:42.999944 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 20 01:47:43.000202 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 01:47:43.000505 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 01:47:43.005913 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 15625 usecs
Jan 20 01:47:43.006159 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 01:47:43.006453 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 20 01:47:43.006714 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 20 01:47:43.006952 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 01:47:43.007182 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 20 01:47:43.007198 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 01:47:43.007209 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 01:47:43.007229 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 01:47:43.007240 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 01:47:43.007251 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 01:47:43.007262 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 01:47:43.007273 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 01:47:43.007283 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 01:47:43.007294 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 01:47:43.007309 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 01:47:43.007319 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 01:47:43.007331 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 01:47:43.007400 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 01:47:43.007412 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 01:47:43.007423 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 01:47:43.007434 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 01:47:43.007450 kernel: iommu: Default domain type: Translated
Jan 20 01:47:43.007461 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 01:47:43.007472 kernel: efivars: Registered efivars operations
Jan 20 01:47:43.007483 kernel: PCI: Using ACPI for IRQ routing
Jan 20 01:47:43.007493 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 01:47:43.007505 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 20 01:47:43.007515 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 20 01:47:43.007529 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 20 01:47:43.007539 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 20 01:47:43.007551 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 20 01:47:43.007562 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 20 01:47:43.007573 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 20 01:47:43.012670 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 20 01:47:43.012966 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 01:47:43.013218 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 01:47:43.013515 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 01:47:43.013535 kernel: vgaarb: loaded
Jan 20 01:47:43.013548 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 20 01:47:43.013561 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 20 01:47:43.013573 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 01:47:43.018685 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:47:43.018712 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:47:43.018725 kernel: pnp: PnP ACPI init
Jan 20 01:47:43.019014 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 20 01:47:43.019033 kernel: pnp: PnP ACPI: found 6 devices
Jan 20 01:47:43.019045 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 01:47:43.019056 kernel: NET: Registered PF_INET protocol family
Jan 20 01:47:43.019074 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 01:47:43.019085 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 01:47:43.019115 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:47:43.019129 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:47:43.019140 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 01:47:43.019151 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 01:47:43.019162 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:47:43.019176 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:47:43.019187 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:47:43.019200 kernel: NET: Registered PF_XDP protocol family
Jan 20 01:47:43.019504 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 20 01:47:43.019801 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 20 01:47:43.020016 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 01:47:43.020233 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 01:47:43.020538 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 01:47:43.026878 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 20 01:47:43.027102 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 20 01:47:43.027313 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 20 01:47:43.027331 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:47:43.027402 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 20 01:47:43.027423 kernel: Initialise system trusted keyrings
Jan 20 01:47:43.027435 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 01:47:43.027447 kernel: Key type asymmetric registered
Jan 20 01:47:43.027459 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:47:43.027471 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 01:47:43.027483 kernel: io scheduler mq-deadline registered
Jan 20 01:47:43.027495 kernel: io scheduler kyber registered
Jan 20 01:47:43.027509 kernel: io scheduler bfq registered
Jan 20 01:47:43.027520 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 01:47:43.027533 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 01:47:43.027548 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 01:47:43.027559 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 01:47:43.027573 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:47:43.031647 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 01:47:43.031667 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 01:47:43.031682 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 01:47:43.031694 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 01:47:43.031705 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 01:47:43.031974 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 01:47:43.032205 kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 01:47:43.032484 kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T01:47:29 UTC (1768873649)
Jan 20 01:47:43.032747 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 01:47:43.032765 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 01:47:43.032779 kernel: efifb: probing for efifb
Jan 20 01:47:43.032791 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 20 01:47:43.032809 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 20 01:47:43.032821 kernel: efifb: scrolling: redraw
Jan 20 01:47:43.032833 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 01:47:43.032844 kernel: Console: switching to colour frame buffer device 160x50
Jan 20 01:47:43.032857 kernel: fb0: EFI VGA frame buffer device
Jan 20 01:47:43.032868 kernel: pstore: Using crash dump compression: deflate
Jan 20 01:47:43.032880 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 20 01:47:43.032895 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:47:43.032906 kernel: Segment Routing with IPv6
Jan 20 01:47:43.032917 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:47:43.032929 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:47:43.032941 kernel: Key type dns_resolver
registered Jan 20 01:47:43.032953 kernel: IPI shorthand broadcast: enabled Jan 20 01:47:43.032964 kernel: sched_clock: Marking stable (14205778602, 6567910267)->(24079139794, -3305450925) Jan 20 01:47:43.032976 kernel: registered taskstats version 1 Jan 20 01:47:43.032990 kernel: Loading compiled-in X.509 certificates Jan 20 01:47:43.033002 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: afdfbfc7519ef3fa38aa4389b822f24e81c62f9e' Jan 20 01:47:43.033013 kernel: Demotion targets for Node 0: null Jan 20 01:47:43.033024 kernel: Key type .fscrypt registered Jan 20 01:47:43.033035 kernel: Key type fscrypt-provisioning registered Jan 20 01:47:43.033048 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 01:47:43.033062 kernel: ima: Allocated hash algorithm: sha1 Jan 20 01:47:43.033073 kernel: ima: No architecture policies found Jan 20 01:47:43.033085 kernel: clk: Disabling unused clocks Jan 20 01:47:43.033096 kernel: Freeing unused kernel image (initmem) memory: 15436K Jan 20 01:47:43.033107 kernel: Write protecting the kernel read-only data: 45056k Jan 20 01:47:43.033119 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K Jan 20 01:47:43.033130 kernel: Run /init as init process Jan 20 01:47:43.033141 kernel: with arguments: Jan 20 01:47:43.033157 kernel: /init Jan 20 01:47:43.033168 kernel: with environment: Jan 20 01:47:43.033179 kernel: HOME=/ Jan 20 01:47:43.033190 kernel: TERM=linux Jan 20 01:47:43.033201 kernel: SCSI subsystem initialized Jan 20 01:47:43.033212 kernel: libata version 3.00 loaded. 
Jan 20 01:47:43.033516 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 01:47:43.033546 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 01:47:43.045003 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 01:47:43.045268 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 01:47:43.046980 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 01:47:43.047266 kernel: scsi host0: ahci Jan 20 01:47:43.050680 kernel: scsi host1: ahci Jan 20 01:47:43.051052 kernel: scsi host2: ahci Jan 20 01:47:43.051304 kernel: scsi host3: ahci Jan 20 01:47:43.056996 kernel: scsi host4: ahci Jan 20 01:47:43.057255 kernel: scsi host5: ahci Jan 20 01:47:43.057275 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Jan 20 01:47:43.057307 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Jan 20 01:47:43.057320 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Jan 20 01:47:43.057332 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Jan 20 01:47:43.057394 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Jan 20 01:47:43.057407 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Jan 20 01:47:43.057419 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 01:47:43.057431 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 01:47:43.057447 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 01:47:43.057458 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 20 01:47:43.057470 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 01:47:43.057481 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 01:47:43.057493 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:47:43.057506 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 20 01:47:43.057517 
kernel: ata3.00: applying bridge limits Jan 20 01:47:43.057532 kernel: ata3.00: LPM support broken, forcing max_power Jan 20 01:47:43.057544 kernel: ata3.00: configured for UDMA/100 Jan 20 01:47:43.057879 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 01:47:43.058171 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 01:47:43.058475 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Jan 20 01:47:43.064992 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 20 01:47:43.065033 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 01:47:43.065045 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 01:47:43.065057 kernel: GPT:16515071 != 27000831 Jan 20 01:47:43.065069 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 01:47:43.065080 kernel: GPT:16515071 != 27000831 Jan 20 01:47:43.065091 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 01:47:43.065102 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:47:43.068906 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 20 01:47:43.068933 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 01:47:43.068946 kernel: device-mapper: uevent: version 1.0.3 Jan 20 01:47:43.068958 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 01:47:43.068970 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 01:47:43.068982 kernel: raid6: avx2x4 gen() 10425 MB/s Jan 20 01:47:43.068994 kernel: raid6: avx2x2 gen() 10272 MB/s Jan 20 01:47:43.069014 kernel: raid6: avx2x1 gen() 760 MB/s Jan 20 01:47:43.069025 kernel: raid6: using algorithm avx2x4 gen() 10425 MB/s Jan 20 01:47:43.069037 kernel: raid6: .... 
xor() 3075 MB/s, rmw enabled Jan 20 01:47:43.069048 kernel: raid6: using avx2x2 recovery algorithm Jan 20 01:47:43.069060 kernel: xor: automatically using best checksumming function avx Jan 20 01:47:43.069071 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 01:47:43.069083 kernel: BTRFS: device fsid ca982954-e818-4158-83b7-102f75baa62c devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (182) Jan 20 01:47:43.069097 kernel: BTRFS info (device dm-0): first mount of filesystem ca982954-e818-4158-83b7-102f75baa62c Jan 20 01:47:43.069109 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:47:43.069121 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:47:43.069133 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 01:47:43.069144 kernel: loop: module loaded Jan 20 01:47:43.069156 kernel: loop0: detected capacity change from 0 to 100160 Jan 20 01:47:43.069167 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:47:43.069181 kernel: hrtimer: interrupt took 16205547 ns Jan 20 01:47:43.069195 systemd[1]: Successfully made /usr/ read-only. Jan 20 01:47:43.069211 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:47:43.069225 systemd[1]: Detected virtualization kvm. Jan 20 01:47:43.069236 systemd[1]: Detected architecture x86-64. Jan 20 01:47:43.069248 systemd[1]: Running in initrd. Jan 20 01:47:43.069262 systemd[1]: No hostname configured, using default hostname. Jan 20 01:47:43.069275 systemd[1]: Hostname set to . Jan 20 01:47:43.069289 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Jan 20 01:47:43.069302 systemd[1]: Queued start job for default target initrd.target. Jan 20 01:47:43.069314 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:47:43.069325 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:47:43.069467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:47:43.069488 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 01:47:43.069500 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:47:43.069512 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 01:47:43.069524 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 01:47:43.069535 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:47:43.069551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:47:43.069563 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 01:47:43.069575 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:47:43.073850 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:47:43.073869 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:47:43.073881 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:47:43.073897 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:47:43.073915 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:47:43.073928 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 01:47:43.073940 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 20 01:47:43.073953 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 01:47:43.073964 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:47:43.073976 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:47:43.073988 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:47:43.074004 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:47:43.074016 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:47:43.074028 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 01:47:43.074041 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:47:43.074052 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 01:47:43.074065 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 01:47:43.074080 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 01:47:43.074092 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:47:43.074104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:47:43.074117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:47:43.074132 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 01:47:43.074144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:47:43.074156 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 01:47:43.074219 systemd-journald[320]: Collecting audit messages is enabled. 
Jan 20 01:47:43.074253 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:47:43.074266 systemd-journald[320]: Journal started Jan 20 01:47:43.074326 systemd-journald[320]: Runtime Journal (/run/log/journal/4f130ae7e7d947559385b29ad5daf1bf) is 6M, max 48.1M, 42M free. Jan 20 01:47:43.120722 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:47:43.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:43.266806 kernel: audit: type=1130 audit(1768873663.221:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:43.342769 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:47:43.460778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:47:43.552857 kernel: audit: type=1130 audit(1768873663.502:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:43.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:43.512165 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:47:43.664330 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 20 01:47:43.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:43.724988 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:47:43.820228 kernel: audit: type=1130 audit(1768873663.724:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:43.811715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:47:43.885148 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:47:43.957853 kernel: audit: type=1130 audit(1768873663.913:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:43.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:43.962037 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:47:44.065943 kernel: audit: type=1130 audit(1768873664.005:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:44.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:47:44.087177 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 01:47:44.156546 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:47:44.255189 kernel: audit: type=1130 audit(1768873664.191:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:44.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:44.295468 dracut-cmdline[352]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=62ba7e41f0b11e3efb63e9a03ee9de0b370deb0ea547dd39e8d3060b03ecf9e8 Jan 20 01:47:44.856772 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 01:47:44.940240 kernel: Bridge firewalling registered Jan 20 01:47:44.941926 systemd-modules-load[321]: Inserted module 'br_netfilter' Jan 20 01:47:44.963990 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:47:45.087274 kernel: audit: type=1130 audit(1768873664.989:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:47:44.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:45.018926 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:47:45.305403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:47:45.403177 kernel: audit: type=1130 audit(1768873665.324:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:45.403228 kernel: audit: type=1334 audit(1768873665.324:10): prog-id=6 op=LOAD Jan 20 01:47:45.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:45.324000 audit: BPF prog-id=6 op=LOAD Jan 20 01:47:45.347946 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:47:45.530019 kernel: Loading iSCSI transport class v2.0-870. Jan 20 01:47:45.963120 systemd-resolved[450]: Positive Trust Anchors: Jan 20 01:47:45.963163 systemd-resolved[450]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:47:45.963169 systemd-resolved[450]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 01:47:45.963209 systemd-resolved[450]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:47:46.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:46.140857 systemd-resolved[450]: Defaulting to hostname 'linux'. Jan 20 01:47:46.261068 kernel: audit: type=1130 audit(1768873666.200:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:46.261110 kernel: iscsi: registered transport (tcp) Jan 20 01:47:46.146124 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:47:46.205400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:47:46.440801 kernel: iscsi: registered transport (qla4xxx) Jan 20 01:47:46.440891 kernel: QLogic iSCSI HBA Driver Jan 20 01:47:46.822210 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:47:47.072232 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 20 01:47:47.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:47.193036 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:47:48.203784 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 01:47:48.333243 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 01:47:48.333287 kernel: audit: type=1130 audit(1768873668.241:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:48.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:48.303967 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 01:47:48.414572 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:47:49.018219 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:47:49.141844 kernel: audit: type=1130 audit(1768873669.034:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:49.141881 kernel: audit: type=1334 audit(1768873669.064:15): prog-id=7 op=LOAD Jan 20 01:47:49.141899 kernel: audit: type=1334 audit(1768873669.064:16): prog-id=8 op=LOAD Jan 20 01:47:49.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:47:49.064000 audit: BPF prog-id=7 op=LOAD Jan 20 01:47:49.064000 audit: BPF prog-id=8 op=LOAD Jan 20 01:47:49.099978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:47:49.534197 systemd-udevd[608]: Using default interface naming scheme 'v257'. Jan 20 01:47:49.740283 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:47:49.859978 kernel: audit: type=1130 audit(1768873669.758:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:49.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:49.802006 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 01:47:50.063861 dracut-pre-trigger[668]: rd.md=0: removing MD RAID activation Jan 20 01:47:50.319788 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:47:50.470905 kernel: audit: type=1130 audit(1768873670.344:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:50.470948 kernel: audit: type=1334 audit(1768873670.349:19): prog-id=9 op=LOAD Jan 20 01:47:50.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:50.349000 audit: BPF prog-id=9 op=LOAD Jan 20 01:47:50.399021 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 20 01:47:50.624821 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:47:50.710736 kernel: audit: type=1130 audit(1768873670.650:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:50.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:50.660946 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:47:51.027878 systemd-networkd[721]: lo: Link UP Jan 20 01:47:51.031279 systemd-networkd[721]: lo: Gained carrier Jan 20 01:47:51.194694 kernel: audit: type=1130 audit(1768873671.102:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:51.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:51.060767 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:47:51.102565 systemd[1]: Reached target network.target - Network. Jan 20 01:47:51.524706 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:47:51.698321 kernel: audit: type=1130 audit(1768873671.549:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:51.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:47:51.698095 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 01:47:52.424748 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 01:47:52.816487 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 01:47:52.910257 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 01:47:53.140232 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:47:53.335310 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 01:47:53.429417 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:47:53.439854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:47:53.440154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:47:53.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:53.526538 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:47:53.562309 kernel: audit: type=1131 audit(1768873673.526:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:53.603233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:47:53.691740 disk-uuid[776]: Primary Header is updated. Jan 20 01:47:53.691740 disk-uuid[776]: Secondary Entries is updated. Jan 20 01:47:53.691740 disk-uuid[776]: Secondary Header is updated. Jan 20 01:47:54.110387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 20 01:47:54.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:54.156531 kernel: audit: type=1130 audit(1768873674.125:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:54.293636 systemd-networkd[721]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 01:47:54.297122 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:47:54.317052 systemd-networkd[721]: eth0: Link UP Jan 20 01:47:54.323135 systemd-networkd[721]: eth0: Gained carrier Jan 20 01:47:54.323160 systemd-networkd[721]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 01:47:54.526027 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 20 01:47:54.606085 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:47:54.902536 kernel: AES CTR mode by8 optimization enabled Jan 20 01:47:55.225169 disk-uuid[777]: Warning: The kernel is still using the old partition table. Jan 20 01:47:55.225169 disk-uuid[777]: The new table will be used at the next reboot or after you Jan 20 01:47:55.225169 disk-uuid[777]: run partprobe(8) or kpartx(8) Jan 20 01:47:55.225169 disk-uuid[777]: The operation has completed successfully. Jan 20 01:47:55.336678 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 01:47:55.337284 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jan 20 01:47:55.522997 kernel: audit: type=1130 audit(1768873675.407:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:55.523074 kernel: audit: type=1131 audit(1768873675.410:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:55.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:55.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:55.413966 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 01:47:55.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:55.573555 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:47:55.647190 kernel: audit: type=1130 audit(1768873675.551:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:55.658797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:47:55.726610 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:47:55.761843 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Jan 20 01:47:55.843134 systemd-networkd[721]: eth0: Gained IPv6LL Jan 20 01:47:55.891229 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:47:56.031162 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:47:56.136944 kernel: audit: type=1130 audit(1768873676.055:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:56.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:56.220275 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871) Jan 20 01:47:56.232485 kernel: BTRFS info (device vda6): first mount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:47:56.232547 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:47:56.258481 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:47:56.258557 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:47:56.311287 kernel: BTRFS info (device vda6): last unmount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:47:56.360452 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:47:56.434558 kernel: audit: type=1130 audit(1768873676.377:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:56.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:56.405020 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 20 01:47:57.552556 ignition[890]: Ignition 2.22.0 Jan 20 01:47:57.553992 ignition[890]: Stage: fetch-offline Jan 20 01:47:57.555295 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:47:57.555316 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:47:57.555569 ignition[890]: parsed url from cmdline: "" Jan 20 01:47:57.555575 ignition[890]: no config URL provided Jan 20 01:47:57.555587 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:47:57.555605 ignition[890]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:47:57.558785 ignition[890]: op(1): [started] loading QEMU firmware config module Jan 20 01:47:57.558807 ignition[890]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 20 01:47:57.745018 ignition[890]: op(1): [finished] loading QEMU firmware config module Jan 20 01:47:58.742562 ignition[890]: parsing config with SHA512: 1ecc09f38a08314744d85aa0309e52eafa7db97619ebf279d18aeb58b4d568525d89a6f05d6302d247b2db502bcb6c5376be807831a4305f22483aef43d20b78 Jan 20 01:47:58.901486 unknown[890]: fetched base config from "system" Jan 20 01:47:58.901503 unknown[890]: fetched user config from "qemu" Jan 20 01:47:58.904296 ignition[890]: fetch-offline: fetch-offline passed Jan 20 01:47:58.915999 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:47:58.904559 ignition[890]: Ignition finished successfully Jan 20 01:47:59.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:59.037724 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jan 20 01:47:59.126152 kernel: audit: type=1130 audit(1768873679.031:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:47:59.062296 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:48:00.474009 ignition[900]: Ignition 2.22.0 Jan 20 01:48:00.474153 ignition[900]: Stage: kargs Jan 20 01:48:00.474588 ignition[900]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:48:00.474605 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:48:00.540682 ignition[900]: kargs: kargs passed Jan 20 01:48:00.540869 ignition[900]: Ignition finished successfully Jan 20 01:48:00.553995 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:48:00.662535 kernel: audit: type=1130 audit(1768873680.610:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:00.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:00.617674 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 01:48:01.231999 ignition[908]: Ignition 2.22.0 Jan 20 01:48:01.233180 ignition[908]: Stage: disks Jan 20 01:48:01.243971 ignition[908]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:48:01.243993 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:48:01.245496 ignition[908]: disks: disks passed Jan 20 01:48:01.245582 ignition[908]: Ignition finished successfully Jan 20 01:48:01.310673 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 20 01:48:01.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:01.345719 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:48:01.512508 kernel: audit: type=1130 audit(1768873681.329:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:01.394530 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:48:01.409935 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:48:01.418632 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:48:01.418722 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:48:01.437108 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:48:02.122143 systemd-fsck[919]: ROOT: clean, 15/456736 files, 38230/456704 blocks Jan 20 01:48:02.215131 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:48:02.318725 kernel: audit: type=1130 audit(1768873682.234:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:02.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:02.252992 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:48:03.304651 kernel: EXT4-fs (vda9): mounted filesystem dbcb8eb1-a16c-4a1a-8ee4-d933bd0ee436 r/w with ordered data mode. Quota mode: none. 
Jan 20 01:48:03.315325 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:48:03.323412 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:48:03.354762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:48:03.443493 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:48:03.465499 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 01:48:03.465573 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:48:03.465640 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:48:03.705768 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (928) Jan 20 01:48:03.646196 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:48:03.726088 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 01:48:03.763625 kernel: BTRFS info (device vda6): first mount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:48:03.763664 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:48:03.801423 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:48:03.801499 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:48:03.804151 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 01:48:04.118747 initrd-setup-root[952]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:48:04.140507 initrd-setup-root[959]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:48:04.187476 initrd-setup-root[966]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:48:04.229764 initrd-setup-root[973]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:48:04.984603 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:48:05.040133 kernel: audit: type=1130 audit(1768873684.995:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:04.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:05.000601 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:48:05.088144 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:48:05.195696 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:48:05.234925 kernel: BTRFS info (device vda6): last unmount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:48:05.511631 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 01:48:05.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:05.559870 kernel: audit: type=1130 audit(1768873685.529:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:48:05.638415 ignition[1041]: INFO : Ignition 2.22.0 Jan 20 01:48:05.638415 ignition[1041]: INFO : Stage: mount Jan 20 01:48:05.697092 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:48:05.697092 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:48:05.697092 ignition[1041]: INFO : mount: mount passed Jan 20 01:48:05.697092 ignition[1041]: INFO : Ignition finished successfully Jan 20 01:48:05.827788 kernel: audit: type=1130 audit(1768873685.756:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:05.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:05.751989 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:48:05.764032 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:48:05.951146 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:48:06.071172 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1054) Jan 20 01:48:06.104435 kernel: BTRFS info (device vda6): first mount of filesystem dd813f25-deee-45c4-bcdc-1fa4787873d8 Jan 20 01:48:06.104531 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:48:06.154025 kernel: BTRFS info (device vda6): turning on async discard Jan 20 01:48:06.154114 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 01:48:06.169107 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 01:48:06.998257 ignition[1072]: INFO : Ignition 2.22.0 Jan 20 01:48:06.998257 ignition[1072]: INFO : Stage: files Jan 20 01:48:07.023942 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:48:07.023942 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:48:07.023942 ignition[1072]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:48:07.023942 ignition[1072]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:48:07.023942 ignition[1072]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:48:07.140398 ignition[1072]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:48:07.140398 ignition[1072]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:48:07.140398 ignition[1072]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:48:07.140398 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:48:07.140398 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 01:48:07.043108 unknown[1072]: wrote ssh authorized keys file for user: core Jan 20 01:48:07.430018 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:48:08.010641 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 01:48:08.010641 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:48:08.077619 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 20 01:48:08.889969 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 01:48:11.845968 ignition[1072]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 20 01:48:11.845968 ignition[1072]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 01:48:11.867045 ignition[1072]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:48:11.867045 ignition[1072]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:48:11.867045 ignition[1072]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 01:48:11.867045 ignition[1072]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 01:48:11.867045 ignition[1072]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 01:48:11.867045 ignition[1072]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 20 01:48:11.867045 ignition[1072]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 01:48:11.867045 ignition[1072]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 20 01:48:12.016479 ignition[1072]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:48:12.054032 ignition[1072]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 20 01:48:12.054032 ignition[1072]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Jan 20 01:48:12.054032 ignition[1072]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:48:12.054032 ignition[1072]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:48:12.054032 ignition[1072]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:48:12.054032 ignition[1072]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:48:12.054032 ignition[1072]: INFO : files: files passed Jan 20 01:48:12.054032 ignition[1072]: INFO : Ignition finished successfully Jan 20 01:48:12.228135 kernel: audit: type=1130 audit(1768873692.163:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:12.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:12.119867 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:48:12.183150 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:48:12.275938 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 01:48:12.330241 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:48:12.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:12.364000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 20 01:48:12.330468 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 01:48:12.454910 kernel: audit: type=1130 audit(1768873692.364:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:12.454951 kernel: audit: type=1131 audit(1768873692.364:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:12.454971 initrd-setup-root-after-ignition[1101]: grep: /sysroot/oem/oem-release: No such file or directory Jan 20 01:48:12.450498 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:48:12.612971 kernel: audit: type=1130 audit(1768873692.516:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:12.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:12.613065 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:48:12.613065 initrd-setup-root-after-ignition[1104]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:48:12.518797 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 01:48:12.693410 initrd-setup-root-after-ignition[1108]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:48:12.602492 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 20 01:48:13.098147 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:48:13.217323 kernel: audit: type=1130 audit(1768873693.118:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:13.217423 kernel: audit: type=1131 audit(1768873693.118:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:13.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:13.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:13.098398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 01:48:13.126599 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 01:48:13.189863 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:48:13.331966 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:48:13.373237 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:48:13.635100 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:48:13.728022 kernel: audit: type=1130 audit(1768873693.694:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:48:13.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:13.737788 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:48:13.994530 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:48:14.006957 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:48:14.032022 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:48:14.047286 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:48:14.057972 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:48:14.060274 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:48:14.124572 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:48:14.281868 kernel: audit: type=1131 audit(1768873694.123:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:14.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:14.148100 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:48:14.152187 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:48:14.164237 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:48:14.203163 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:48:14.214786 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Jan 20 01:48:14.232182 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:48:14.238219 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:48:14.252519 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:48:14.265623 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:48:14.352281 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:48:14.384833 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:48:14.391810 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:48:14.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:14.433187 kernel: audit: type=1131 audit(1768873694.413:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:14.436576 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:48:14.453661 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:48:14.495110 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:48:14.519837 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:48:14.576879 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:48:14.584852 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:48:14.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:48:14.660148 kernel: audit: type=1131 audit(1768873694.635:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:14.657219 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:48:14.664762 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:48:14.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:14.727746 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:48:14.762713 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:48:14.780666 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:48:14.813139 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:48:14.839464 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:48:14.909434 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:48:14.909677 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:48:14.944526 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:48:14.950681 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:48:15.012435 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 01:48:15.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.012626 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 01:48:15.025072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 20 01:48:15.025703 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:48:15.058821 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:48:15.059074 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:48:15.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.231419 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 01:48:15.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.242429 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:48:15.242720 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:48:15.316552 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:48:15.335588 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:48:15.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.339713 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:48:15.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:48:15.340613 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:48:15.340806 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:48:15.349275 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:48:15.349546 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:48:15.396553 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 01:48:15.396810 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:48:15.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.591583 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:48:15.630435 ignition[1128]: INFO : Ignition 2.22.0 Jan 20 01:48:15.630435 ignition[1128]: INFO : Stage: umount Jan 20 01:48:15.650564 ignition[1128]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:48:15.650564 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 20 01:48:15.650564 ignition[1128]: INFO : umount: umount passed Jan 20 01:48:15.650564 ignition[1128]: INFO : Ignition finished successfully Jan 20 01:48:15.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.637198 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:48:15.637464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 20 01:48:15.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.690844 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:48:15.693503 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:48:15.821265 systemd[1]: Stopped target network.target - Network. Jan 20 01:48:15.831001 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:48:15.831144 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:48:15.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.884169 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:48:15.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.884314 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:48:15.905151 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 20 01:48:15.905281 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:48:15.913997 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:48:15.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:15.914116 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:48:15.914240 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:48:15.914330 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:48:15.914634 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:48:16.008000 audit: BPF prog-id=6 op=UNLOAD Jan 20 01:48:15.914768 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:48:15.961245 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:48:15.961498 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:48:16.027237 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:48:16.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.027544 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:48:16.100979 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 01:48:16.100000 audit: BPF prog-id=9 op=UNLOAD Jan 20 01:48:16.113750 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:48:16.113965 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:48:16.171499 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 20 01:48:16.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.185545 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:48:16.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.185682 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:48:16.200665 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:48:16.200764 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:48:16.218900 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:48:16.219274 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:48:16.271904 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:48:16.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.323228 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:48:16.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.323591 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:48:16.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:48:16.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.336285 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:48:16.336443 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:48:16.352001 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:48:16.352091 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:48:16.364130 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:48:16.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.364285 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:48:16.379486 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:48:16.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.379588 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:48:16.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.389841 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 20 01:48:16.390032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:48:16.424794 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:48:16.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.441984 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 01:48:16.442109 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:48:16.492556 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:48:16.492786 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:48:16.560020 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:48:16.560126 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:48:16.631486 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:48:16.633047 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:48:16.795558 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:48:16.801577 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:48:16.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:16.814166 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jan 20 01:48:16.834183 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:48:16.923303 systemd[1]: Switching root. Jan 20 01:48:17.025989 systemd-journald[320]: Journal stopped Jan 20 01:48:23.189636 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). Jan 20 01:48:23.189763 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:48:23.189791 kernel: SELinux: policy capability open_perms=1 Jan 20 01:48:23.189826 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:48:23.189859 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:48:23.189887 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:48:23.189915 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:48:23.189935 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:48:23.189954 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:48:23.189974 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 01:48:23.198054 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 20 01:48:23.198099 kernel: audit: type=1403 audit(1768873697.825:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:48:23.198133 systemd[1]: Successfully loaded SELinux policy in 290.230ms. Jan 20 01:48:23.198168 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 30.318ms. Jan 20 01:48:23.198192 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 01:48:23.198216 systemd[1]: Detected virtualization kvm. Jan 20 01:48:23.198236 systemd[1]: Detected architecture x86-64. Jan 20 01:48:23.198260 systemd[1]: Detected first boot. 
Jan 20 01:48:23.198282 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 01:48:23.198302 kernel: audit: type=1334 audit(1768873698.147:82): prog-id=10 op=LOAD Jan 20 01:48:23.208876 kernel: audit: type=1334 audit(1768873698.147:83): prog-id=10 op=UNLOAD Jan 20 01:48:23.208925 kernel: audit: type=1334 audit(1768873698.147:84): prog-id=11 op=LOAD Jan 20 01:48:23.208943 kernel: audit: type=1334 audit(1768873698.147:85): prog-id=11 op=UNLOAD Jan 20 01:48:23.208979 zram_generator::config[1173]: No configuration found. Jan 20 01:48:23.209042 kernel: Guest personality initialized and is inactive Jan 20 01:48:23.209061 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 01:48:23.209078 kernel: Initialized host personality Jan 20 01:48:23.209095 kernel: NET: Registered PF_VSOCK protocol family Jan 20 01:48:23.209116 systemd[1]: Populated /etc with preset unit settings. Jan 20 01:48:23.209137 kernel: audit: type=1334 audit(1768873701.137:86): prog-id=12 op=LOAD Jan 20 01:48:23.209153 kernel: audit: type=1334 audit(1768873701.137:87): prog-id=3 op=UNLOAD Jan 20 01:48:23.209170 kernel: audit: type=1334 audit(1768873701.137:88): prog-id=13 op=LOAD Jan 20 01:48:23.209186 kernel: audit: type=1334 audit(1768873701.137:89): prog-id=14 op=LOAD Jan 20 01:48:23.209203 kernel: audit: type=1334 audit(1768873701.137:90): prog-id=4 op=UNLOAD Jan 20 01:48:23.209221 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:48:23.209242 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:48:23.209261 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 01:48:23.209287 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:48:23.209306 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:48:23.209324 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Jan 20 01:48:23.209395 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:48:23.209468 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:48:23.209488 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:48:23.209506 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:48:23.209524 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 01:48:23.209543 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:48:23.209562 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:48:23.209581 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:48:23.209602 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:48:23.209622 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:48:23.209640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:48:23.209658 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 01:48:23.209676 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:48:23.209695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:48:23.211601 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:48:23.211633 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:48:23.211651 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:48:23.211670 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Jan 20 01:48:23.211688 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:48:23.211706 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:48:23.211728 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 01:48:23.211749 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:48:23.211825 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:48:23.211849 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:48:23.211869 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:48:23.211892 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 01:48:23.211913 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 01:48:23.211933 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 01:48:23.211954 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:48:23.214091 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 01:48:23.214129 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 01:48:23.214154 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:48:23.214585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:48:23.214618 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:48:23.214642 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:48:23.214662 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:48:23.214692 systemd[1]: Mounting media.mount - External Media Directory... 
Jan 20 01:48:23.214745 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:48:23.214769 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 01:48:23.214793 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:48:23.214813 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:48:23.214836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:48:23.214857 systemd[1]: Reached target machines.target - Containers. Jan 20 01:48:23.214928 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:48:23.214951 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:48:23.214973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:48:23.215026 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:48:23.215045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:48:23.215067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:48:23.215090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:48:23.215108 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:48:23.215126 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:48:23.215143 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:48:23.215160 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 20 01:48:23.215177 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:48:23.215195 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:48:23.215215 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:48:23.215234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 01:48:23.215251 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 01:48:23.215269 kernel: audit: type=1334 audit(1768873702.830:103): prog-id=17 op=LOAD Jan 20 01:48:23.215289 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:48:23.215306 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:48:23.215323 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:48:23.220422 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:48:23.220492 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 01:48:23.220563 systemd-journald[1259]: Collecting audit messages is enabled. 
Jan 20 01:48:23.220606 kernel: fuse: init (API version 7.41) Jan 20 01:48:23.220625 kernel: audit: type=1305 audit(1768873703.169:104): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 01:48:23.220645 kernel: audit: type=1300 audit(1768873703.169:104): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd1e0a0570 a2=4000 a3=0 items=0 ppid=1 pid=1259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:23.220662 kernel: audit: type=1327 audit(1768873703.169:104): proctitle="/usr/lib/systemd/systemd-journald" Jan 20 01:48:23.220681 systemd-journald[1259]: Journal started Jan 20 01:48:23.227237 systemd-journald[1259]: Runtime Journal (/run/log/journal/4f130ae7e7d947559385b29ad5daf1bf) is 6M, max 48.1M, 42M free. Jan 20 01:48:23.227323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:48:21.718000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 01:48:22.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:22.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:48:22.799000 audit: BPF prog-id=14 op=UNLOAD Jan 20 01:48:22.799000 audit: BPF prog-id=13 op=UNLOAD Jan 20 01:48:22.804000 audit: BPF prog-id=15 op=LOAD Jan 20 01:48:22.814000 audit: BPF prog-id=16 op=LOAD Jan 20 01:48:22.830000 audit: BPF prog-id=17 op=LOAD Jan 20 01:48:23.169000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 01:48:23.169000 audit[1259]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd1e0a0570 a2=4000 a3=0 items=0 ppid=1 pid=1259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:48:23.169000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 01:48:21.104262 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:48:21.138890 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 01:48:21.145670 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:48:21.146498 systemd[1]: systemd-journald.service: Consumed 2.387s CPU time. Jan 20 01:48:23.292443 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:48:23.319415 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:48:23.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:23.351238 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 20 01:48:23.373708 kernel: audit: type=1130 audit(1768873703.335:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:23.404916 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:48:23.426297 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:48:23.442309 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:48:23.471814 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:48:23.489548 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:48:23.499866 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:48:23.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:23.530412 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:48:23.584736 kernel: audit: type=1130 audit(1768873703.527:106): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:23.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:48:23.585614 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:48:23.585917 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jan 20 01:48:23.609374 kernel: audit: type=1130 audit(1768873703.583:107): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.611183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:48:23.611627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:48:23.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.672880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:48:23.673256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:48:23.679099 kernel: audit: type=1130 audit(1768873703.609:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.679171 kernel: audit: type=1131 audit(1768873703.609:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.679196 kernel: audit: type=1130 audit(1768873703.664:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.892127 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:48:23.892698 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:48:23.905879 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:48:23.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.954833 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 20 01:48:23.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.963520 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 20 01:48:23.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:23.994201 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 20 01:48:24.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.018499 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:48:24.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.093522 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 20 01:48:24.101437 kernel: ACPI: bus type drm_connector registered
Jan 20 01:48:24.122058 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 20 01:48:24.153167 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 20 01:48:24.171460 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 20 01:48:24.171530 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 01:48:24.178050 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 20 01:48:24.206265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:48:24.206583 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 01:48:24.240623 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 20 01:48:24.259325 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 20 01:48:24.265557 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 01:48:24.269755 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 20 01:48:24.294675 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 01:48:24.300325 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:48:24.318569 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 20 01:48:24.349549 systemd-journald[1259]: Time spent on flushing to /var/log/journal/4f130ae7e7d947559385b29ad5daf1bf is 165.030ms for 1220 entries.
Jan 20 01:48:24.349549 systemd-journald[1259]: System Journal (/var/log/journal/4f130ae7e7d947559385b29ad5daf1bf) is 8M, max 163.5M, 155.5M free.
Jan 20 01:48:24.598394 systemd-journald[1259]: Received client request to flush runtime journal.
Jan 20 01:48:24.598497 kernel: loop1: detected capacity change from 0 to 111544
Jan 20 01:48:24.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.368606 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 20 01:48:24.395317 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 01:48:24.395619 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 01:48:24.409860 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 01:48:24.410211 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 20 01:48:24.427723 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 20 01:48:24.488654 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 20 01:48:24.518715 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 20 01:48:24.572085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:48:24.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.620804 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 20 01:48:24.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.640911 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 20 01:48:24.690987 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 20 01:48:24.730595 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 20 01:48:24.803783 kernel: loop2: detected capacity change from 0 to 219144
Jan 20 01:48:24.805107 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 20 01:48:24.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:24.844000 audit: BPF prog-id=18 op=LOAD
Jan 20 01:48:24.844000 audit: BPF prog-id=19 op=LOAD
Jan 20 01:48:24.844000 audit: BPF prog-id=20 op=LOAD
Jan 20 01:48:24.851449 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 20 01:48:24.880000 audit: BPF prog-id=21 op=LOAD
Jan 20 01:48:24.890182 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:48:24.923563 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:48:24.960000 audit: BPF prog-id=22 op=LOAD
Jan 20 01:48:24.960000 audit: BPF prog-id=23 op=LOAD
Jan 20 01:48:24.960000 audit: BPF prog-id=24 op=LOAD
Jan 20 01:48:24.972621 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 20 01:48:24.989000 audit: BPF prog-id=25 op=LOAD
Jan 20 01:48:24.989000 audit: BPF prog-id=26 op=LOAD
Jan 20 01:48:24.989000 audit: BPF prog-id=27 op=LOAD
Jan 20 01:48:24.998588 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 20 01:48:25.019226 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 20 01:48:25.047442 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 20 01:48:25.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:25.077795 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Jan 20 01:48:25.077820 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Jan 20 01:48:25.104525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:48:25.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:25.156051 kernel: loop3: detected capacity change from 0 to 119256
Jan 20 01:48:25.285498 systemd-nsresourced[1315]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 20 01:48:25.289128 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 20 01:48:25.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:25.342117 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 20 01:48:25.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:25.365074 kernel: loop4: detected capacity change from 0 to 111544
Jan 20 01:48:27.009862 kernel: loop5: detected capacity change from 0 to 219144
Jan 20 01:48:27.337484 kernel: loop6: detected capacity change from 0 to 119256
Jan 20 01:48:27.463897 (sd-merge)[1327]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Jan 20 01:48:27.618486 (sd-merge)[1327]: Merged extensions into '/usr'.
Jan 20 01:48:27.657915 systemd-oomd[1310]: No swap; memory pressure usage will be degraded
Jan 20 01:48:27.660459 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 20 01:48:27.721549 systemd-resolved[1311]: Positive Trust Anchors:
Jan 20 01:48:27.721573 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:48:27.721579 systemd-resolved[1311]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 20 01:48:27.721620 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:48:27.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:27.750461 systemd[1]: Reload requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 20 01:48:27.750513 systemd[1]: Reloading...
Jan 20 01:48:27.811925 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Jan 20 01:48:28.191304 zram_generator::config[1363]: No configuration found.
Jan 20 01:48:29.695588 systemd[1]: Reloading finished in 1938 ms.
Jan 20 01:48:29.774916 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:48:29.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:29.821529 kernel: kauditd_printk_skb: 33 callbacks suppressed
Jan 20 01:48:29.821652 kernel: audit: type=1130 audit(1768873709.804:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:29.818690 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 20 01:48:29.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:29.850183 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 20 01:48:29.866677 kernel: audit: type=1130 audit(1768873709.847:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:29.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:29.909210 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:48:29.912556 kernel: audit: type=1130 audit(1768873709.883:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:29.942630 systemd[1]: Starting ensure-sysext.service...
Jan 20 01:48:29.974912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 01:48:29.998000 audit: BPF prog-id=8 op=UNLOAD
Jan 20 01:48:29.998000 audit: BPF prog-id=7 op=UNLOAD
Jan 20 01:48:30.001000 audit: BPF prog-id=28 op=LOAD
Jan 20 01:48:30.001000 audit: BPF prog-id=29 op=LOAD
Jan 20 01:48:30.008961 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:48:30.014537 kernel: audit: type=1334 audit(1768873709.998:147): prog-id=8 op=UNLOAD
Jan 20 01:48:30.014602 kernel: audit: type=1334 audit(1768873709.998:148): prog-id=7 op=UNLOAD
Jan 20 01:48:30.014634 kernel: audit: type=1334 audit(1768873710.001:149): prog-id=28 op=LOAD
Jan 20 01:48:30.014658 kernel: audit: type=1334 audit(1768873710.001:150): prog-id=29 op=LOAD
Jan 20 01:48:30.088000 audit: BPF prog-id=30 op=LOAD
Jan 20 01:48:30.108902 kernel: audit: type=1334 audit(1768873710.088:151): prog-id=30 op=LOAD
Jan 20 01:48:30.088000 audit: BPF prog-id=15 op=UNLOAD
Jan 20 01:48:30.137225 kernel: audit: type=1334 audit(1768873710.088:152): prog-id=15 op=UNLOAD
Jan 20 01:48:30.088000 audit: BPF prog-id=31 op=LOAD
Jan 20 01:48:30.161821 kernel: audit: type=1334 audit(1768873710.088:153): prog-id=31 op=LOAD
Jan 20 01:48:30.088000 audit: BPF prog-id=32 op=LOAD
Jan 20 01:48:30.088000 audit: BPF prog-id=16 op=UNLOAD
Jan 20 01:48:30.088000 audit: BPF prog-id=17 op=UNLOAD
Jan 20 01:48:30.209000 audit: BPF prog-id=33 op=LOAD
Jan 20 01:48:30.211000 audit: BPF prog-id=21 op=UNLOAD
Jan 20 01:48:30.225000 audit: BPF prog-id=34 op=LOAD
Jan 20 01:48:30.233000 audit: BPF prog-id=25 op=UNLOAD
Jan 20 01:48:30.233000 audit: BPF prog-id=35 op=LOAD
Jan 20 01:48:30.233000 audit: BPF prog-id=36 op=LOAD
Jan 20 01:48:30.239000 audit: BPF prog-id=26 op=UNLOAD
Jan 20 01:48:30.239000 audit: BPF prog-id=27 op=UNLOAD
Jan 20 01:48:30.271000 audit: BPF prog-id=37 op=LOAD
Jan 20 01:48:30.280000 audit: BPF prog-id=18 op=UNLOAD
Jan 20 01:48:30.282000 audit: BPF prog-id=38 op=LOAD
Jan 20 01:48:30.282000 audit: BPF prog-id=39 op=LOAD
Jan 20 01:48:30.282000 audit: BPF prog-id=19 op=UNLOAD
Jan 20 01:48:30.282000 audit: BPF prog-id=20 op=UNLOAD
Jan 20 01:48:30.337000 audit: BPF prog-id=40 op=LOAD
Jan 20 01:48:30.337000 audit: BPF prog-id=22 op=UNLOAD
Jan 20 01:48:30.337000 audit: BPF prog-id=41 op=LOAD
Jan 20 01:48:30.337000 audit: BPF prog-id=42 op=LOAD
Jan 20 01:48:30.337000 audit: BPF prog-id=23 op=UNLOAD
Jan 20 01:48:30.337000 audit: BPF prog-id=24 op=UNLOAD
Jan 20 01:48:30.622286 systemd[1]: Reload requested from client PID 1401 ('systemctl') (unit ensure-sysext.service)...
Jan 20 01:48:30.627493 systemd[1]: Reloading...
Jan 20 01:48:30.640008 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 20 01:48:30.646946 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 20 01:48:30.647737 systemd-tmpfiles[1402]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 20 01:48:30.714666 systemd-tmpfiles[1402]: ACLs are not supported, ignoring.
Jan 20 01:48:30.714817 systemd-tmpfiles[1402]: ACLs are not supported, ignoring.
Jan 20 01:48:30.723678 systemd-udevd[1403]: Using default interface naming scheme 'v257'.
Jan 20 01:48:30.823707 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 01:48:30.823727 systemd-tmpfiles[1402]: Skipping /boot
Jan 20 01:48:30.948796 systemd-tmpfiles[1402]: Detected autofs mount point /boot during canonicalization of boot.
Jan 20 01:48:30.950536 systemd-tmpfiles[1402]: Skipping /boot
Jan 20 01:48:31.381157 zram_generator::config[1438]: No configuration found.
Jan 20 01:48:32.181418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 20 01:48:32.206397 kernel: ACPI: button: Power Button [PWRF]
Jan 20 01:48:32.218667 kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 01:48:32.290640 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jan 20 01:48:32.291461 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 20 01:48:32.291911 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 01:48:32.791254 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 01:48:32.818395 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 20 01:48:32.818514 systemd[1]: Reloading finished in 2190 ms.
Jan 20 01:48:32.876455 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:48:32.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:32.905729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:48:32.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:32.934000 audit: BPF prog-id=43 op=LOAD
Jan 20 01:48:32.969000 audit: BPF prog-id=33 op=UNLOAD
Jan 20 01:48:32.971000 audit: BPF prog-id=44 op=LOAD
Jan 20 01:48:32.971000 audit: BPF prog-id=34 op=UNLOAD
Jan 20 01:48:32.971000 audit: BPF prog-id=45 op=LOAD
Jan 20 01:48:32.971000 audit: BPF prog-id=46 op=LOAD
Jan 20 01:48:32.972000 audit: BPF prog-id=35 op=UNLOAD
Jan 20 01:48:32.972000 audit: BPF prog-id=36 op=UNLOAD
Jan 20 01:48:32.981000 audit: BPF prog-id=47 op=LOAD
Jan 20 01:48:32.981000 audit: BPF prog-id=37 op=UNLOAD
Jan 20 01:48:32.981000 audit: BPF prog-id=48 op=LOAD
Jan 20 01:48:32.981000 audit: BPF prog-id=49 op=LOAD
Jan 20 01:48:32.982000 audit: BPF prog-id=38 op=UNLOAD
Jan 20 01:48:32.983000 audit: BPF prog-id=39 op=UNLOAD
Jan 20 01:48:32.984000 audit: BPF prog-id=50 op=LOAD
Jan 20 01:48:32.984000 audit: BPF prog-id=51 op=LOAD
Jan 20 01:48:32.984000 audit: BPF prog-id=28 op=UNLOAD
Jan 20 01:48:32.984000 audit: BPF prog-id=29 op=UNLOAD
Jan 20 01:48:33.007000 audit: BPF prog-id=52 op=LOAD
Jan 20 01:48:33.007000 audit: BPF prog-id=30 op=UNLOAD
Jan 20 01:48:33.013000 audit: BPF prog-id=53 op=LOAD
Jan 20 01:48:33.019000 audit: BPF prog-id=54 op=LOAD
Jan 20 01:48:33.019000 audit: BPF prog-id=31 op=UNLOAD
Jan 20 01:48:33.019000 audit: BPF prog-id=32 op=UNLOAD
Jan 20 01:48:33.037000 audit: BPF prog-id=55 op=LOAD
Jan 20 01:48:33.039000 audit: BPF prog-id=40 op=UNLOAD
Jan 20 01:48:33.046000 audit: BPF prog-id=56 op=LOAD
Jan 20 01:48:33.051000 audit: BPF prog-id=57 op=LOAD
Jan 20 01:48:33.053000 audit: BPF prog-id=41 op=UNLOAD
Jan 20 01:48:33.053000 audit: BPF prog-id=42 op=UNLOAD
Jan 20 01:48:33.801396 systemd[1]: Finished ensure-sysext.service.
Jan 20 01:48:33.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:33.850539 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:48:33.861111 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 01:48:33.898572 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 20 01:48:33.930489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 20 01:48:33.939415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 20 01:48:34.096719 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 20 01:48:34.182238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 20 01:48:34.219234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 20 01:48:34.233830 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 01:48:34.237223 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 20 01:48:34.273651 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 20 01:48:34.297411 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 20 01:48:34.335322 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 20 01:48:34.348630 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 20 01:48:34.393000 audit: BPF prog-id=58 op=LOAD
Jan 20 01:48:34.415887 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 01:48:34.434000 audit: BPF prog-id=59 op=LOAD
Jan 20 01:48:34.440558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 20 01:48:34.516586 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 20 01:48:34.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.622546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:48:34.622724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 20 01:48:34.638606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 20 01:48:34.638992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 20 01:48:34.640735 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 01:48:34.641591 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 20 01:48:34.643238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 01:48:34.648445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 20 01:48:34.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.690508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 20 01:48:34.793893 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 20 01:48:34.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.807694 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 20 01:48:34.883503 kernel: kauditd_printk_skb: 66 callbacks suppressed
Jan 20 01:48:34.883725 kernel: audit: type=1130 audit(1768873714.837:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.969566 kernel: audit: type=1127 audit(1768873714.937:221): pid=1549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.937000 audit[1549]: SYSTEM_BOOT pid=1549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:34.945723 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 01:48:34.945814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 01:48:35.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:35.015596 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 20 01:48:35.118269 kernel: audit: type=1130 audit(1768873715.044:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 01:48:35.263894 kernel: audit: type=1305 audit(1768873715.226:223): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 20 01:48:35.264760 kernel: audit: type=1300 audit(1768873715.226:223): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff598e0c70 a2=420 a3=0 items=0 ppid=1525 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:48:35.226000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 20 01:48:35.226000 audit[1571]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff598e0c70 a2=420 a3=0 items=0 ppid=1525 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 01:48:35.265421 augenrules[1571]: No rules
Jan 20 01:48:35.226000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 01:48:35.317581 kernel: audit: type=1327 audit(1768873715.226:223): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 20 01:48:35.319637 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:48:35.320395 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 01:48:35.418501 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 20 01:48:35.564870 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 20 01:48:35.580883 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 20 01:48:36.310707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:48:36.710955 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 20 01:48:36.732937 systemd[1]: Reached target time-set.target - System Time Set.
Jan 20 01:48:36.780064 systemd-networkd[1543]: lo: Link UP
Jan 20 01:48:36.780084 systemd-networkd[1543]: lo: Gained carrier
Jan 20 01:48:36.785438 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 01:48:36.802086 systemd[1]: Reached target network.target - Network.
Jan 20 01:48:36.812916 systemd-networkd[1543]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 01:48:36.812932 systemd-networkd[1543]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 01:48:36.815239 systemd-networkd[1543]: eth0: Link UP
Jan 20 01:48:36.816062 systemd-networkd[1543]: eth0: Gained carrier
Jan 20 01:48:36.816087 systemd-networkd[1543]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 20 01:48:36.825737 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 20 01:48:36.899988 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:48:37.835315 systemd-networkd[1543]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 20 01:48:37.857276 systemd-timesyncd[1548]: Network configuration changed, trying to establish connection. Jan 20 01:48:37.866226 systemd-timesyncd[1548]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 20 01:48:37.866688 systemd-timesyncd[1548]: Initial clock synchronization to Tue 2026-01-20 01:48:38.044637 UTC. Jan 20 01:48:37.912454 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 01:48:38.867240 systemd-networkd[1543]: eth0: Gained IPv6LL Jan 20 01:48:39.495674 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1404960309 wd_nsec: 1404959579 Jan 20 01:48:39.549574 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 01:48:39.567635 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 01:48:40.150003 kernel: kvm_amd: TSC scaling supported Jan 20 01:48:40.150591 kernel: kvm_amd: Nested Virtualization enabled Jan 20 01:48:40.150698 kernel: kvm_amd: Nested Paging enabled Jan 20 01:48:40.166280 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 20 01:48:40.166490 kernel: kvm_amd: PMU virtualization is disabled Jan 20 01:48:44.959461 ldconfig[1535]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 01:48:45.013059 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 01:48:45.034933 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 01:48:45.351590 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 01:48:45.385656 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 20 01:48:45.397620 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 01:48:45.412155 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 01:48:45.435197 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 01:48:45.452033 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 01:48:45.472395 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 01:48:45.514083 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 01:48:45.562220 kernel: EDAC MC: Ver: 3.0.0 Jan 20 01:48:45.536915 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 01:48:45.565908 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 01:48:45.600264 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 01:48:45.600533 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:48:45.608527 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:48:45.636522 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 01:48:45.662854 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 01:48:45.710236 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 01:48:45.737027 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 01:48:45.757768 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 01:48:45.833807 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 01:48:45.860916 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jan 20 01:48:45.898087 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 01:48:45.938049 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:48:45.953239 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:48:45.960307 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:48:45.961190 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:48:45.974476 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 01:48:46.003232 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 20 01:48:46.035645 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 01:48:46.081269 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 01:48:46.121964 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 01:48:46.176060 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 01:48:46.242808 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 01:48:46.343428 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 01:48:46.501931 jq[1599]: false Jan 20 01:48:46.603804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:48:46.730596 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 01:48:46.736174 extend-filesystems[1600]: Found /dev/vda6 Jan 20 01:48:46.759781 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 20 01:48:46.762775 extend-filesystems[1600]: Found /dev/vda9 Jan 20 01:48:46.770045 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing passwd entry cache Jan 20 01:48:46.769120 oslogin_cache_refresh[1601]: Refreshing passwd entry cache Jan 20 01:48:46.780252 extend-filesystems[1600]: Checking size of /dev/vda9 Jan 20 01:48:46.834918 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 01:48:46.863766 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting users, quitting Jan 20 01:48:46.863766 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 01:48:46.863766 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing group entry cache Jan 20 01:48:46.863550 oslogin_cache_refresh[1601]: Failure getting users, quitting Jan 20 01:48:46.863579 oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 01:48:46.863658 oslogin_cache_refresh[1601]: Refreshing group entry cache Jan 20 01:48:46.872762 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 01:48:46.913865 extend-filesystems[1600]: Resized partition /dev/vda9 Jan 20 01:48:47.049622 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Jan 20 01:48:46.940302 oslogin_cache_refresh[1601]: Failure getting groups, quitting Jan 20 01:48:46.956000 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 01:48:47.050314 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting groups, quitting Jan 20 01:48:47.050314 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 20 01:48:47.055987 extend-filesystems[1621]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 01:48:46.940329 oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 01:48:47.185209 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 01:48:47.204516 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:48:47.207199 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:48:47.209874 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 01:48:47.227739 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:48:47.318395 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:48:47.335091 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:48:47.335670 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 01:48:47.336131 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 01:48:47.362875 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 01:48:47.388480 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:48:47.388944 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:48:47.418300 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:48:47.418783 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 01:48:47.558814 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:48:47.764105 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Jan 20 01:48:47.752268 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jan 20 01:48:47.850044 jq[1629]: true Jan 20 01:48:47.752839 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 20 01:48:47.850678 jq[1656]: true Jan 20 01:48:47.855427 extend-filesystems[1621]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 01:48:47.855427 extend-filesystems[1621]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 20 01:48:47.855427 extend-filesystems[1621]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Jan 20 01:48:47.919743 extend-filesystems[1600]: Resized filesystem in /dev/vda9 Jan 20 01:48:47.874284 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:48:47.929671 update_engine[1626]: I20260120 01:48:47.863982 1626 main.cc:92] Flatcar Update Engine starting Jan 20 01:48:47.874760 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 01:48:47.956809 tar[1640]: linux-amd64/LICENSE Jan 20 01:48:47.956809 tar[1640]: linux-amd64/helm Jan 20 01:48:47.953630 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 01:48:48.569844 dbus-daemon[1597]: [system] SELinux support is enabled Jan 20 01:48:48.576648 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 01:48:48.597910 systemd-logind[1624]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 01:48:48.603207 systemd-logind[1624]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:48:48.604191 systemd-logind[1624]: New seat seat0. Jan 20 01:48:48.616196 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 20 01:48:48.744461 update_engine[1626]: I20260120 01:48:48.648924 1626 update_check_scheduler.cc:74] Next update check in 10m55s Jan 20 01:48:48.634833 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:48:48.634881 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:48:48.653944 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:48:48.654016 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 01:48:48.834755 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:48:49.282137 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 01:48:49.362165 sshd_keygen[1637]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:48:49.446479 bash[1682]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:48:49.457780 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 01:48:49.478011 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 01:48:50.477191 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 01:48:50.567225 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:48:50.735002 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:48:50.758660 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:34738.service - OpenSSH per-connection server daemon (10.0.0.1:34738). Jan 20 01:48:50.959741 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:48:50.967586 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 20 01:48:51.005853 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:48:51.054405 locksmithd[1683]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:48:51.121456 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:48:51.164135 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:48:51.205860 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 01:48:51.226160 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:48:51.562268 containerd[1641]: time="2026-01-20T01:48:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 01:48:51.571051 containerd[1641]: time="2026-01-20T01:48:51.564964150Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 01:48:51.571108 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 34738 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 01:48:51.579244 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:51.613080 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 20 01:48:51.656188 containerd[1641]: time="2026-01-20T01:48:51.652463257Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.505µs" Jan 20 01:48:51.656188 containerd[1641]: time="2026-01-20T01:48:51.652518023Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 01:48:51.656188 containerd[1641]: time="2026-01-20T01:48:51.652602511Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 01:48:51.656188 containerd[1641]: time="2026-01-20T01:48:51.652623643Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 01:48:51.659688 containerd[1641]: time="2026-01-20T01:48:51.656641284Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 01:48:51.659688 containerd[1641]: time="2026-01-20T01:48:51.656701692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:48:51.663874 containerd[1641]: time="2026-01-20T01:48:51.659893460Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 01:48:51.663874 containerd[1641]: time="2026-01-20T01:48:51.659917609Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:48:51.663874 containerd[1641]: time="2026-01-20T01:48:51.660262311Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 01:48:51.663874 containerd[1641]: time="2026-01-20T01:48:51.660283605Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:48:51.663874 containerd[1641]: time="2026-01-20T01:48:51.660299547Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 01:48:51.663874 containerd[1641]: time="2026-01-20T01:48:51.660311295Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 01:48:51.663874 containerd[1641]: time="2026-01-20T01:48:51.660623771Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 01:48:51.663874 containerd[1641]: time="2026-01-20T01:48:51.660648212Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 01:48:51.663192 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Jan 20 01:48:51.800641 containerd[1641]: time="2026-01-20T01:48:51.800574257Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 01:48:51.801590 containerd[1641]: time="2026-01-20T01:48:51.801555306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:48:51.805308 containerd[1641]: time="2026-01-20T01:48:51.805224945Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 01:48:51.805969 containerd[1641]: time="2026-01-20T01:48:51.805943608Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 01:48:51.806307 containerd[1641]: time="2026-01-20T01:48:51.806278675Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 01:48:51.810279 containerd[1641]: time="2026-01-20T01:48:51.810194355Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 01:48:51.817910 containerd[1641]: time="2026-01-20T01:48:51.817661412Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:48:51.862454 systemd-logind[1624]: New session 1 of user core. Jan 20 01:48:51.921018 containerd[1641]: time="2026-01-20T01:48:51.916264218Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 01:48:51.927183 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 20 01:48:51.948783 containerd[1641]: time="2026-01-20T01:48:51.947510662Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 01:48:51.949209 containerd[1641]: time="2026-01-20T01:48:51.949128091Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 01:48:51.949209 containerd[1641]: time="2026-01-20T01:48:51.949167559Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 01:48:51.949209 containerd[1641]: time="2026-01-20T01:48:51.949198317Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 01:48:51.949307 containerd[1641]: time="2026-01-20T01:48:51.949231991Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 01:48:51.949307 containerd[1641]: time="2026-01-20T01:48:51.949253355Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 01:48:51.949307 containerd[1641]: time="2026-01-20T01:48:51.949297500Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949318341Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949414536Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949437056Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 01:48:51.961577 containerd[1641]: 
time="2026-01-20T01:48:51.949452405Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949468367Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949485114Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949701877Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949730281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949753717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949767868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949795357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949810012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949831194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949877532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 01:48:51.961577 containerd[1641]: time="2026-01-20T01:48:51.949898614Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 01:48:51.962480 containerd[1641]: time="2026-01-20T01:48:51.949913993Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 01:48:51.962480 containerd[1641]: time="2026-01-20T01:48:51.949927702Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 01:48:51.962480 containerd[1641]: time="2026-01-20T01:48:51.950030587Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 01:48:51.962480 containerd[1641]: time="2026-01-20T01:48:51.950152099Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 01:48:51.962480 containerd[1641]: time="2026-01-20T01:48:51.950200509Z" level=info msg="Start snapshots syncer" Jan 20 01:48:51.962480 containerd[1641]: time="2026-01-20T01:48:51.950278691Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 01:48:51.962665 containerd[1641]: time="2026-01-20T01:48:51.952987114Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 01:48:51.962665 containerd[1641]: time="2026-01-20T01:48:51.953057702Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 01:48:51.963755 containerd[1641]: 
time="2026-01-20T01:48:51.953199793Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953438634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953468677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953486450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953502654Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953517701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953550430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953565868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953583420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953599604Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953646123Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:48:51.963755 containerd[1641]: 
time="2026-01-20T01:48:51.953669306Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 01:48:51.963755 containerd[1641]: time="2026-01-20T01:48:51.953684967Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.953698164Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.953709670Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.953726557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.953774635Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.953804147Z" level=info msg="runtime interface created" Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.953815290Z" level=info msg="created NRI interface" Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.953836483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.953855060Z" level=info msg="Connect containerd service" Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.953887116Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:48:51.964142 containerd[1641]: time="2026-01-20T01:48:51.956698765Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:48:52.016899 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:48:52.436513 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:48:52.661283 systemd-logind[1624]: New session c1 of user core. Jan 20 01:48:52.746063 tar[1640]: linux-amd64/README.md Jan 20 01:48:52.914451 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085577117Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085670028Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085701492Z" level=info msg="Start subscribing containerd event" Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085733417Z" level=info msg="Start recovering state" Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085901164Z" level=info msg="Start event monitor" Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085921774Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085957779Z" level=info msg="Start streaming server" Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085969898Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085981977Z" level=info msg="runtime interface starting up..." Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.085989131Z" level=info msg="starting plugins..." 
Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.086007763Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 01:48:53.106378 containerd[1641]: time="2026-01-20T01:48:53.093903233Z" level=info msg="containerd successfully booted in 1.532337s" Jan 20 01:48:53.086587 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:48:53.421736 systemd[1722]: Queued start job for default target default.target. Jan 20 01:48:53.471441 systemd[1722]: Created slice app.slice - User Application Slice. Jan 20 01:48:53.471511 systemd[1722]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 01:48:53.471537 systemd[1722]: Reached target paths.target - Paths. Jan 20 01:48:53.471634 systemd[1722]: Reached target timers.target - Timers. Jan 20 01:48:53.482722 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:48:53.487697 systemd[1722]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 01:48:53.551025 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:48:53.552175 systemd[1722]: Reached target sockets.target - Sockets. Jan 20 01:48:53.562168 systemd[1722]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 01:48:53.562499 systemd[1722]: Reached target basic.target - Basic System. Jan 20 01:48:53.562931 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:48:53.563189 systemd[1722]: Reached target default.target - Main User Target. Jan 20 01:48:53.563258 systemd[1722]: Startup finished in 799ms. Jan 20 01:48:53.592888 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:48:53.760739 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:34784.service - OpenSSH per-connection server daemon (10.0.0.1:34784). 
Jan 20 01:48:53.962736 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 34784 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 01:48:53.968331 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:54.000510 systemd-logind[1624]: New session 2 of user core. Jan 20 01:48:54.019131 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 01:48:54.112489 sshd[1754]: Connection closed by 10.0.0.1 port 34784 Jan 20 01:48:54.118620 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:54.157293 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:34784.service: Deactivated successfully. Jan 20 01:48:54.164193 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 01:48:54.173220 systemd-logind[1624]: Session 2 logged out. Waiting for processes to exit. Jan 20 01:48:54.181611 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:34806.service - OpenSSH per-connection server daemon (10.0.0.1:34806). Jan 20 01:48:54.202549 systemd-logind[1624]: Removed session 2. Jan 20 01:48:54.364795 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 34806 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 01:48:54.369833 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:48:54.428698 systemd-logind[1624]: New session 3 of user core. Jan 20 01:48:54.456245 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:48:54.567462 sshd[1763]: Connection closed by 10.0.0.1 port 34806 Jan 20 01:48:54.572579 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jan 20 01:48:54.603600 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:34806.service: Deactivated successfully. Jan 20 01:48:54.628920 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 01:48:54.644572 systemd-logind[1624]: Session 3 logged out. Waiting for processes to exit. 
Jan 20 01:48:54.658535 systemd-logind[1624]: Removed session 3. Jan 20 01:48:55.101739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:48:55.121723 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:48:55.144505 systemd[1]: Startup finished in 24.003s (kernel) + 37.948s (initrd) + 37.605s (userspace) = 1min 39.557s. Jan 20 01:48:55.168171 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:49:01.192021 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3170215501 wd_nsec: 3170213967 Jan 20 01:49:05.604437 kubelet[1772]: E0120 01:49:05.599961 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:49:05.611827 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:33950.service - OpenSSH per-connection server daemon (10.0.0.1:33950). Jan 20 01:49:05.620119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:49:05.621687 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:49:05.625480 systemd[1]: kubelet.service: Consumed 9.798s CPU time, 259.2M memory peak. Jan 20 01:49:06.033805 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 33950 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 01:49:06.046906 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:06.100866 systemd-logind[1624]: New session 4 of user core. Jan 20 01:49:06.127998 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 20 01:49:07.511052 sshd[1790]: Connection closed by 10.0.0.1 port 33950 Jan 20 01:49:07.519686 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:07.635185 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:34044.service - OpenSSH per-connection server daemon (10.0.0.1:34044). Jan 20 01:49:07.636012 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:33950.service: Deactivated successfully. Jan 20 01:49:07.642978 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 01:49:07.651704 systemd[1]: session-4.scope: Consumed 1.236s CPU time, 2.8M memory peak. Jan 20 01:49:07.657819 systemd-logind[1624]: Session 4 logged out. Waiting for processes to exit. Jan 20 01:49:07.660121 systemd-logind[1624]: Removed session 4. Jan 20 01:49:07.903259 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 34044 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 01:49:07.919294 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:07.976099 systemd-logind[1624]: New session 5 of user core. Jan 20 01:49:07.989834 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:49:08.082637 sshd[1799]: Connection closed by 10.0.0.1 port 34044 Jan 20 01:49:08.087654 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:08.118662 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:34044.service: Deactivated successfully. Jan 20 01:49:08.125287 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 01:49:08.130903 systemd-logind[1624]: Session 5 logged out. Waiting for processes to exit. Jan 20 01:49:08.153799 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:34050.service - OpenSSH per-connection server daemon (10.0.0.1:34050). Jan 20 01:49:08.157179 systemd-logind[1624]: Removed session 5. 
Jan 20 01:49:08.354332 sshd[1805]: Accepted publickey for core from 10.0.0.1 port 34050 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 01:49:08.369089 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:08.434493 systemd-logind[1624]: New session 6 of user core. Jan 20 01:49:08.475502 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:49:08.593070 sshd[1808]: Connection closed by 10.0.0.1 port 34050 Jan 20 01:49:08.591620 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:08.634091 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:34050.service: Deactivated successfully. Jan 20 01:49:08.644851 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 01:49:08.656548 systemd-logind[1624]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:49:08.665296 systemd-logind[1624]: Removed session 6. Jan 20 01:49:08.675952 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:34056.service - OpenSSH per-connection server daemon (10.0.0.1:34056). Jan 20 01:49:08.894456 sshd[1814]: Accepted publickey for core from 10.0.0.1 port 34056 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 01:49:08.897749 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:08.953207 systemd-logind[1624]: New session 7 of user core. Jan 20 01:49:08.992716 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 20 01:49:09.126158 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 01:49:09.126742 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:49:09.195847 sudo[1818]: pam_unix(sudo:session): session closed for user root Jan 20 01:49:09.228300 sshd[1817]: Connection closed by 10.0.0.1 port 34056 Jan 20 01:49:09.218700 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:09.271128 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:34056.service: Deactivated successfully. Jan 20 01:49:09.279506 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:49:09.299642 systemd-logind[1624]: Session 7 logged out. Waiting for processes to exit. Jan 20 01:49:09.313193 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:34064.service - OpenSSH per-connection server daemon (10.0.0.1:34064). Jan 20 01:49:09.320944 systemd-logind[1624]: Removed session 7. Jan 20 01:49:09.669296 sshd[1824]: Accepted publickey for core from 10.0.0.1 port 34064 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 01:49:09.673473 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:09.730606 systemd-logind[1624]: New session 8 of user core. Jan 20 01:49:09.758908 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 20 01:49:09.871520 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 01:49:09.871988 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:49:09.919225 sudo[1829]: pam_unix(sudo:session): session closed for user root Jan 20 01:49:09.976209 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 01:49:09.986261 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:49:10.713096 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 01:49:11.272000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 01:49:11.280236 augenrules[1851]: No rules Jan 20 01:49:11.287773 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:49:11.288237 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 20 01:49:11.303495 sudo[1828]: pam_unix(sudo:session): session closed for user root Jan 20 01:49:11.308088 kernel: audit: type=1305 audit(1768873751.272:224): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 01:49:11.308148 kernel: audit: type=1300 audit(1768873751.272:224): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd86d510a0 a2=420 a3=0 items=0 ppid=1832 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:11.272000 audit[1851]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd86d510a0 a2=420 a3=0 items=0 ppid=1832 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:11.322184 sshd[1827]: Connection closed by 10.0.0.1 port 34064 Jan 20 01:49:11.324103 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Jan 20 01:49:11.272000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 01:49:11.394943 kernel: audit: type=1327 audit(1768873751.272:224): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 01:49:11.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:11.486963 kernel: audit: type=1130 audit(1768873751.286:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:49:11.497914 kernel: audit: type=1131 audit(1768873751.286:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:11.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:11.507583 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:34064.service: Deactivated successfully. Jan 20 01:49:11.302000 audit[1828]: USER_END pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:49:11.530038 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 01:49:11.539105 systemd-logind[1624]: Session 8 logged out. Waiting for processes to exit. Jan 20 01:49:11.578590 kernel: audit: type=1106 audit(1768873751.302:227): pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:49:11.302000 audit[1828]: CRED_DISP pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:49:11.581976 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:34082.service - OpenSSH per-connection server daemon (10.0.0.1:34082). Jan 20 01:49:11.588330 systemd-logind[1624]: Removed session 8. 
Jan 20 01:49:11.630605 kernel: audit: type=1104 audit(1768873751.302:228): pid=1828 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:49:11.630803 kernel: audit: type=1106 audit(1768873751.333:229): pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:49:11.333000 audit[1824]: USER_END pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:49:11.684922 kernel: audit: type=1104 audit(1768873751.333:230): pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:49:11.333000 audit[1824]: CRED_DISP pid=1824 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:49:11.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.48:22-10.0.0.1:34064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:49:11.761327 kernel: audit: type=1131 audit(1768873751.509:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.48:22-10.0.0.1:34064 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:11.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.48:22-10.0.0.1:34082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:11.935000 audit[1860]: USER_ACCT pid=1860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:49:11.939209 sshd[1860]: Accepted publickey for core from 10.0.0.1 port 34082 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 01:49:11.965270 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:49:11.956000 audit[1860]: CRED_ACQ pid=1860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:49:11.956000 audit[1860]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcbd0bc370 a2=3 a3=0 items=0 ppid=1 pid=1860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:11.956000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 01:49:12.015036 systemd-logind[1624]: New session 9 of user core. 
Jan 20 01:49:12.032780 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 01:49:12.056000 audit[1860]: USER_START pid=1860 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:49:12.069000 audit[1863]: CRED_ACQ pid=1863 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:49:12.148449 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:49:12.143000 audit[1864]: USER_ACCT pid=1864 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:49:12.143000 audit[1864]: CRED_REFR pid=1864 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:49:12.150527 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:49:12.169000 audit[1864]: USER_START pid=1864 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:49:15.775241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:49:15.793131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 01:49:21.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:21.339557 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 20 01:49:21.339623 kernel: audit: type=1130 audit(1768873761.321:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:21.322746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:49:21.405064 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:49:21.445961 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 01:49:21.493033 (dockerd)[1896]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:49:22.734516 kubelet[1894]: E0120 01:49:22.729648 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:49:22.773852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:49:22.774183 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:49:22.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 01:49:22.798141 systemd[1]: kubelet.service: Consumed 1.655s CPU time, 110.5M memory peak. Jan 20 01:49:22.860565 kernel: audit: type=1131 audit(1768873762.795:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:49:29.234941 dockerd[1896]: time="2026-01-20T01:49:29.234541959Z" level=info msg="Starting up" Jan 20 01:49:29.260549 dockerd[1896]: time="2026-01-20T01:49:29.256307960Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 01:49:29.761970 dockerd[1896]: time="2026-01-20T01:49:29.761182365Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 01:49:30.384100 dockerd[1896]: time="2026-01-20T01:49:30.381511112Z" level=info msg="Loading containers: start." Jan 20 01:49:30.493064 kernel: Initializing XFRM netlink socket Jan 20 01:49:30.995000 audit[1957]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:30.995000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff64571830 a2=0 a3=0 items=0 ppid=1896 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.094482 kernel: audit: type=1325 audit(1768873770.995:243): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.094612 kernel: audit: type=1300 audit(1768873770.995:243): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff64571830 a2=0 a3=0 items=0 ppid=1896 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.094665 kernel: audit: type=1327 audit(1768873770.995:243): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 01:49:30.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 01:49:31.124130 kernel: audit: type=1325 audit(1768873771.027:244): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.027000 audit[1959]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.027000 audit[1959]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc48c41f00 a2=0 a3=0 items=0 ppid=1896 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.175504 kernel: audit: type=1300 audit(1768873771.027:244): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc48c41f00 a2=0 a3=0 items=0 ppid=1896 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.027000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 01:49:31.180768 kernel: audit: type=1327 audit(1768873771.027:244): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 01:49:31.190558 kernel: audit: type=1325 audit(1768873771.050:245): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.050000 audit[1961]: 
NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.205105 kernel: audit: type=1300 audit(1768873771.050:245): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4465d9c0 a2=0 a3=0 items=0 ppid=1896 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.050000 audit[1961]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4465d9c0 a2=0 a3=0 items=0 ppid=1896 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.050000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 01:49:31.293199 kernel: audit: type=1327 audit(1768873771.050:245): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 01:49:31.293402 kernel: audit: type=1325 audit(1768873771.062:246): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.062000 audit[1963]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.062000 audit[1963]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4882d2d0 a2=0 a3=0 items=0 ppid=1896 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.062000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 01:49:31.088000 audit[1965]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.088000 audit[1965]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffecd8ba1e0 a2=0 a3=0 items=0 ppid=1896 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.088000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 01:49:31.109000 audit[1967]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.109000 audit[1967]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc2cba4980 a2=0 a3=0 items=0 ppid=1896 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:49:31.131000 audit[1969]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.131000 audit[1969]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc867770a0 a2=0 a3=0 items=0 ppid=1896 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.131000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 01:49:31.151000 audit[1971]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.151000 audit[1971]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe50cb1040 a2=0 a3=0 items=0 ppid=1896 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 01:49:31.442000 audit[1974]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.442000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdecaffda0 a2=0 a3=0 items=0 ppid=1896 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 01:49:31.474000 audit[1976]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.474000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffe69e0440 a2=0 a3=0 items=0 ppid=1896 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.474000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 01:49:31.498000 audit[1978]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.498000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff6ae19590 a2=0 a3=0 items=0 ppid=1896 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.498000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 01:49:31.544000 audit[1980]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.544000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffa4f4aa10 a2=0 a3=0 items=0 ppid=1896 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.544000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:49:31.561000 audit[1982]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:31.561000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcb0042a70 a2=0 a3=0 items=0 ppid=1896 pid=1982 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.561000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 01:49:31.879000 audit[2012]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:31.879000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffce147f360 a2=0 a3=0 items=0 ppid=1896 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 01:49:31.911000 audit[2014]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:31.911000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc5f745d40 a2=0 a3=0 items=0 ppid=1896 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.911000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 01:49:31.936000 audit[2016]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:31.936000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc662f49d0 a2=0 a3=0 items=0 ppid=1896 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.936000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 01:49:31.955000 audit[2018]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:31.955000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb75406c0 a2=0 a3=0 items=0 ppid=1896 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.955000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 01:49:31.965000 audit[2020]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:31.965000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff50478ba0 a2=0 a3=0 items=0 ppid=1896 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 01:49:31.977000 audit[2022]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:31.977000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe1951dd40 a2=0 a3=0 items=0 ppid=1896 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:31.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:49:32.013000 audit[2024]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:32.013000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe6a7f17a0 a2=0 a3=0 items=0 ppid=1896 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.013000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 01:49:32.036000 audit[2026]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:32.036000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc8f8b0370 a2=0 a3=0 items=0 ppid=1896 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 01:49:32.058000 audit[2028]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:32.058000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 
a1=7ffc71fa8bc0 a2=0 a3=0 items=0 ppid=1896 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.058000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 01:49:32.062000 audit[2030]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:32.062000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd12b319b0 a2=0 a3=0 items=0 ppid=1896 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.062000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 01:49:32.075000 audit[2032]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:32.075000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffdf2f0ced0 a2=0 a3=0 items=0 ppid=1896 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.075000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 01:49:32.090000 audit[2034]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 20 01:49:32.090000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffa09c7510 a2=0 a3=0 items=0 ppid=1896 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.090000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 01:49:32.102000 audit[2036]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:32.102000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff2a98f2a0 a2=0 a3=0 items=0 ppid=1896 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 01:49:32.127000 audit[2041]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.127000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8d4dbaa0 a2=0 a3=0 items=0 ppid=1896 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.127000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 01:49:32.170000 audit[2043]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2043 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.170000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe274a27a0 a2=0 a3=0 items=0 ppid=1896 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 01:49:32.182000 audit[2045]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.182000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe2199ad00 a2=0 a3=0 items=0 ppid=1896 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 01:49:32.199000 audit[2047]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:32.199000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe329c7620 a2=0 a3=0 items=0 ppid=1896 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.199000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 01:49:32.214000 audit[2049]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2049 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:32.214000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe997bd190 a2=0 a3=0 items=0 ppid=1896 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.214000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 01:49:32.240000 audit[2051]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:49:32.240000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffa114ed40 a2=0 a3=0 items=0 ppid=1896 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 01:49:32.378000 audit[2056]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.378000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffdab4e2910 a2=0 a3=0 items=0 ppid=1896 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 01:49:32.390000 audit[2058]: NETFILTER_CFG 
table=nat:35 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.390000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffefd4b5e90 a2=0 a3=0 items=0 ppid=1896 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.390000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 01:49:32.442000 audit[2066]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.442000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffec5018300 a2=0 a3=0 items=0 ppid=1896 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 01:49:32.517000 audit[2072]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.517000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe0c802520 a2=0 a3=0 items=0 ppid=1896 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.517000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 01:49:32.540000 audit[2074]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.540000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffca27c41f0 a2=0 a3=0 items=0 ppid=1896 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.540000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 01:49:32.555000 audit[2076]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.555000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe1eaefa20 a2=0 a3=0 items=0 ppid=1896 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.555000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 01:49:32.575000 audit[2078]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.575000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcb0252460 a2=0 a3=0 items=0 ppid=1896 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.575000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 01:49:32.590000 audit[2080]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:49:32.590000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffea41092c0 a2=0 a3=0 items=0 ppid=1896 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:49:32.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 01:49:32.598176 systemd-networkd[1543]: docker0: Link UP Jan 20 01:49:32.624103 dockerd[1896]: time="2026-01-20T01:49:32.623303118Z" level=info msg="Loading containers: done." 
Jan 20 01:49:32.743030 dockerd[1896]: time="2026-01-20T01:49:32.741591507Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:49:32.743030 dockerd[1896]: time="2026-01-20T01:49:32.741826380Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 01:49:32.743030 dockerd[1896]: time="2026-01-20T01:49:32.742160464Z" level=info msg="Initializing buildkit" Jan 20 01:49:32.997113 dockerd[1896]: time="2026-01-20T01:49:32.995266554Z" level=info msg="Completed buildkit initialization" Jan 20 01:49:33.015865 dockerd[1896]: time="2026-01-20T01:49:33.014100224Z" level=info msg="Daemon has completed initialization" Jan 20 01:49:33.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:33.029619 dockerd[1896]: time="2026-01-20T01:49:33.016087708Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:49:33.015404 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 01:49:33.032069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:49:33.053918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:49:33.600445 update_engine[1626]: I20260120 01:49:33.600051 1626 update_attempter.cc:509] Updating boot flags... Jan 20 01:49:35.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:35.563233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:49:35.615963 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:49:37.081468 kubelet[2142]: E0120 01:49:37.080859 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:49:37.089827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:49:37.090141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:49:37.093869 systemd[1]: kubelet.service: Consumed 1.459s CPU time, 109.8M memory peak. Jan 20 01:49:37.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:49:37.115085 kernel: kauditd_printk_skb: 112 callbacks suppressed Jan 20 01:49:37.115892 kernel: audit: type=1131 audit(1768873777.093:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:49:41.969057 containerd[1641]: time="2026-01-20T01:49:41.966868286Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 20 01:49:46.762861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914757653.mount: Deactivated successfully. Jan 20 01:49:47.286458 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 01:49:47.368976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:49:51.224529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:49:51.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:51.271791 kernel: audit: type=1130 audit(1768873791.221:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:49:51.283996 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:49:52.338908 kubelet[2193]: E0120 01:49:52.338551 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:49:52.371429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:49:52.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:49:52.374753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:49:52.376522 systemd[1]: kubelet.service: Consumed 1.694s CPU time, 109.3M memory peak. Jan 20 01:49:52.405812 kernel: audit: type=1131 audit(1768873792.374:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:50:02.536142 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 20 01:50:02.571322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:50:06.647304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:50:06.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:50:06.682406 kernel: audit: type=1130 audit(1768873806.644:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:50:06.696486 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:50:07.247052 containerd[1641]: time="2026-01-20T01:50:07.239255399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:07.340884 containerd[1641]: time="2026-01-20T01:50:07.265548837Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27056784" Jan 20 01:50:07.625190 containerd[1641]: time="2026-01-20T01:50:07.493985101Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:08.078697 containerd[1641]: time="2026-01-20T01:50:08.071225157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:08.095564 containerd[1641]: time="2026-01-20T01:50:08.092182381Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id 
\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 26.125118051s" Jan 20 01:50:08.095564 containerd[1641]: time="2026-01-20T01:50:08.092276569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 20 01:50:08.096412 kubelet[2238]: E0120 01:50:08.096186 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:50:08.124730 containerd[1641]: time="2026-01-20T01:50:08.123253833Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 20 01:50:08.128244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:50:08.129186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:50:08.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:50:08.170147 systemd[1]: kubelet.service: Consumed 2.878s CPU time, 109M memory peak. Jan 20 01:50:08.262197 kernel: audit: type=1131 audit(1768873808.169:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:50:18.808790 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Jan 20 01:50:19.110241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:50:23.090741 containerd[1641]: time="2026-01-20T01:50:23.090638854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:23.099624 containerd[1641]: time="2026-01-20T01:50:23.095455701Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21154285" Jan 20 01:50:23.103805 containerd[1641]: time="2026-01-20T01:50:23.101974796Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:23.115234 containerd[1641]: time="2026-01-20T01:50:23.112395245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:23.135791 containerd[1641]: time="2026-01-20T01:50:23.128800204Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 15.005491588s" Jan 20 01:50:23.135791 containerd[1641]: time="2026-01-20T01:50:23.133808324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 20 01:50:23.173150 containerd[1641]: time="2026-01-20T01:50:23.171771093Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 20 01:50:23.343224 
systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:50:23.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:50:23.501506 kernel: audit: type=1130 audit(1768873823.452:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:50:23.841879 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:50:24.132022 kubelet[2260]: E0120 01:50:24.131721 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:50:24.153593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:50:24.157547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:50:24.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:50:24.163861 systemd[1]: kubelet.service: Consumed 2.687s CPU time, 111.1M memory peak. Jan 20 01:50:24.191779 kernel: audit: type=1131 audit(1768873824.163:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 01:50:32.454523 containerd[1641]: time="2026-01-20T01:50:32.453498048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:32.477168 containerd[1641]: time="2026-01-20T01:50:32.476905526Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15717792" Jan 20 01:50:32.490153 containerd[1641]: time="2026-01-20T01:50:32.486113990Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:32.604578 containerd[1641]: time="2026-01-20T01:50:32.601326493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:33.115286 containerd[1641]: time="2026-01-20T01:50:33.096684302Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 9.924845563s" Jan 20 01:50:33.115286 containerd[1641]: time="2026-01-20T01:50:33.105582707Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 20 01:50:33.226855 containerd[1641]: time="2026-01-20T01:50:33.224173844Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 20 01:50:34.287923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Jan 20 01:50:34.311760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:50:36.498565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:50:36.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:50:36.533173 kernel: audit: type=1130 audit(1768873836.496:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:50:36.550414 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:50:38.437460 kubelet[2284]: E0120 01:50:38.437166 2284 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:50:38.456990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:50:38.457930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:50:38.459854 systemd[1]: kubelet.service: Consumed 2.624s CPU time, 110.9M memory peak. Jan 20 01:50:38.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:50:38.490625 kernel: audit: type=1131 audit(1768873838.458:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 20 01:50:40.101458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount50893959.mount: Deactivated successfully. Jan 20 01:50:43.889989 containerd[1641]: time="2026-01-20T01:50:43.887990207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:43.896868 containerd[1641]: time="2026-01-20T01:50:43.896626637Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25962213" Jan 20 01:50:43.901019 containerd[1641]: time="2026-01-20T01:50:43.899436348Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:43.915048 containerd[1641]: time="2026-01-20T01:50:43.909751941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:43.915048 containerd[1641]: time="2026-01-20T01:50:43.911549885Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 10.68727296s" Jan 20 01:50:43.915048 containerd[1641]: time="2026-01-20T01:50:43.914460975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 20 01:50:43.932700 containerd[1641]: time="2026-01-20T01:50:43.932058270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 20 01:50:45.677478 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1966641979.mount: Deactivated successfully. Jan 20 01:50:48.768877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 20 01:50:48.808309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:50:52.712522 kernel: audit: type=1130 audit(1768873852.697:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:50:52.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:50:52.698464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:50:52.827065 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:50:54.697417 kubelet[2357]: E0120 01:50:54.694615 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:50:54.719332 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:50:54.719656 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:50:54.725899 systemd[1]: kubelet.service: Consumed 2.115s CPU time, 110.7M memory peak. Jan 20 01:50:54.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 01:50:54.783706 kernel: audit: type=1131 audit(1768873854.724:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:50:58.592210 containerd[1641]: time="2026-01-20T01:50:58.590416433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:58.605048 containerd[1641]: time="2026-01-20T01:50:58.604990138Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22379908" Jan 20 01:50:58.611647 containerd[1641]: time="2026-01-20T01:50:58.608980596Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:58.636843 containerd[1641]: time="2026-01-20T01:50:58.635222867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:58.652900 containerd[1641]: time="2026-01-20T01:50:58.640884019Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 14.708751701s" Jan 20 01:50:58.652900 containerd[1641]: time="2026-01-20T01:50:58.640930977Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 20 01:50:58.657419 containerd[1641]: 
time="2026-01-20T01:50:58.657300130Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 20 01:50:59.599791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317348712.mount: Deactivated successfully. Jan 20 01:50:59.640449 containerd[1641]: time="2026-01-20T01:50:59.639184234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:59.653942 containerd[1641]: time="2026-01-20T01:50:59.653734198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 20 01:50:59.665522 containerd[1641]: time="2026-01-20T01:50:59.664008238Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:59.685268 containerd[1641]: time="2026-01-20T01:50:59.677083622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:50:59.685268 containerd[1641]: time="2026-01-20T01:50:59.678422278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.020993106s" Jan 20 01:50:59.685268 containerd[1641]: time="2026-01-20T01:50:59.685130223Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 20 01:50:59.686583 containerd[1641]: time="2026-01-20T01:50:59.686289704Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 20 01:51:01.163433 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount763846841.mount: Deactivated successfully. Jan 20 01:51:05.021398 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 20 01:51:05.130738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:51:08.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:51:08.959148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:51:08.978242 kernel: audit: type=1130 audit(1768873868.958:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:51:09.030044 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:51:09.242054 kubelet[2430]: E0120 01:51:09.241783 2430 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:51:09.252850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:51:09.259305 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:51:09.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:51:09.263857 systemd[1]: kubelet.service: Consumed 1.837s CPU time, 112.8M memory peak. 
Jan 20 01:51:09.285399 kernel: audit: type=1131 audit(1768873869.258:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:51:19.835763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 20 01:51:20.785566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:51:23.828600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:51:23.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:51:23.907490 kernel: audit: type=1130 audit(1768873883.829:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:51:23.914308 (kubelet)[2446]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:51:25.176423 kubelet[2446]: E0120 01:51:25.171152 2446 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:51:25.208007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:51:25.208324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:51:25.213298 systemd[1]: kubelet.service: Consumed 1.095s CPU time, 110.6M memory peak. 
Jan 20 01:51:25.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:51:25.267962 kernel: audit: type=1131 audit(1768873885.212:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:51:35.319261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 20 01:51:35.394326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:51:39.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:51:39.334788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:51:39.430050 kernel: audit: type=1130 audit(1768873899.331:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:51:39.576544 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:51:40.479393 kubelet[2464]: E0120 01:51:40.475477 2464 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:51:40.514420 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:51:40.526389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:51:40.693266 kernel: audit: type=1131 audit(1768873900.602:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:51:40.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:51:40.603622 systemd[1]: kubelet.service: Consumed 2.139s CPU time, 110.4M memory peak. 
Jan 20 01:51:46.748079 containerd[1641]: time="2026-01-20T01:51:46.742324721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:51:46.964864 containerd[1641]: time="2026-01-20T01:51:46.892242281Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74159920" Jan 20 01:51:47.068935 containerd[1641]: time="2026-01-20T01:51:47.022290933Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:51:47.818182 containerd[1641]: time="2026-01-20T01:51:47.815082901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:51:47.896587 containerd[1641]: time="2026-01-20T01:51:47.892545036Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 48.205843569s" Jan 20 01:51:47.896587 containerd[1641]: time="2026-01-20T01:51:47.895523183Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 20 01:51:50.547883 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 20 01:51:51.538263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:51:54.217412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:51:54.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:51:54.277319 kernel: audit: type=1130 audit(1768873914.212:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:51:54.317748 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:51:55.071950 kubelet[2505]: E0120 01:51:55.071476 2505 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:51:55.099654 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:51:55.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:51:55.101604 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:51:55.102314 systemd[1]: kubelet.service: Consumed 959ms CPU time, 109.8M memory peak. Jan 20 01:51:55.128159 kernel: audit: type=1131 audit(1768873915.099:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:52:05.294452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. 
Jan 20 01:52:05.303753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:52:06.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:06.963252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:52:07.037158 kernel: audit: type=1130 audit(1768873926.956:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:07.091165 (kubelet)[2526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:52:09.033398 kubelet[2526]: E0120 01:52:09.028006 2526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:52:09.088991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:52:09.089835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:52:09.091436 systemd[1]: kubelet.service: Consumed 1.353s CPU time, 110M memory peak. Jan 20 01:52:09.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:52:09.171395 kernel: audit: type=1131 audit(1768873929.083:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Jan 20 01:52:19.281510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 20 01:52:19.307048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:52:20.818645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:52:20.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:20.868548 kernel: audit: type=1130 audit(1768873940.818:306): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:20.867930 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:52:21.296380 kubelet[2542]: E0120 01:52:21.293655 2542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:52:21.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:52:21.320246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:52:21.320626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:52:21.321703 systemd[1]: kubelet.service: Consumed 490ms CPU time, 109.9M memory peak. 
Jan 20 01:52:21.358661 kernel: audit: type=1131 audit(1768873941.317:307): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:52:21.673568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:52:21.678402 systemd[1]: kubelet.service: Consumed 490ms CPU time, 109.9M memory peak. Jan 20 01:52:21.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:21.691332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:52:21.729539 kernel: audit: type=1130 audit(1768873941.677:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:21.729764 kernel: audit: type=1131 audit(1768873941.677:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:21.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:21.880815 systemd[1]: Reload requested from client PID 2558 ('systemctl') (unit session-9.scope)... Jan 20 01:52:21.893401 systemd[1]: Reloading... Jan 20 01:52:22.698401 zram_generator::config[2604]: No configuration found. Jan 20 01:52:25.221434 systemd[1]: Reloading finished in 3327 ms. 
Jan 20 01:52:25.312588 kernel: audit: type=1334 audit(1768873945.302:310): prog-id=63 op=LOAD Jan 20 01:52:25.312726 kernel: audit: type=1334 audit(1768873945.302:311): prog-id=44 op=UNLOAD Jan 20 01:52:25.302000 audit: BPF prog-id=63 op=LOAD Jan 20 01:52:25.302000 audit: BPF prog-id=44 op=UNLOAD Jan 20 01:52:25.302000 audit: BPF prog-id=64 op=LOAD Jan 20 01:52:25.302000 audit: BPF prog-id=65 op=LOAD Jan 20 01:52:25.350830 kernel: audit: type=1334 audit(1768873945.302:312): prog-id=64 op=LOAD Jan 20 01:52:25.351299 kernel: audit: type=1334 audit(1768873945.302:313): prog-id=65 op=LOAD Jan 20 01:52:25.369455 kernel: audit: type=1334 audit(1768873945.302:314): prog-id=45 op=UNLOAD Jan 20 01:52:25.369577 kernel: audit: type=1334 audit(1768873945.302:315): prog-id=46 op=UNLOAD Jan 20 01:52:25.302000 audit: BPF prog-id=45 op=UNLOAD Jan 20 01:52:25.302000 audit: BPF prog-id=46 op=UNLOAD Jan 20 01:52:25.305000 audit: BPF prog-id=66 op=LOAD Jan 20 01:52:25.305000 audit: BPF prog-id=55 op=UNLOAD Jan 20 01:52:25.305000 audit: BPF prog-id=67 op=LOAD Jan 20 01:52:25.305000 audit: BPF prog-id=68 op=LOAD Jan 20 01:52:25.305000 audit: BPF prog-id=56 op=UNLOAD Jan 20 01:52:25.305000 audit: BPF prog-id=57 op=UNLOAD Jan 20 01:52:25.308000 audit: BPF prog-id=69 op=LOAD Jan 20 01:52:25.308000 audit: BPF prog-id=58 op=UNLOAD Jan 20 01:52:25.308000 audit: BPF prog-id=70 op=LOAD Jan 20 01:52:25.308000 audit: BPF prog-id=71 op=LOAD Jan 20 01:52:25.309000 audit: BPF prog-id=50 op=UNLOAD Jan 20 01:52:25.309000 audit: BPF prog-id=51 op=UNLOAD Jan 20 01:52:25.314000 audit: BPF prog-id=72 op=LOAD Jan 20 01:52:25.314000 audit: BPF prog-id=60 op=UNLOAD Jan 20 01:52:25.314000 audit: BPF prog-id=73 op=LOAD Jan 20 01:52:25.314000 audit: BPF prog-id=74 op=LOAD Jan 20 01:52:25.314000 audit: BPF prog-id=61 op=UNLOAD Jan 20 01:52:25.314000 audit: BPF prog-id=62 op=UNLOAD Jan 20 01:52:25.328000 audit: BPF prog-id=75 op=LOAD Jan 20 01:52:25.328000 audit: BPF prog-id=47 op=UNLOAD Jan 20 01:52:25.328000 
audit: BPF prog-id=76 op=LOAD Jan 20 01:52:25.333000 audit: BPF prog-id=77 op=LOAD Jan 20 01:52:25.333000 audit: BPF prog-id=48 op=UNLOAD Jan 20 01:52:25.333000 audit: BPF prog-id=49 op=UNLOAD Jan 20 01:52:25.334000 audit: BPF prog-id=78 op=LOAD Jan 20 01:52:25.334000 audit: BPF prog-id=59 op=UNLOAD Jan 20 01:52:25.341000 audit: BPF prog-id=79 op=LOAD Jan 20 01:52:25.341000 audit: BPF prog-id=52 op=UNLOAD Jan 20 01:52:25.341000 audit: BPF prog-id=80 op=LOAD Jan 20 01:52:25.341000 audit: BPF prog-id=81 op=LOAD Jan 20 01:52:25.341000 audit: BPF prog-id=53 op=UNLOAD Jan 20 01:52:25.341000 audit: BPF prog-id=54 op=UNLOAD Jan 20 01:52:25.357000 audit: BPF prog-id=82 op=LOAD Jan 20 01:52:25.357000 audit: BPF prog-id=43 op=UNLOAD Jan 20 01:52:25.503838 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 01:52:25.504089 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 01:52:25.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 01:52:25.505284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:52:25.505462 systemd[1]: kubelet.service: Consumed 470ms CPU time, 98.5M memory peak. Jan 20 01:52:25.521740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:52:27.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:27.075419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:52:27.095675 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 20 01:52:27.095809 kernel: audit: type=1130 audit(1768873947.073:351): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:52:27.166753 (kubelet)[2652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:52:27.521318 kubelet[2652]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:52:27.521318 kubelet[2652]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:52:27.521318 kubelet[2652]: I0120 01:52:27.519535 2652 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:52:29.790822 kubelet[2652]: I0120 01:52:29.788845 2652 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 01:52:29.790822 kubelet[2652]: I0120 01:52:29.789285 2652 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:52:29.797731 kubelet[2652]: I0120 01:52:29.793228 2652 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 01:52:29.797731 kubelet[2652]: I0120 01:52:29.793262 2652 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 01:52:29.797731 kubelet[2652]: I0120 01:52:29.794205 2652 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:52:30.147867 kubelet[2652]: I0120 01:52:30.137648 2652 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:52:30.162467 kubelet[2652]: E0120 01:52:30.157112 2652 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:52:30.200405 kubelet[2652]: I0120 01:52:30.197777 2652 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:52:30.237073 kubelet[2652]: I0120 01:52:30.231472 2652 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 01:52:30.241107 kubelet[2652]: I0120 01:52:30.239191 2652 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:52:30.241107 kubelet[2652]: I0120 01:52:30.239510 2652 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:52:30.241107 kubelet[2652]: I0120 01:52:30.240013 2652 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:52:30.241107 
kubelet[2652]: I0120 01:52:30.240033 2652 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 01:52:30.241825 kubelet[2652]: I0120 01:52:30.240574 2652 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 01:52:30.258622 kubelet[2652]: I0120 01:52:30.257300 2652 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:52:30.263374 kubelet[2652]: I0120 01:52:30.261566 2652 kubelet.go:475] "Attempting to sync node with API server" Jan 20 01:52:30.263374 kubelet[2652]: I0120 01:52:30.262078 2652 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:52:30.264712 kubelet[2652]: I0120 01:52:30.264212 2652 kubelet.go:387] "Adding apiserver pod source" Jan 20 01:52:30.265845 kubelet[2652]: I0120 01:52:30.265559 2652 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:52:30.271891 kubelet[2652]: E0120 01:52:30.271197 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:52:30.275485 kubelet[2652]: E0120 01:52:30.272788 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:52:30.303112 kubelet[2652]: I0120 01:52:30.300224 2652 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 01:52:30.303112 kubelet[2652]: I0120 01:52:30.302084 2652 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:52:30.303112 kubelet[2652]: I0120 01:52:30.302133 2652 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 01:52:30.303112 kubelet[2652]: W0120 01:52:30.302243 2652 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 01:52:30.327035 kubelet[2652]: I0120 01:52:30.323830 2652 server.go:1262] "Started kubelet" Jan 20 01:52:30.327035 kubelet[2652]: I0120 01:52:30.326033 2652 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:52:30.376833 kubelet[2652]: I0120 01:52:30.371156 2652 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 01:52:30.376833 kubelet[2652]: E0120 01:52:30.371697 2652 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:52:30.376833 kubelet[2652]: I0120 01:52:30.372143 2652 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 01:52:30.376833 kubelet[2652]: I0120 01:52:30.372606 2652 reconciler.go:29] "Reconciler: start to sync state" Jan 20 01:52:30.376833 kubelet[2652]: E0120 01:52:30.373207 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:52:30.376833 kubelet[2652]: I0120 01:52:30.375799 2652 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:52:30.388533 kubelet[2652]: I0120 01:52:30.383700 2652 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:52:30.388533 kubelet[2652]: I0120 01:52:30.383774 2652 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 20 01:52:30.388533 kubelet[2652]: I0120 01:52:30.384227 2652 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:52:30.390658 kubelet[2652]: I0120 01:52:30.384417 2652 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:52:30.390658 kubelet[2652]: E0120 01:52:30.373308 2652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Jan 20 01:52:30.405968 kubelet[2652]: I0120 01:52:30.400555 2652 server.go:310] "Adding debug handlers to kubelet server" Jan 20 01:52:30.414872 kubelet[2652]: I0120 01:52:30.411757 2652 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:52:30.414872 kubelet[2652]: I0120 01:52:30.411908 2652 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:52:30.460000 audit[2669]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.478858 kubelet[2652]: E0120 01:52:30.412881 2652 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188c4d7c657da040 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:52:30.3237776 +0000 UTC m=+3.123748774,LastTimestamp:2026-01-20 01:52:30.3237776 +0000 UTC m=+3.123748774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:52:30.478858 kubelet[2652]: E0120 01:52:30.462893 2652 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:52:30.478858 kubelet[2652]: E0120 01:52:30.474996 2652 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:52:30.499406 kernel: audit: type=1325 audit(1768873950.460:352): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.504829 kernel: audit: type=1300 audit(1768873950.460:352): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd81299cc0 a2=0 a3=0 items=0 ppid=2652 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.460000 audit[2669]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd81299cc0 a2=0 a3=0 items=0 ppid=2652 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.505168 kubelet[2652]: I0120 01:52:30.501100 2652 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:52:30.460000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 01:52:30.466000 audit[2670]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.587564 kubelet[2652]: E0120 01:52:30.587517 2652 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:52:30.628610 kernel: audit: type=1327 audit(1768873950.460:352): proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 01:52:30.633756 kernel: audit: type=1325 audit(1768873950.466:353): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.466000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceb236050 a2=0 a3=0 items=0 ppid=2652 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.680228 kubelet[2652]: E0120 01:52:30.669423 2652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Jan 20 01:52:30.697801 kubelet[2652]: E0120 01:52:30.688182 2652 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:52:30.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 01:52:30.754433 kernel: audit: type=1300 audit(1768873950.466:353): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceb236050 a2=0 a3=0 items=0 ppid=2652 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.754625 kernel: audit: type=1327 audit(1768873950.466:353): proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 01:52:30.754675 kernel: audit: type=1325 audit(1768873950.489:354): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.489000 audit[2672]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.762512 kubelet[2652]: I0120 01:52:30.762194 2652 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:52:30.762512 kubelet[2652]: I0120 01:52:30.762220 2652 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:52:30.762512 kubelet[2652]: I0120 01:52:30.762245 2652 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:52:30.778877 kubelet[2652]: I0120 01:52:30.777529 2652 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 20 01:52:30.789202 kernel: audit: type=1300 audit(1768873950.489:354): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe35a62180 a2=0 a3=0 items=0 ppid=2652 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.489000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe35a62180 a2=0 a3=0 items=0 ppid=2652 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.790595 kubelet[2652]: E0120 01:52:30.788316 2652 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:52:30.790595 kubelet[2652]: I0120 01:52:30.788430 2652 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 20 01:52:30.790595 kubelet[2652]: I0120 01:52:30.788479 2652 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 01:52:30.790595 kubelet[2652]: I0120 01:52:30.788647 2652 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 01:52:30.790595 kubelet[2652]: E0120 01:52:30.788718 2652 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:52:30.792542 kubelet[2652]: E0120 01:52:30.791394 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:52:30.809080 kubelet[2652]: I0120 01:52:30.807275 2652 policy_none.go:49] "None policy: Start" Jan 20 01:52:30.809080 kubelet[2652]: I0120 01:52:30.807307 2652 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 01:52:30.809080 kubelet[2652]: I0120 01:52:30.807325 2652 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 20 01:52:30.829516 kubelet[2652]: I0120 01:52:30.828186 2652 policy_none.go:47] "Start" Jan 20 01:52:30.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:52:30.867027 kernel: audit: type=1327 audit(1768873950.489:354): proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:52:30.532000 audit[2676]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.532000 audit[2676]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff22e48830 a2=0 a3=0 
items=0 ppid=2652 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.532000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:52:30.773000 audit[2683]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.773000 audit[2683]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffca6fce2b0 a2=0 a3=0 items=0 ppid=2652 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 20 01:52:30.782000 audit[2684]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:52:30.782000 audit[2684]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe52330180 a2=0 a3=0 items=0 ppid=2652 pid=2684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 01:52:30.795000 audit[2685]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2685 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 20 01:52:30.795000 audit[2685]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3b2972f0 a2=0 a3=0 items=0 ppid=2652 pid=2685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 01:52:30.795000 audit[2686]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:52:30.795000 audit[2686]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb8f39ab0 a2=0 a3=0 items=0 ppid=2652 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 01:52:30.805000 audit[2688]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.805000 audit[2688]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdee964240 a2=0 a3=0 items=0 ppid=2652 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.805000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 01:52:30.810000 audit[2689]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:52:30.810000 audit[2689]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffede9932e0 a2=0 a3=0 items=0 ppid=2652 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 01:52:30.810000 audit[2690]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:52:30.810000 audit[2690]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc3aecf00 a2=0 a3=0 items=0 ppid=2652 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 01:52:30.822000 audit[2691]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2691 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:52:30.822000 audit[2691]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe22d2a770 a2=0 a3=0 items=0 ppid=2652 pid=2691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:30.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 01:52:30.882901 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 20 01:52:30.895023 kubelet[2652]: E0120 01:52:30.892130 2652 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:52:30.897273 kubelet[2652]: E0120 01:52:30.894508 2652 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:52:30.999881 kubelet[2652]: E0120 01:52:30.999158 2652 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:52:31.025743 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:52:31.040482 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 01:52:31.075054 kubelet[2652]: E0120 01:52:31.071254 2652 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:52:31.076626 kubelet[2652]: I0120 01:52:31.076217 2652 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:52:31.076626 kubelet[2652]: I0120 01:52:31.076266 2652 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:52:31.078825 kubelet[2652]: E0120 01:52:31.077907 2652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Jan 20 01:52:31.078825 kubelet[2652]: I0120 01:52:31.078680 2652 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:52:31.108277 kubelet[2652]: E0120 01:52:31.106573 2652 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 20 01:52:31.153533 kubelet[2652]: E0120 01:52:31.147131 2652 reflector.go:205] "Failed to watch" err="failed to 
list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:52:31.180759 kubelet[2652]: E0120 01:52:31.168327 2652 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 01:52:31.180759 kubelet[2652]: E0120 01:52:31.171393 2652 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:52:31.175596 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 20 01:52:31.201049 kubelet[2652]: E0120 01:52:31.201008 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:31.208400 kubelet[2652]: I0120 01:52:31.208161 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:52:31.208400 kubelet[2652]: I0120 01:52:31.208201 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:52:31.208400 kubelet[2652]: I0120 01:52:31.208244 
2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 01:52:31.208400 kubelet[2652]: I0120 01:52:31.208269 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a43bbb42a15116af39c1485fb7b13793-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a43bbb42a15116af39c1485fb7b13793\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:52:31.208400 kubelet[2652]: I0120 01:52:31.208292 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a43bbb42a15116af39c1485fb7b13793-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a43bbb42a15116af39c1485fb7b13793\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:52:31.208753 kubelet[2652]: I0120 01:52:31.208317 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a43bbb42a15116af39c1485fb7b13793-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a43bbb42a15116af39c1485fb7b13793\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:52:31.210044 kubelet[2652]: I0120 01:52:31.208832 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:52:31.210044 kubelet[2652]: I0120 01:52:31.208888 2652 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:52:31.210044 kubelet[2652]: I0120 01:52:31.208935 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:52:31.225842 kubelet[2652]: I0120 01:52:31.225794 2652 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:52:31.226930 kubelet[2652]: E0120 01:52:31.226682 2652 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 20 01:52:31.227024 systemd[1]: Created slice kubepods-burstable-poda43bbb42a15116af39c1485fb7b13793.slice - libcontainer container kubepods-burstable-poda43bbb42a15116af39c1485fb7b13793.slice. Jan 20 01:52:31.240424 kubelet[2652]: E0120 01:52:31.240060 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:31.488805 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. 
Jan 20 01:52:31.507889 kubelet[2652]: E0120 01:52:31.505933 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:31.526159 kubelet[2652]: I0120 01:52:31.518760 2652 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:52:31.526159 kubelet[2652]: E0120 01:52:31.523821 2652 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 20 01:52:31.541778 kubelet[2652]: E0120 01:52:31.535037 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:31.547050 containerd[1641]: time="2026-01-20T01:52:31.540893033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 20 01:52:31.568406 kubelet[2652]: E0120 01:52:31.565054 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:31.568577 containerd[1641]: time="2026-01-20T01:52:31.566675897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 20 01:52:31.586583 kubelet[2652]: E0120 01:52:31.585743 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:31.588613 containerd[1641]: time="2026-01-20T01:52:31.588403585Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a43bbb42a15116af39c1485fb7b13793,Namespace:kube-system,Attempt:0,}" Jan 20 01:52:31.700955 kubelet[2652]: E0120 01:52:31.700712 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:52:31.769137 kubelet[2652]: E0120 01:52:31.766979 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:52:31.907071 kubelet[2652]: E0120 01:52:31.905151 2652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s" Jan 20 01:52:32.017408 kubelet[2652]: I0120 01:52:32.005836 2652 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:52:32.024102 kubelet[2652]: E0120 01:52:32.022268 2652 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 20 01:52:32.351026 kubelet[2652]: E0120 01:52:32.340692 2652 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:52:32.351026 kubelet[2652]: E0120 01:52:32.349081 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:52:32.834643 kubelet[2652]: E0120 01:52:32.833960 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:52:32.875458 kubelet[2652]: I0120 01:52:32.874952 2652 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:52:32.875458 kubelet[2652]: E0120 01:52:32.879566 2652 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 20 01:52:33.322286 kubelet[2652]: E0120 01:52:33.322009 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:52:33.516989 kubelet[2652]: E0120 01:52:33.516584 2652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="3.2s" Jan 20 01:52:33.623032 
kubelet[2652]: E0120 01:52:33.609484 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 20 01:52:34.176159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2372561855.mount: Deactivated successfully. Jan 20 01:52:34.208458 containerd[1641]: time="2026-01-20T01:52:34.207324030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:52:34.223387 containerd[1641]: time="2026-01-20T01:52:34.222738895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=501" Jan 20 01:52:34.230037 containerd[1641]: time="2026-01-20T01:52:34.228845962Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:52:34.268847 containerd[1641]: time="2026-01-20T01:52:34.268206415Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:52:34.278719 containerd[1641]: time="2026-01-20T01:52:34.278549971Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:52:34.359227 containerd[1641]: time="2026-01-20T01:52:34.343924846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:52:34.372780 containerd[1641]: time="2026-01-20T01:52:34.371097204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.791050629s" Jan 20 01:52:34.387004 containerd[1641]: time="2026-01-20T01:52:34.386560570Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:52:34.392867 containerd[1641]: time="2026-01-20T01:52:34.392293177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 01:52:34.437835 containerd[1641]: time="2026-01-20T01:52:34.434002926Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.841502806s" Jan 20 01:52:34.462203 containerd[1641]: time="2026-01-20T01:52:34.461279511Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.823098996s" Jan 20 01:52:34.490618 kubelet[2652]: I0120 01:52:34.489560 2652 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:52:34.490618 kubelet[2652]: E0120 01:52:34.490090 2652 kubelet_node_status.go:107] "Unable to register 
node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Jan 20 01:52:34.625193 containerd[1641]: time="2026-01-20T01:52:34.621764446Z" level=info msg="connecting to shim 48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f" address="unix:///run/containerd/s/8657815239ac6d140d4580d8c2fab613500819d04b2f01f542d434354516f387" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:52:34.708768 containerd[1641]: time="2026-01-20T01:52:34.707035909Z" level=info msg="connecting to shim 0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac" address="unix:///run/containerd/s/0a3f5563c8ff7307acacc838c5a88e501fbbb018b2bb68d1db9fb5ba52ad8acb" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:52:34.760579 containerd[1641]: time="2026-01-20T01:52:34.758991428Z" level=info msg="connecting to shim 6fff6b6e7c60fc1cd06af5cdc8f12cf17e7a99c4a25b8d97e8607cf5ad37aad2" address="unix:///run/containerd/s/06d6fb9f305469caee1368fa1bc3e4acc99488023d233b837e62b90386453074" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:52:34.872571 systemd[1]: Started cri-containerd-48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f.scope - libcontainer container 48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f. Jan 20 01:52:34.910173 systemd[1]: Started cri-containerd-0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac.scope - libcontainer container 0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac. Jan 20 01:52:35.022804 systemd[1]: Started cri-containerd-6fff6b6e7c60fc1cd06af5cdc8f12cf17e7a99c4a25b8d97e8607cf5ad37aad2.scope - libcontainer container 6fff6b6e7c60fc1cd06af5cdc8f12cf17e7a99c4a25b8d97e8607cf5ad37aad2. 
Jan 20 01:52:35.044710 kubelet[2652]: E0120 01:52:35.044142 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 01:52:35.066496 kernel: kauditd_printk_skb: 27 callbacks suppressed Jan 20 01:52:35.066909 kernel: audit: type=1334 audit(1768873955.042:364): prog-id=83 op=LOAD Jan 20 01:52:35.067024 kernel: audit: type=1334 audit(1768873955.043:365): prog-id=84 op=LOAD Jan 20 01:52:35.042000 audit: BPF prog-id=83 op=LOAD Jan 20 01:52:35.043000 audit: BPF prog-id=84 op=LOAD Jan 20 01:52:35.140295 kernel: audit: type=1300 audit(1768873955.043:365): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.043000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.043000 audit: BPF prog-id=84 op=UNLOAD Jan 20 01:52:35.198183 kernel: audit: type=1327 audit(1768873955.043:365): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.198606 kernel: audit: type=1334 audit(1768873955.043:366): prog-id=84 op=UNLOAD Jan 20 01:52:35.198643 kernel: audit: type=1300 audit(1768873955.043:366): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.043000 audit[2731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.229416 kernel: audit: type=1327 audit(1768873955.043:366): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.256404 kernel: audit: type=1334 audit(1768873955.043:367): prog-id=85 op=LOAD Jan 20 01:52:35.256519 kernel: audit: type=1300 audit(1768873955.043:367): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.273568 kernel: audit: type=1327 audit(1768873955.043:367): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.043000 audit: BPF prog-id=85 op=LOAD Jan 20 01:52:35.043000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.043000 audit: BPF prog-id=86 op=LOAD Jan 20 01:52:35.043000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.043000 audit: BPF prog-id=86 op=UNLOAD Jan 20 01:52:35.043000 audit[2731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 
ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.043000 audit: BPF prog-id=85 op=UNLOAD Jan 20 01:52:35.043000 audit[2731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.043000 audit: BPF prog-id=87 op=LOAD Jan 20 01:52:35.043000 audit[2731]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2705 pid=2731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438623262373766623832383337636465363536613838363338356162 Jan 20 01:52:35.062000 audit: BPF prog-id=88 op=LOAD Jan 20 01:52:35.062000 audit: BPF prog-id=89 op=LOAD Jan 20 01:52:35.062000 audit[2757]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2725 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063613562343264326437333961386463306662666361386363306337 Jan 20 01:52:35.062000 audit: BPF prog-id=89 op=UNLOAD Jan 20 01:52:35.062000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063613562343264326437333961386463306662666361386363306337 Jan 20 01:52:35.062000 audit: BPF prog-id=90 op=LOAD Jan 20 01:52:35.062000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2725 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063613562343264326437333961386463306662666361386363306337 Jan 20 01:52:35.062000 audit: BPF prog-id=91 
op=LOAD Jan 20 01:52:35.062000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2725 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063613562343264326437333961386463306662666361386363306337 Jan 20 01:52:35.062000 audit: BPF prog-id=91 op=UNLOAD Jan 20 01:52:35.062000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063613562343264326437333961386463306662666361386363306337 Jan 20 01:52:35.062000 audit: BPF prog-id=90 op=UNLOAD Jan 20 01:52:35.062000 audit[2757]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063613562343264326437333961386463306662666361386363306337 Jan 20 
01:52:35.062000 audit: BPF prog-id=92 op=LOAD Jan 20 01:52:35.062000 audit[2757]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2725 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.062000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063613562343264326437333961386463306662666361386363306337 Jan 20 01:52:35.134000 audit: BPF prog-id=93 op=LOAD Jan 20 01:52:35.134000 audit: BPF prog-id=94 op=LOAD Jan 20 01:52:35.134000 audit[2775]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2742 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666666636623665376336306663316364303661663563646338663132 Jan 20 01:52:35.134000 audit: BPF prog-id=94 op=UNLOAD Jan 20 01:52:35.134000 audit[2775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2742 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.134000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666666636623665376336306663316364303661663563646338663132 Jan 20 01:52:35.134000 audit: BPF prog-id=95 op=LOAD Jan 20 01:52:35.134000 audit[2775]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2742 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666666636623665376336306663316364303661663563646338663132 Jan 20 01:52:35.291000 audit: BPF prog-id=96 op=LOAD Jan 20 01:52:35.291000 audit[2775]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2742 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666666636623665376336306663316364303661663563646338663132 Jan 20 01:52:35.291000 audit: BPF prog-id=96 op=UNLOAD Jan 20 01:52:35.291000 audit[2775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2742 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 20 01:52:35.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666666636623665376336306663316364303661663563646338663132 Jan 20 01:52:35.291000 audit: BPF prog-id=95 op=UNLOAD Jan 20 01:52:35.291000 audit[2775]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2742 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666666636623665376336306663316364303661663563646338663132 Jan 20 01:52:35.291000 audit: BPF prog-id=97 op=LOAD Jan 20 01:52:35.291000 audit[2775]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2742 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:35.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666666636623665376336306663316364303661663563646338663132 Jan 20 01:52:35.465449 containerd[1641]: time="2026-01-20T01:52:35.464282849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f\"" Jan 20 
01:52:35.474963 kubelet[2652]: E0120 01:52:35.471240 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:35.506139 containerd[1641]: time="2026-01-20T01:52:35.506080342Z" level=info msg="CreateContainer within sandbox \"48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:52:35.512229 containerd[1641]: time="2026-01-20T01:52:35.507889908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac\"" Jan 20 01:52:35.517654 kubelet[2652]: E0120 01:52:35.515134 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:35.559394 containerd[1641]: time="2026-01-20T01:52:35.557382158Z" level=info msg="CreateContainer within sandbox \"0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:52:35.635848 containerd[1641]: time="2026-01-20T01:52:35.633801367Z" level=info msg="Container 641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:52:35.651444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720719338.mount: Deactivated successfully. 
Jan 20 01:52:35.707285 containerd[1641]: time="2026-01-20T01:52:35.707185660Z" level=info msg="CreateContainer within sandbox \"48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499\"" Jan 20 01:52:35.713874 containerd[1641]: time="2026-01-20T01:52:35.708315304Z" level=info msg="StartContainer for \"641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499\"" Jan 20 01:52:35.715467 containerd[1641]: time="2026-01-20T01:52:35.715427691Z" level=info msg="connecting to shim 641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499" address="unix:///run/containerd/s/8657815239ac6d140d4580d8c2fab613500819d04b2f01f542d434354516f387" protocol=ttrpc version=3 Jan 20 01:52:35.729019 containerd[1641]: time="2026-01-20T01:52:35.728226818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a43bbb42a15116af39c1485fb7b13793,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fff6b6e7c60fc1cd06af5cdc8f12cf17e7a99c4a25b8d97e8607cf5ad37aad2\"" Jan 20 01:52:35.737971 kubelet[2652]: E0120 01:52:35.737242 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:35.742405 containerd[1641]: time="2026-01-20T01:52:35.742059093Z" level=info msg="Container 24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:52:35.763638 containerd[1641]: time="2026-01-20T01:52:35.763446403Z" level=info msg="CreateContainer within sandbox \"6fff6b6e7c60fc1cd06af5cdc8f12cf17e7a99c4a25b8d97e8607cf5ad37aad2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:52:35.780943 containerd[1641]: time="2026-01-20T01:52:35.780889731Z" level=info msg="CreateContainer within sandbox 
\"0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0\"" Jan 20 01:52:35.791710 containerd[1641]: time="2026-01-20T01:52:35.791609848Z" level=info msg="StartContainer for \"24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0\"" Jan 20 01:52:35.799805 containerd[1641]: time="2026-01-20T01:52:35.799678615Z" level=info msg="connecting to shim 24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0" address="unix:///run/containerd/s/0a3f5563c8ff7307acacc838c5a88e501fbbb018b2bb68d1db9fb5ba52ad8acb" protocol=ttrpc version=3 Jan 20 01:52:35.875838 systemd[1]: Started cri-containerd-641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499.scope - libcontainer container 641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499. Jan 20 01:52:35.894389 containerd[1641]: time="2026-01-20T01:52:35.891672860Z" level=info msg="Container 60ab6fc963f09feb83ca7ae3e56f5951a864ea2a2b9354cabddf47509ceca15f: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:52:35.936845 systemd[1]: Started cri-containerd-24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0.scope - libcontainer container 24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0. 
Jan 20 01:52:35.968661 containerd[1641]: time="2026-01-20T01:52:35.967241209Z" level=info msg="CreateContainer within sandbox \"6fff6b6e7c60fc1cd06af5cdc8f12cf17e7a99c4a25b8d97e8607cf5ad37aad2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"60ab6fc963f09feb83ca7ae3e56f5951a864ea2a2b9354cabddf47509ceca15f\"" Jan 20 01:52:35.985510 containerd[1641]: time="2026-01-20T01:52:35.982147504Z" level=info msg="StartContainer for \"60ab6fc963f09feb83ca7ae3e56f5951a864ea2a2b9354cabddf47509ceca15f\"" Jan 20 01:52:35.997129 containerd[1641]: time="2026-01-20T01:52:35.996292601Z" level=info msg="connecting to shim 60ab6fc963f09feb83ca7ae3e56f5951a864ea2a2b9354cabddf47509ceca15f" address="unix:///run/containerd/s/06d6fb9f305469caee1368fa1bc3e4acc99488023d233b837e62b90386453074" protocol=ttrpc version=3 Jan 20 01:52:36.082000 audit: BPF prog-id=98 op=LOAD Jan 20 01:52:36.126000 audit: BPF prog-id=99 op=LOAD Jan 20 01:52:36.126000 audit[2833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2705 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634316234616364616161373332613664313764616630336131626232 Jan 20 01:52:36.126000 audit: BPF prog-id=99 op=UNLOAD Jan 20 01:52:36.126000 audit[2833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2705 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.126000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634316234616364616161373332613664313764616630336131626232 Jan 20 01:52:36.126000 audit: BPF prog-id=100 op=LOAD Jan 20 01:52:36.126000 audit[2833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2705 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634316234616364616161373332613664313764616630336131626232 Jan 20 01:52:36.126000 audit: BPF prog-id=101 op=LOAD Jan 20 01:52:36.126000 audit[2833]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2705 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634316234616364616161373332613664313764616630336131626232 Jan 20 01:52:36.126000 audit: BPF prog-id=101 op=UNLOAD Jan 20 01:52:36.126000 audit[2833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2705 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:52:36.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634316234616364616161373332613664313764616630336131626232 Jan 20 01:52:36.126000 audit: BPF prog-id=100 op=UNLOAD Jan 20 01:52:36.126000 audit[2833]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2705 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634316234616364616161373332613664313764616630336131626232 Jan 20 01:52:36.126000 audit: BPF prog-id=102 op=LOAD Jan 20 01:52:36.126000 audit[2833]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2705 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634316234616364616161373332613664313764616630336131626232 Jan 20 01:52:36.145310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223938327.mount: Deactivated successfully. 
Jan 20 01:52:36.226459 systemd[1]: Started cri-containerd-60ab6fc963f09feb83ca7ae3e56f5951a864ea2a2b9354cabddf47509ceca15f.scope - libcontainer container 60ab6fc963f09feb83ca7ae3e56f5951a864ea2a2b9354cabddf47509ceca15f. Jan 20 01:52:36.248000 audit: BPF prog-id=103 op=LOAD Jan 20 01:52:36.248000 audit: BPF prog-id=104 op=LOAD Jan 20 01:52:36.248000 audit[2844]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2725 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353035616265633563336533663361306361333237376233666462 Jan 20 01:52:36.248000 audit: BPF prog-id=104 op=UNLOAD Jan 20 01:52:36.248000 audit[2844]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353035616265633563336533663361306361333237376233666462 Jan 20 01:52:36.248000 audit: BPF prog-id=105 op=LOAD Jan 20 01:52:36.248000 audit[2844]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2725 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:52:36.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353035616265633563336533663361306361333237376233666462 Jan 20 01:52:36.248000 audit: BPF prog-id=106 op=LOAD Jan 20 01:52:36.248000 audit[2844]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2725 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353035616265633563336533663361306361333237376233666462 Jan 20 01:52:36.248000 audit: BPF prog-id=106 op=UNLOAD Jan 20 01:52:36.248000 audit[2844]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353035616265633563336533663361306361333237376233666462 Jan 20 01:52:36.248000 audit: BPF prog-id=105 op=UNLOAD Jan 20 01:52:36.248000 audit[2844]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353035616265633563336533663361306361333237376233666462 Jan 20 01:52:36.248000 audit: BPF prog-id=107 op=LOAD Jan 20 01:52:36.248000 audit[2844]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2725 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353035616265633563336533663361306361333237376233666462 Jan 20 01:52:36.661000 audit: BPF prog-id=108 op=LOAD Jan 20 01:52:36.663000 audit: BPF prog-id=109 op=LOAD Jan 20 01:52:36.663000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2742 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630616236666339363366303966656238336361376165336535366635 Jan 20 01:52:36.668000 audit: BPF prog-id=109 op=UNLOAD Jan 20 01:52:36.668000 audit[2871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2742 pid=2871 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630616236666339363366303966656238336361376165336535366635 Jan 20 01:52:36.669000 audit: BPF prog-id=110 op=LOAD Jan 20 01:52:36.669000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2742 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630616236666339363366303966656238336361376165336535366635 Jan 20 01:52:36.678813 kubelet[2652]: E0120 01:52:36.676892 2652 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 01:52:36.679000 audit: BPF prog-id=111 op=LOAD Jan 20 01:52:36.679000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2742 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.679000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630616236666339363366303966656238336361376165336535366635 Jan 20 01:52:36.679000 audit: BPF prog-id=111 op=UNLOAD Jan 20 01:52:36.679000 audit[2871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2742 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.679000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630616236666339363366303966656238336361376165336535366635 Jan 20 01:52:36.680000 audit: BPF prog-id=110 op=UNLOAD Jan 20 01:52:36.680000 audit[2871]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2742 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:52:36.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630616236666339363366303966656238336361376165336535366635 Jan 20 01:52:36.680000 audit: BPF prog-id=112 op=LOAD Jan 20 01:52:36.680000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2742 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:52:36.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630616236666339363366303966656238336361376165336535366635 Jan 20 01:52:36.725715 kubelet[2652]: E0120 01:52:36.725185 2652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="6.4s" Jan 20 01:52:37.274381 containerd[1641]: time="2026-01-20T01:52:37.273115723Z" level=info msg="StartContainer for \"641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499\" returns successfully" Jan 20 01:52:37.346483 containerd[1641]: time="2026-01-20T01:52:37.341786460Z" level=info msg="StartContainer for \"24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0\" returns successfully" Jan 20 01:52:37.673154 containerd[1641]: time="2026-01-20T01:52:37.659884080Z" level=info msg="StartContainer for \"60ab6fc963f09feb83ca7ae3e56f5951a864ea2a2b9354cabddf47509ceca15f\" returns successfully" Jan 20 01:52:37.699913 kubelet[2652]: E0120 01:52:37.699833 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 01:52:37.731565 kubelet[2652]: I0120 01:52:37.731448 2652 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:52:37.754070 kubelet[2652]: E0120 01:52:37.747265 2652 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: 
connection refused" node="localhost" Jan 20 01:52:38.192679 kubelet[2652]: E0120 01:52:38.184245 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:38.192679 kubelet[2652]: E0120 01:52:38.184569 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:38.216702 kubelet[2652]: E0120 01:52:38.216659 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:38.220895 kubelet[2652]: E0120 01:52:38.220862 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:38.265010 kubelet[2652]: E0120 01:52:38.264929 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:38.265539 kubelet[2652]: E0120 01:52:38.265518 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:38.484191 kubelet[2652]: E0120 01:52:38.484054 2652 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 01:52:39.294115 kubelet[2652]: E0120 01:52:39.290870 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" 
Jan 20 01:52:39.294115 kubelet[2652]: E0120 01:52:39.291394 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:39.299119 kubelet[2652]: E0120 01:52:39.299060 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:39.299311 kubelet[2652]: E0120 01:52:39.299259 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:39.373391 kubelet[2652]: E0120 01:52:39.372488 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:39.373391 kubelet[2652]: E0120 01:52:39.372763 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:40.329822 kubelet[2652]: E0120 01:52:40.328628 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:40.329822 kubelet[2652]: E0120 01:52:40.329121 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:40.334861 kubelet[2652]: E0120 01:52:40.333555 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:40.335134 kubelet[2652]: E0120 01:52:40.334291 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:41.196041 kubelet[2652]: E0120 01:52:41.193265 2652 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 20 01:52:41.343149 kubelet[2652]: E0120 01:52:41.340892 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:41.343149 kubelet[2652]: E0120 01:52:41.341275 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:41.808196 kubelet[2652]: E0120 01:52:41.805882 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:41.813813 kubelet[2652]: E0120 01:52:41.810209 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:44.487246 kubelet[2652]: I0120 01:52:44.475654 2652 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:52:47.318558 kubelet[2652]: E0120 01:52:47.318469 2652 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 20 01:52:47.329274 kubelet[2652]: E0120 01:52:47.329008 2652 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 20 01:52:47.329274 kubelet[2652]: E0120 01:52:47.329202 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:47.382173 kubelet[2652]: I0120 
01:52:47.382124 2652 apiserver.go:52] "Watching apiserver" Jan 20 01:52:47.680431 kubelet[2652]: I0120 01:52:47.674707 2652 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 01:52:47.964786 kubelet[2652]: I0120 01:52:47.942576 2652 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:52:47.964786 kubelet[2652]: E0120 01:52:47.943029 2652 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 20 01:52:47.964786 kubelet[2652]: E0120 01:52:47.963230 2652 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4d7c657da040 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:52:30.3237776 +0000 UTC m=+3.123748774,LastTimestamp:2026-01-20 01:52:30.3237776 +0000 UTC m=+3.123748774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:52:47.981630 kubelet[2652]: I0120 01:52:47.981130 2652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 01:52:50.474493 kubelet[2652]: E0120 01:52:50.441441 2652 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188c4d7c6dc7f3dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-20 01:52:30.462866396 +0000 UTC m=+3.262837570,LastTimestamp:2026-01-20 01:52:30.462866396 +0000 UTC m=+3.262837570,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 20 01:52:50.936686 kubelet[2652]: E0120 01:52:50.895785 2652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 20 01:52:50.936686 kubelet[2652]: I0120 01:52:50.926505 2652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:52:51.335116 kubelet[2652]: E0120 01:52:51.333719 2652 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 20 01:52:51.335116 kubelet[2652]: I0120 01:52:51.333865 2652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 20 01:52:51.880643 kubelet[2652]: I0120 01:52:51.843671 2652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 20 01:52:52.227567 kubelet[2652]: E0120 01:52:52.211267 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:52.716165 kubelet[2652]: E0120 01:52:52.690524 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:52.814759 kubelet[2652]: E0120 01:52:52.813799 2652 status_manager.go:1018] "Failed to get status for pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" podUID="07ca0cbf79ad6ba9473d8e9f7715e571" pod="kube-system/kube-scheduler-localhost" Jan 20 01:52:53.953563 kubelet[2652]: E0120 01:52:53.915681 2652 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.124s" Jan 20 01:52:54.747447 kubelet[2652]: E0120 01:52:54.745556 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:54.819427 kubelet[2652]: E0120 01:52:54.798915 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:52:59.294863 kubelet[2652]: I0120 01:52:59.223484 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.223460945 podStartE2EDuration="7.223460945s" podCreationTimestamp="2026-01-20 01:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:52:59.22296363 +0000 UTC m=+32.022934794" watchObservedRunningTime="2026-01-20 01:52:59.223460945 +0000 UTC m=+32.023432110" Jan 20 01:52:59.294863 kubelet[2652]: I0120 01:52:59.223820 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.223808298 podStartE2EDuration="8.223808298s" podCreationTimestamp="2026-01-20 01:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:52:57.227635956 +0000 UTC 
m=+30.027607130" watchObservedRunningTime="2026-01-20 01:52:59.223808298 +0000 UTC m=+32.023779472" Jan 20 01:53:27.501981 kubelet[2652]: I0120 01:53:27.492256 2652 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:53:27.734780 kubelet[2652]: E0120 01:53:27.732588 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:28.078946 kubelet[2652]: E0120 01:53:28.078240 2652 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:29.013551 systemd[1]: Reload requested from client PID 2948 ('systemctl') (unit session-9.scope)... Jan 20 01:53:29.013605 systemd[1]: Reloading... Jan 20 01:53:30.269914 zram_generator::config[3000]: No configuration found. Jan 20 01:53:31.499901 systemd[1]: Reloading finished in 2485 ms. Jan 20 01:53:31.754315 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:53:31.832721 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:53:31.850669 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 20 01:53:31.850828 kernel: audit: type=1131 audit(1768874011.830:412): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:53:31.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:53:31.836864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:53:31.836955 systemd[1]: kubelet.service: Consumed 8.694s CPU time, 132.6M memory peak. 
Jan 20 01:53:31.872719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:53:31.883000 audit: BPF prog-id=113 op=LOAD Jan 20 01:53:31.923608 kernel: audit: type=1334 audit(1768874011.883:413): prog-id=113 op=LOAD Jan 20 01:53:31.923839 kernel: audit: type=1334 audit(1768874011.883:414): prog-id=69 op=UNLOAD Jan 20 01:53:31.883000 audit: BPF prog-id=69 op=UNLOAD Jan 20 01:53:31.945294 kernel: audit: type=1334 audit(1768874011.898:415): prog-id=114 op=LOAD Jan 20 01:53:31.898000 audit: BPF prog-id=114 op=LOAD Jan 20 01:53:31.900000 audit: BPF prog-id=72 op=UNLOAD Jan 20 01:53:31.900000 audit: BPF prog-id=115 op=LOAD Jan 20 01:53:31.900000 audit: BPF prog-id=116 op=LOAD Jan 20 01:53:31.900000 audit: BPF prog-id=73 op=UNLOAD Jan 20 01:53:31.946442 kernel: audit: type=1334 audit(1768874011.900:416): prog-id=72 op=UNLOAD Jan 20 01:53:31.946560 kernel: audit: type=1334 audit(1768874011.900:417): prog-id=115 op=LOAD Jan 20 01:53:31.946666 kernel: audit: type=1334 audit(1768874011.900:418): prog-id=116 op=LOAD Jan 20 01:53:31.958656 kernel: audit: type=1334 audit(1768874011.900:419): prog-id=73 op=UNLOAD Jan 20 01:53:31.958875 kernel: audit: type=1334 audit(1768874011.900:420): prog-id=74 op=UNLOAD Jan 20 01:53:31.958994 kernel: audit: type=1334 audit(1768874011.913:421): prog-id=117 op=LOAD Jan 20 01:53:31.900000 audit: BPF prog-id=74 op=UNLOAD Jan 20 01:53:31.913000 audit: BPF prog-id=117 op=LOAD Jan 20 01:53:31.913000 audit: BPF prog-id=63 op=UNLOAD Jan 20 01:53:31.914000 audit: BPF prog-id=118 op=LOAD Jan 20 01:53:31.914000 audit: BPF prog-id=119 op=LOAD Jan 20 01:53:31.914000 audit: BPF prog-id=64 op=UNLOAD Jan 20 01:53:31.914000 audit: BPF prog-id=65 op=UNLOAD Jan 20 01:53:31.930000 audit: BPF prog-id=120 op=LOAD Jan 20 01:53:31.930000 audit: BPF prog-id=66 op=UNLOAD Jan 20 01:53:31.930000 audit: BPF prog-id=121 op=LOAD Jan 20 01:53:31.931000 audit: BPF prog-id=122 op=LOAD Jan 20 01:53:31.931000 audit: BPF prog-id=67 op=UNLOAD Jan 
20 01:53:31.931000 audit: BPF prog-id=68 op=UNLOAD Jan 20 01:53:31.934000 audit: BPF prog-id=123 op=LOAD Jan 20 01:53:31.934000 audit: BPF prog-id=82 op=UNLOAD Jan 20 01:53:31.951000 audit: BPF prog-id=124 op=LOAD Jan 20 01:53:31.951000 audit: BPF prog-id=79 op=UNLOAD Jan 20 01:53:31.951000 audit: BPF prog-id=125 op=LOAD Jan 20 01:53:31.951000 audit: BPF prog-id=126 op=LOAD Jan 20 01:53:31.951000 audit: BPF prog-id=80 op=UNLOAD Jan 20 01:53:31.951000 audit: BPF prog-id=81 op=UNLOAD Jan 20 01:53:31.974000 audit: BPF prog-id=127 op=LOAD Jan 20 01:53:31.977000 audit: BPF prog-id=128 op=LOAD Jan 20 01:53:31.977000 audit: BPF prog-id=70 op=UNLOAD Jan 20 01:53:31.977000 audit: BPF prog-id=71 op=UNLOAD Jan 20 01:53:31.977000 audit: BPF prog-id=129 op=LOAD Jan 20 01:53:31.977000 audit: BPF prog-id=78 op=UNLOAD Jan 20 01:53:31.991000 audit: BPF prog-id=130 op=LOAD Jan 20 01:53:31.991000 audit: BPF prog-id=75 op=UNLOAD Jan 20 01:53:31.991000 audit: BPF prog-id=131 op=LOAD Jan 20 01:53:31.991000 audit: BPF prog-id=132 op=LOAD Jan 20 01:53:31.991000 audit: BPF prog-id=76 op=UNLOAD Jan 20 01:53:31.991000 audit: BPF prog-id=77 op=UNLOAD Jan 20 01:53:33.796158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:53:33.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:53:33.888567 (kubelet)[3041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:53:34.622781 kubelet[3041]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:53:34.623544 kubelet[3041]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:53:34.623759 kubelet[3041]: I0120 01:53:34.623716 3041 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:53:34.762030 kubelet[3041]: I0120 01:53:34.758223 3041 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 20 01:53:34.762030 kubelet[3041]: I0120 01:53:34.758265 3041 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:53:34.762030 kubelet[3041]: I0120 01:53:34.758304 3041 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 20 01:53:34.762030 kubelet[3041]: I0120 01:53:34.758313 3041 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:53:34.798264 kubelet[3041]: I0120 01:53:34.763385 3041 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 01:53:34.798264 kubelet[3041]: I0120 01:53:34.765075 3041 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 01:53:34.830209 kubelet[3041]: I0120 01:53:34.829002 3041 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:53:34.906414 kubelet[3041]: I0120 01:53:34.903851 3041 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 01:53:34.997200 kubelet[3041]: I0120 01:53:34.991579 3041 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 20 01:53:34.997200 kubelet[3041]: I0120 01:53:34.991968 3041 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:53:34.997200 kubelet[3041]: I0120 01:53:34.992010 3041 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:53:34.997200 kubelet[3041]: I0120 01:53:34.992324 3041 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:53:34.997849 
kubelet[3041]: I0120 01:53:34.992397 3041 container_manager_linux.go:306] "Creating device plugin manager" Jan 20 01:53:34.997849 kubelet[3041]: I0120 01:53:34.992442 3041 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 20 01:53:35.024979 kubelet[3041]: I0120 01:53:35.024130 3041 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:53:35.031602 kubelet[3041]: I0120 01:53:35.027758 3041 kubelet.go:475] "Attempting to sync node with API server" Jan 20 01:53:35.031602 kubelet[3041]: I0120 01:53:35.027850 3041 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:53:35.031602 kubelet[3041]: I0120 01:53:35.027886 3041 kubelet.go:387] "Adding apiserver pod source" Jan 20 01:53:35.031602 kubelet[3041]: I0120 01:53:35.027948 3041 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:53:35.047715 kubelet[3041]: I0120 01:53:35.046443 3041 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 01:53:35.062411 kubelet[3041]: I0120 01:53:35.052100 3041 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 01:53:35.062411 kubelet[3041]: I0120 01:53:35.058637 3041 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 20 01:53:35.138256 kubelet[3041]: I0120 01:53:35.137950 3041 server.go:1262] "Started kubelet" Jan 20 01:53:35.160297 kubelet[3041]: I0120 01:53:35.145445 3041 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:53:35.160297 kubelet[3041]: I0120 01:53:35.148313 3041 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:53:35.160297 kubelet[3041]: I0120 01:53:35.148425 3041 server_v1.go:49] 
"podresources" method="list" useActivePods=true Jan 20 01:53:35.160297 kubelet[3041]: I0120 01:53:35.149014 3041 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:53:35.160297 kubelet[3041]: I0120 01:53:35.151140 3041 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:53:35.160297 kubelet[3041]: I0120 01:53:35.156740 3041 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:53:35.200684 kubelet[3041]: I0120 01:53:35.190277 3041 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 20 01:53:35.222438 kubelet[3041]: I0120 01:53:35.210064 3041 server.go:310] "Adding debug handlers to kubelet server" Jan 20 01:53:35.222438 kubelet[3041]: I0120 01:53:35.218913 3041 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 20 01:53:35.222438 kubelet[3041]: I0120 01:53:35.219209 3041 reconciler.go:29] "Reconciler: start to sync state" Jan 20 01:53:35.286609 kubelet[3041]: I0120 01:53:35.271707 3041 factory.go:223] Registration of the containerd container factory successfully Jan 20 01:53:35.286609 kubelet[3041]: I0120 01:53:35.271762 3041 factory.go:223] Registration of the systemd container factory successfully Jan 20 01:53:35.286609 kubelet[3041]: I0120 01:53:35.274738 3041 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:53:35.290162 kubelet[3041]: E0120 01:53:35.290086 3041 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:53:35.474768 kubelet[3041]: I0120 01:53:35.474283 3041 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 20 01:53:35.527656 kubelet[3041]: I0120 01:53:35.525789 3041 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 20 01:53:35.527656 kubelet[3041]: I0120 01:53:35.525871 3041 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 20 01:53:35.527656 kubelet[3041]: I0120 01:53:35.525912 3041 kubelet.go:2427] "Starting kubelet main sync loop" Jan 20 01:53:35.527656 kubelet[3041]: E0120 01:53:35.526011 3041 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:53:35.643146 kubelet[3041]: E0120 01:53:35.641708 3041 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:53:35.844738 kubelet[3041]: E0120 01:53:35.842141 3041 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:53:35.947065 kubelet[3041]: I0120 01:53:35.947025 3041 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:53:35.955227 kubelet[3041]: I0120 01:53:35.948785 3041 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:53:35.955227 kubelet[3041]: I0120 01:53:35.951934 3041 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:53:35.955227 kubelet[3041]: I0120 01:53:35.952205 3041 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:53:35.955227 kubelet[3041]: I0120 01:53:35.952222 3041 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:53:35.955227 kubelet[3041]: I0120 01:53:35.952246 3041 policy_none.go:49] "None policy: Start" Jan 20 01:53:35.955227 kubelet[3041]: I0120 01:53:35.952261 3041 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 20 01:53:35.955689 kubelet[3041]: I0120 01:53:35.955667 3041 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 
20 01:53:35.961796 kubelet[3041]: I0120 01:53:35.961759 3041 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 20 01:53:35.962018 kubelet[3041]: I0120 01:53:35.961999 3041 policy_none.go:47] "Start" Jan 20 01:53:35.984449 kubelet[3041]: E0120 01:53:35.984214 3041 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 01:53:35.984788 kubelet[3041]: I0120 01:53:35.984765 3041 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:53:35.984992 kubelet[3041]: I0120 01:53:35.984946 3041 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:53:35.986927 kubelet[3041]: I0120 01:53:35.986321 3041 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:53:35.993469 kubelet[3041]: E0120 01:53:35.991239 3041 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 01:53:35.994267 kubelet[3041]: I0120 01:53:35.994241 3041 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:53:36.007028 containerd[1641]: time="2026-01-20T01:53:36.001168998Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 20 01:53:36.012803 kubelet[3041]: I0120 01:53:36.009065 3041 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:53:36.034733 kubelet[3041]: I0120 01:53:36.032491 3041 apiserver.go:52] "Watching apiserver" Jan 20 01:53:36.170182 kubelet[3041]: I0120 01:53:36.169440 3041 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 20 01:53:36.267002 kubelet[3041]: I0120 01:53:36.264182 3041 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 20 01:53:36.335765 kubelet[3041]: I0120 01:53:36.327006 3041 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 01:53:36.361452 kubelet[3041]: I0120 01:53:36.360311 3041 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 20 01:53:36.361452 kubelet[3041]: I0120 01:53:36.360488 3041 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 20 01:53:36.413035 kubelet[3041]: E0120 01:53:36.412461 3041 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 20 01:53:36.413035 kubelet[3041]: I0120 01:53:36.412649 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a43bbb42a15116af39c1485fb7b13793-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a43bbb42a15116af39c1485fb7b13793\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:53:36.413035 kubelet[3041]: I0120 01:53:36.412674 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a43bbb42a15116af39c1485fb7b13793-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a43bbb42a15116af39c1485fb7b13793\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:53:36.413035 
kubelet[3041]: I0120 01:53:36.412704 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a43bbb42a15116af39c1485fb7b13793-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a43bbb42a15116af39c1485fb7b13793\") " pod="kube-system/kube-apiserver-localhost" Jan 20 01:53:36.413035 kubelet[3041]: I0120 01:53:36.412732 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:53:36.413035 kubelet[3041]: I0120 01:53:36.412756 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:53:36.413479 kubelet[3041]: I0120 01:53:36.412780 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:53:36.413479 kubelet[3041]: I0120 01:53:36.412803 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 20 
01:53:36.413479 kubelet[3041]: I0120 01:53:36.412874 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:53:36.413479 kubelet[3041]: I0120 01:53:36.412904 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 20 01:53:36.587770 kubelet[3041]: E0120 01:53:36.580816 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:36.587770 kubelet[3041]: E0120 01:53:36.581373 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:36.648507 kubelet[3041]: E0120 01:53:36.645525 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:36.721544 kubelet[3041]: E0120 01:53:36.721081 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:38.286658 kubelet[3041]: E0120 01:53:38.277576 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 20 01:53:39.823831 kubelet[3041]: E0120 01:53:39.823765 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:39.892312 kubelet[3041]: E0120 01:53:39.884600 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:39.892312 kubelet[3041]: E0120 01:53:39.885198 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:42.025788 kubelet[3041]: E0120 01:53:42.024914 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:42.103294 kubelet[3041]: E0120 01:53:42.082430 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:43.140186 kubelet[3041]: E0120 01:53:43.134539 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:43.284267 kubelet[3041]: E0120 01:53:43.279092 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:43.388532 kubelet[3041]: I0120 01:53:43.386286 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59326da8-cddd-46d5-add5-23c4ad72a387-xtables-lock\") pod \"kube-proxy-9rjl4\" (UID: 
\"59326da8-cddd-46d5-add5-23c4ad72a387\") " pod="kube-system/kube-proxy-9rjl4" Jan 20 01:53:43.388532 kubelet[3041]: I0120 01:53:43.386415 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59326da8-cddd-46d5-add5-23c4ad72a387-kube-proxy\") pod \"kube-proxy-9rjl4\" (UID: \"59326da8-cddd-46d5-add5-23c4ad72a387\") " pod="kube-system/kube-proxy-9rjl4" Jan 20 01:53:43.388532 kubelet[3041]: I0120 01:53:43.386443 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59326da8-cddd-46d5-add5-23c4ad72a387-lib-modules\") pod \"kube-proxy-9rjl4\" (UID: \"59326da8-cddd-46d5-add5-23c4ad72a387\") " pod="kube-system/kube-proxy-9rjl4" Jan 20 01:53:43.388532 kubelet[3041]: I0120 01:53:43.386466 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspgr\" (UniqueName: \"kubernetes.io/projected/59326da8-cddd-46d5-add5-23c4ad72a387-kube-api-access-dspgr\") pod \"kube-proxy-9rjl4\" (UID: \"59326da8-cddd-46d5-add5-23c4ad72a387\") " pod="kube-system/kube-proxy-9rjl4" Jan 20 01:53:43.516393 systemd[1]: Created slice kubepods-besteffort-pod59326da8_cddd_46d5_add5_23c4ad72a387.slice - libcontainer container kubepods-besteffort-pod59326da8_cddd_46d5_add5_23c4ad72a387.slice. 
Jan 20 01:53:44.610643 kubelet[3041]: E0120 01:53:44.601047 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:44.651239 containerd[1641]: time="2026-01-20T01:53:44.650818759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rjl4,Uid:59326da8-cddd-46d5-add5-23c4ad72a387,Namespace:kube-system,Attempt:0,}" Jan 20 01:53:45.289489 containerd[1641]: time="2026-01-20T01:53:45.287905468Z" level=info msg="connecting to shim b3ceaacb1ce0f3d6917ef5382e91c158934a557b1e4b06f7a3aefc61e70a3102" address="unix:///run/containerd/s/219a58f211f34a6b298a7937f7c0a5a32bf29c12b4d2fcee6a9fe63a2de4da2e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:53:46.012508 systemd[1]: Started cri-containerd-b3ceaacb1ce0f3d6917ef5382e91c158934a557b1e4b06f7a3aefc61e70a3102.scope - libcontainer container b3ceaacb1ce0f3d6917ef5382e91c158934a557b1e4b06f7a3aefc61e70a3102. 
Jan 20 01:53:46.611000 audit: BPF prog-id=133 op=LOAD Jan 20 01:53:46.733684 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 01:53:46.733798 kernel: audit: type=1334 audit(1768874026.611:454): prog-id=133 op=LOAD Jan 20 01:53:46.823000 audit: BPF prog-id=134 op=LOAD Jan 20 01:53:47.021960 kernel: audit: type=1334 audit(1768874026.823:455): prog-id=134 op=LOAD Jan 20 01:53:47.398769 kernel: audit: type=1300 audit(1768874026.823:455): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:47.398889 kernel: audit: type=1327 audit(1768874026.823:455): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 Jan 20 01:53:47.398926 kernel: audit: type=1334 audit(1768874026.823:456): prog-id=134 op=UNLOAD Jan 20 01:53:47.399029 kernel: audit: type=1300 audit(1768874026.823:456): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:47.415116 kernel: audit: type=1327 audit(1768874026.823:456): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 Jan 20 01:53:47.415546 kernel: audit: type=1334 audit(1768874026.823:457): prog-id=135 op=LOAD Jan 20 01:53:47.415607 kernel: audit: type=1300 
audit(1768874026.823:457): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:46.823000 audit[3111]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:46.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 Jan 20 01:53:46.823000 audit: BPF prog-id=134 op=UNLOAD Jan 20 01:53:46.823000 audit[3111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:46.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 Jan 20 01:53:46.823000 audit: BPF prog-id=135 op=LOAD Jan 20 01:53:46.823000 audit[3111]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:53:47.498491 kernel: audit: type=1327 audit(1768874026.823:457): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 Jan 20 01:53:46.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 Jan 20 01:53:46.823000 audit: BPF prog-id=136 op=LOAD Jan 20 01:53:46.823000 audit[3111]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:46.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 Jan 20 01:53:46.823000 audit: BPF prog-id=136 op=UNLOAD Jan 20 01:53:46.823000 audit[3111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:46.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 
Jan 20 01:53:46.823000 audit: BPF prog-id=135 op=UNLOAD Jan 20 01:53:46.823000 audit[3111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:46.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 Jan 20 01:53:46.823000 audit: BPF prog-id=137 op=LOAD Jan 20 01:53:46.823000 audit[3111]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3100 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:46.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233636561616362316365306633643639313765663533383265393163 Jan 20 01:53:48.745392 containerd[1641]: time="2026-01-20T01:53:48.741928579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rjl4,Uid:59326da8-cddd-46d5-add5-23c4ad72a387,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3ceaacb1ce0f3d6917ef5382e91c158934a557b1e4b06f7a3aefc61e70a3102\"" Jan 20 01:53:48.783655 kubelet[3041]: E0120 01:53:48.777915 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:48.909905 containerd[1641]: time="2026-01-20T01:53:48.901938543Z" 
level=info msg="CreateContainer within sandbox \"b3ceaacb1ce0f3d6917ef5382e91c158934a557b1e4b06f7a3aefc61e70a3102\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:53:49.568738 containerd[1641]: time="2026-01-20T01:53:49.568679153Z" level=info msg="Container 7fac54481a13836563ab33c11c75998e9551ab05c40948617e9070b3a8a78643: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:53:50.224005 containerd[1641]: time="2026-01-20T01:53:50.223833168Z" level=info msg="CreateContainer within sandbox \"b3ceaacb1ce0f3d6917ef5382e91c158934a557b1e4b06f7a3aefc61e70a3102\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7fac54481a13836563ab33c11c75998e9551ab05c40948617e9070b3a8a78643\"" Jan 20 01:53:50.297979 containerd[1641]: time="2026-01-20T01:53:50.297856655Z" level=info msg="StartContainer for \"7fac54481a13836563ab33c11c75998e9551ab05c40948617e9070b3a8a78643\"" Jan 20 01:53:50.416439 kubelet[3041]: E0120 01:53:50.414751 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:50.432563 containerd[1641]: time="2026-01-20T01:53:50.432510677Z" level=info msg="connecting to shim 7fac54481a13836563ab33c11c75998e9551ab05c40948617e9070b3a8a78643" address="unix:///run/containerd/s/219a58f211f34a6b298a7937f7c0a5a32bf29c12b4d2fcee6a9fe63a2de4da2e" protocol=ttrpc version=3 Jan 20 01:53:51.071632 systemd[1]: Started cri-containerd-7fac54481a13836563ab33c11c75998e9551ab05c40948617e9070b3a8a78643.scope - libcontainer container 7fac54481a13836563ab33c11c75998e9551ab05c40948617e9070b3a8a78643. 
Jan 20 01:53:51.705000 audit: BPF prog-id=138 op=LOAD Jan 20 01:53:51.713507 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 01:53:51.713607 kernel: audit: type=1334 audit(1768874031.705:462): prog-id=138 op=LOAD Jan 20 01:53:51.744437 kernel: audit: type=1300 audit(1768874031.705:462): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3100 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:51.705000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3100 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:51.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766616335343438316131333833363536336162333363313163373539 Jan 20 01:53:51.943448 kernel: audit: type=1327 audit(1768874031.705:462): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766616335343438316131333833363536336162333363313163373539 Jan 20 01:53:51.943772 kernel: audit: type=1334 audit(1768874031.705:463): prog-id=139 op=LOAD Jan 20 01:53:51.705000 audit: BPF prog-id=139 op=LOAD Jan 20 01:53:51.705000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3100 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:52.021546 kernel: audit: type=1300 audit(1768874031.705:463): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3100 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:51.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766616335343438316131333833363536336162333363313163373539 Jan 20 01:53:52.120526 kernel: audit: type=1327 audit(1768874031.705:463): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766616335343438316131333833363536336162333363313163373539 Jan 20 01:53:51.705000 audit: BPF prog-id=139 op=UNLOAD Jan 20 01:53:52.248946 kernel: audit: type=1334 audit(1768874031.705:464): prog-id=139 op=UNLOAD Jan 20 01:53:52.263845 kernel: audit: type=1300 audit(1768874031.705:464): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3100 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:52.263967 kernel: audit: type=1327 audit(1768874031.705:464): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766616335343438316131333833363536336162333363313163373539 Jan 20 01:53:51.705000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3100 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:51.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766616335343438316131333833363536336162333363313163373539 Jan 20 01:53:52.391415 kernel: audit: type=1334 audit(1768874031.705:465): prog-id=138 op=UNLOAD Jan 20 01:53:51.705000 audit: BPF prog-id=138 op=UNLOAD Jan 20 01:53:51.705000 audit[3145]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3100 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:51.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766616335343438316131333833363536336162333363313163373539 Jan 20 01:53:51.705000 audit: BPF prog-id=140 op=LOAD Jan 20 01:53:51.705000 audit[3145]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3100 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:51.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766616335343438316131333833363536336162333363313163373539 Jan 20 01:53:53.959958 
containerd[1641]: time="2026-01-20T01:53:53.954872616Z" level=info msg="StartContainer for \"7fac54481a13836563ab33c11c75998e9551ab05c40948617e9070b3a8a78643\" returns successfully" Jan 20 01:53:54.458960 systemd[1]: Created slice kubepods-besteffort-podc60a94b4_9886_4101_9758_7a8caa1ce174.slice - libcontainer container kubepods-besteffort-podc60a94b4_9886_4101_9758_7a8caa1ce174.slice. Jan 20 01:53:54.519436 kubelet[3041]: I0120 01:53:54.518906 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7588\" (UniqueName: \"kubernetes.io/projected/c60a94b4-9886-4101-9758-7a8caa1ce174-kube-api-access-c7588\") pod \"tigera-operator-65cdcdfd6d-n7xvb\" (UID: \"c60a94b4-9886-4101-9758-7a8caa1ce174\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-n7xvb" Jan 20 01:53:54.519436 kubelet[3041]: I0120 01:53:54.519109 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c60a94b4-9886-4101-9758-7a8caa1ce174-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-n7xvb\" (UID: \"c60a94b4-9886-4101-9758-7a8caa1ce174\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-n7xvb" Jan 20 01:53:55.085886 kubelet[3041]: E0120 01:53:55.085294 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:55.269876 systemd[1722]: Created slice background.slice - User Background Tasks Slice. Jan 20 01:53:55.287223 systemd[1722]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... 
Jan 20 01:53:55.316215 containerd[1641]: time="2026-01-20T01:53:55.313665189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-n7xvb,Uid:c60a94b4-9886-4101-9758-7a8caa1ce174,Namespace:tigera-operator,Attempt:0,}" Jan 20 01:53:55.471440 kubelet[3041]: I0120 01:53:55.468970 3041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9rjl4" podStartSLOduration=20.468946369 podStartE2EDuration="20.468946369s" podCreationTimestamp="2026-01-20 01:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:53:55.437632548 +0000 UTC m=+21.442321485" watchObservedRunningTime="2026-01-20 01:53:55.468946369 +0000 UTC m=+21.473635265" Jan 20 01:53:55.499603 systemd[1722]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Jan 20 01:53:55.639740 containerd[1641]: time="2026-01-20T01:53:55.639459328Z" level=info msg="connecting to shim 53ae73f36a5db7472eb3eae4332cb63fdcc3ce016b009f67a649e4f4395cdc23" address="unix:///run/containerd/s/5acd7907c2dc94f045e844aa3f740a0c25654e34d6d1b291eff1e6c7c6cb1edf" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:53:55.893013 systemd[1]: Started cri-containerd-53ae73f36a5db7472eb3eae4332cb63fdcc3ce016b009f67a649e4f4395cdc23.scope - libcontainer container 53ae73f36a5db7472eb3eae4332cb63fdcc3ce016b009f67a649e4f4395cdc23. 
Jan 20 01:53:56.108398 kubelet[3041]: E0120 01:53:56.106170 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:53:56.158000 audit: BPF prog-id=141 op=LOAD Jan 20 01:53:56.166000 audit: BPF prog-id=142 op=LOAD Jan 20 01:53:56.166000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3205 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533616537336633366135646237343732656233656165343333326362 Jan 20 01:53:56.166000 audit: BPF prog-id=142 op=UNLOAD Jan 20 01:53:56.166000 audit[3216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3205 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533616537336633366135646237343732656233656165343333326362 Jan 20 01:53:56.166000 audit: BPF prog-id=143 op=LOAD Jan 20 01:53:56.166000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3205 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533616537336633366135646237343732656233656165343333326362 Jan 20 01:53:56.166000 audit: BPF prog-id=144 op=LOAD Jan 20 01:53:56.166000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3205 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533616537336633366135646237343732656233656165343333326362 Jan 20 01:53:56.166000 audit: BPF prog-id=144 op=UNLOAD Jan 20 01:53:56.166000 audit[3216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3205 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533616537336633366135646237343732656233656165343333326362 Jan 20 01:53:56.166000 audit: BPF prog-id=143 op=UNLOAD Jan 20 01:53:56.166000 audit[3216]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3205 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533616537336633366135646237343732656233656165343333326362 Jan 20 01:53:56.167000 audit: BPF prog-id=145 op=LOAD Jan 20 01:53:56.167000 audit[3216]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3205 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.167000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533616537336633366135646237343732656233656165343333326362 Jan 20 01:53:56.672399 containerd[1641]: time="2026-01-20T01:53:56.668392030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-n7xvb,Uid:c60a94b4-9886-4101-9758-7a8caa1ce174,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"53ae73f36a5db7472eb3eae4332cb63fdcc3ce016b009f67a649e4f4395cdc23\"" Jan 20 01:53:56.684000 audit[3268]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:56.684000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9d63b660 a2=0 a3=7ffd9d63b64c items=0 ppid=3158 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.684000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 01:53:56.699289 containerd[1641]: time="2026-01-20T01:53:56.695142386Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 01:53:56.699000 audit[3269]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:56.699000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3bad6bb0 a2=0 a3=7ffc3bad6b9c items=0 ppid=3158 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.717135 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 01:53:56.717314 kernel: audit: type=1327 audit(1768874036.699:476): proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 01:53:56.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 01:53:56.753000 audit[3272]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:56.788462 kernel: audit: type=1325 audit(1768874036.753:477): table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:56.753000 audit[3272]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff72f55ca0 a2=0 a3=7fff72f55c8c items=0 ppid=3158 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.846636 kernel: audit: type=1300 audit(1768874036.753:477): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff72f55ca0 a2=0 
a3=7fff72f55c8c items=0 ppid=3158 pid=3272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 01:53:56.902522 kernel: audit: type=1327 audit(1768874036.753:477): proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 01:53:56.902670 kernel: audit: type=1325 audit(1768874036.774:478): table=nat:57 family=10 entries=1 op=nft_register_chain pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:56.774000 audit[3274]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:56.947833 kernel: audit: type=1300 audit(1768874036.774:478): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8d6a9a10 a2=0 a3=7fff8d6a99fc items=0 ppid=3158 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.774000 audit[3274]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8d6a9a10 a2=0 a3=7fff8d6a99fc items=0 ppid=3158 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.023330 kernel: audit: type=1327 audit(1768874036.774:478): proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 01:53:56.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 01:53:56.800000 audit[3277]: 
NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:56.800000 audit[3277]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcadd23e90 a2=0 a3=7ffcadd23e7c items=0 ppid=3158 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.083455 kernel: audit: type=1325 audit(1768874036.800:479): table=filter:58 family=10 entries=1 op=nft_register_chain pid=3277 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:57.083595 kernel: audit: type=1300 audit(1768874036.800:479): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcadd23e90 a2=0 a3=7ffcadd23e7c items=0 ppid=3158 pid=3277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.083905 kernel: audit: type=1327 audit(1768874036.800:479): proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 01:53:56.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 01:53:56.815000 audit[3273]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3273 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:56.815000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc76471a0 a2=0 a3=7fffc764718c items=0 ppid=3158 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.815000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 01:53:56.899000 audit[3279]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:56.899000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe87d6c9e0 a2=0 a3=7ffe87d6c9cc items=0 ppid=3158 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:56.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 01:53:57.059000 audit[3281]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3281 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.059000 audit[3281]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff71448990 a2=0 a3=7fff7144897c items=0 ppid=3158 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 20 01:53:57.075000 audit[3284]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.075000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffede4fe7c0 a2=0 a3=7ffede4fe7ac items=0 ppid=3158 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 20 01:53:57.081000 audit[3285]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.081000 audit[3285]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4dee6d00 a2=0 a3=7ffd4dee6cec items=0 ppid=3158 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.081000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 01:53:57.104000 audit[3287]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.104000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff421747c0 a2=0 a3=7fff421747ac items=0 ppid=3158 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 01:53:57.130000 audit[3288]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3288 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.130000 audit[3288]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd367888b0 a2=0 a3=7ffd3678889c items=0 ppid=3158 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 01:53:57.145000 audit[3290]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.145000 audit[3290]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc9ce0e0f0 a2=0 a3=7ffc9ce0e0dc items=0 ppid=3158 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:53:57.185000 audit[3293]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3293 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.185000 audit[3293]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc11c9c860 a2=0 a3=7ffc11c9c84c items=0 ppid=3158 pid=3293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.185000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:53:57.204000 audit[3294]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3294 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.204000 audit[3294]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed2192550 a2=0 a3=7ffed219253c items=0 ppid=3158 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 01:53:57.221000 audit[3296]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3296 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.221000 audit[3296]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff30b4e190 a2=0 a3=7fff30b4e17c items=0 ppid=3158 pid=3296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 01:53:57.239000 audit[3297]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.239000 audit[3297]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce2c2dce0 a2=0 a3=7ffce2c2dccc items=0 ppid=3158 
pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 01:53:57.260000 audit[3299]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.260000 audit[3299]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe51556e00 a2=0 a3=7ffe51556dec items=0 ppid=3158 pid=3299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 20 01:53:57.284000 audit[3302]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.284000 audit[3302]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc02c9a990 a2=0 a3=7ffc02c9a97c items=0 ppid=3158 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.284000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 
Jan 20 01:53:57.306000 audit[3305]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.306000 audit[3305]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd59e4d070 a2=0 a3=7ffd59e4d05c items=0 ppid=3158 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 20 01:53:57.321000 audit[3306]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.321000 audit[3306]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffffe1ce0e0 a2=0 a3=7ffffe1ce0cc items=0 ppid=3158 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.321000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 01:53:57.340000 audit[3308]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.340000 audit[3308]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc96bf8d60 a2=0 a3=7ffc96bf8d4c items=0 ppid=3158 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:53:57.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:53:57.377000 audit[3311]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.377000 audit[3311]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff6331a080 a2=0 a3=7fff6331a06c items=0 ppid=3158 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.377000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:53:57.399000 audit[3312]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.399000 audit[3312]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe646b22c0 a2=0 a3=7ffe646b22ac items=0 ppid=3158 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 01:53:57.412000 audit[3314]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 01:53:57.412000 audit[3314]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe5e32a4a0 a2=0 a3=7ffe5e32a48c items=0 ppid=3158 pid=3314 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 01:53:57.724000 audit[3320]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:53:57.724000 audit[3320]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd5dc10df0 a2=0 a3=7ffd5dc10ddc items=0 ppid=3158 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:53:57.788000 audit[3320]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:53:57.788000 audit[3320]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd5dc10df0 a2=0 a3=7ffd5dc10ddc items=0 ppid=3158 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:53:57.790000 audit[3326]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3326 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:57.790000 audit[3326]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=108 a0=3 a1=7ffd8bfd8250 a2=0 a3=7ffd8bfd823c items=0 ppid=3158 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 01:53:57.866000 audit[3328]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3328 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:57.866000 audit[3328]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff20526c80 a2=0 a3=7fff20526c6c items=0 ppid=3158 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 20 01:53:57.962000 audit[3331]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3331 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:57.962000 audit[3331]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdcd679d30 a2=0 a3=7ffdcd679d1c items=0 ppid=3158 pid=3331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.962000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 20 01:53:57.981000 audit[3332]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3332 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:57.981000 audit[3332]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6182f910 a2=0 a3=7ffc6182f8fc items=0 ppid=3158 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:57.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 01:53:58.108000 audit[3334]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3334 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.108000 audit[3334]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1cd81a90 a2=0 a3=7ffc1cd81a7c items=0 ppid=3158 pid=3334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.108000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 01:53:58.122000 audit[3335]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3335 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.122000 audit[3335]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffac1d59d0 
a2=0 a3=7fffac1d59bc items=0 ppid=3158 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.122000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 01:53:58.192000 audit[3337]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.192000 audit[3337]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe903070f0 a2=0 a3=7ffe903070dc items=0 ppid=3158 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:53:58.270000 audit[3340]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3340 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.270000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffef3c580c0 a2=0 a3=7ffef3c580ac items=0 ppid=3158 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.270000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:53:58.276000 audit[3341]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3341 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.276000 audit[3341]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5d0fe180 a2=0 a3=7fff5d0fe16c items=0 ppid=3158 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 01:53:58.316000 audit[3343]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.316000 audit[3343]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde70ea1a0 a2=0 a3=7ffde70ea18c items=0 ppid=3158 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.316000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 01:53:58.319000 audit[3344]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3344 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.319000 audit[3344]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd451525b0 a2=0 a3=7ffd4515259c items=0 
ppid=3158 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 01:53:58.330000 audit[3346]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3346 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.330000 audit[3346]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6a607bc0 a2=0 a3=7ffc6a607bac items=0 ppid=3158 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 20 01:53:58.392000 audit[3349]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.392000 audit[3349]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6fa12bc0 a2=0 a3=7fff6fa12bac items=0 ppid=3158 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.392000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 20 01:53:58.426000 audit[3352]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.426000 audit[3352]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc49009ea0 a2=0 a3=7ffc49009e8c items=0 ppid=3158 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 20 01:53:58.454000 audit[3353]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.454000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0f61d910 a2=0 a3=7fff0f61d8fc items=0 ppid=3158 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 01:53:58.544000 audit[3355]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.544000 audit[3355]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 
a0=3 a1=7ffc07570380 a2=0 a3=7ffc0757036c items=0 ppid=3158 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.544000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:53:58.622000 audit[3358]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.622000 audit[3358]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff05a6f220 a2=0 a3=7fff05a6f20c items=0 ppid=3158 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.622000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 01:53:58.639000 audit[3359]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.639000 audit[3359]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8e15acb0 a2=0 a3=7ffd8e15ac9c items=0 ppid=3158 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.639000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 01:53:58.680000 audit[3361]: NETFILTER_CFG table=nat:99 
family=10 entries=2 op=nft_register_chain pid=3361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.680000 audit[3361]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc438b3dc0 a2=0 a3=7ffc438b3dac items=0 ppid=3158 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 01:53:58.684000 audit[3362]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.684000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff45fcc660 a2=0 a3=7fff45fcc64c items=0 ppid=3158 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 01:53:58.706000 audit[3364]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.706000 audit[3364]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe50f4cf90 a2=0 a3=7ffe50f4cf7c items=0 ppid=3158 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.706000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:53:58.792000 audit[3368]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 01:53:58.792000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe4b05bee0 a2=0 a3=7ffe4b05becc items=0 ppid=3158 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.792000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 01:53:58.872000 audit[3370]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 01:53:58.872000 audit[3370]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff9fb28f90 a2=0 a3=7fff9fb28f7c items=0 ppid=3158 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:53:58.872000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:53:58.873000 audit[3370]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 01:53:58.873000 audit[3370]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff9fb28f90 a2=0 a3=7fff9fb28f7c items=0 ppid=3158 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
20 01:53:58.873000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:54:01.462908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236198786.mount: Deactivated successfully. Jan 20 01:54:14.502447 containerd[1641]: time="2026-01-20T01:54:14.502230846Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:54:14.507430 containerd[1641]: time="2026-01-20T01:54:14.507240118Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25052948" Jan 20 01:54:14.510050 containerd[1641]: time="2026-01-20T01:54:14.509808978Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:54:14.517032 containerd[1641]: time="2026-01-20T01:54:14.515313086Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:54:14.517032 containerd[1641]: time="2026-01-20T01:54:14.516758534Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 17.821544384s" Jan 20 01:54:14.517032 containerd[1641]: time="2026-01-20T01:54:14.516794202Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 01:54:14.544859 containerd[1641]: time="2026-01-20T01:54:14.540825614Z" level=info msg="CreateContainer within sandbox 
\"53ae73f36a5db7472eb3eae4332cb63fdcc3ce016b009f67a649e4f4395cdc23\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 01:54:14.683079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135116702.mount: Deactivated successfully. Jan 20 01:54:14.717091 containerd[1641]: time="2026-01-20T01:54:14.716192845Z" level=info msg="Container 9cecbf8e2b3431c17fb1931fb5c6534e92f0b71854a4b68842f398ffd4fa118b: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:54:14.826443 containerd[1641]: time="2026-01-20T01:54:14.826222878Z" level=info msg="CreateContainer within sandbox \"53ae73f36a5db7472eb3eae4332cb63fdcc3ce016b009f67a649e4f4395cdc23\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9cecbf8e2b3431c17fb1931fb5c6534e92f0b71854a4b68842f398ffd4fa118b\"" Jan 20 01:54:14.838137 containerd[1641]: time="2026-01-20T01:54:14.835404933Z" level=info msg="StartContainer for \"9cecbf8e2b3431c17fb1931fb5c6534e92f0b71854a4b68842f398ffd4fa118b\"" Jan 20 01:54:14.884015 containerd[1641]: time="2026-01-20T01:54:14.876722757Z" level=info msg="connecting to shim 9cecbf8e2b3431c17fb1931fb5c6534e92f0b71854a4b68842f398ffd4fa118b" address="unix:///run/containerd/s/5acd7907c2dc94f045e844aa3f740a0c25654e34d6d1b291eff1e6c7c6cb1edf" protocol=ttrpc version=3 Jan 20 01:54:15.152656 systemd[1]: Started cri-containerd-9cecbf8e2b3431c17fb1931fb5c6534e92f0b71854a4b68842f398ffd4fa118b.scope - libcontainer container 9cecbf8e2b3431c17fb1931fb5c6534e92f0b71854a4b68842f398ffd4fa118b. 
Jan 20 01:54:15.397846 kernel: kauditd_printk_skb: 138 callbacks suppressed Jan 20 01:54:15.397988 kernel: audit: type=1334 audit(1768874055.374:526): prog-id=146 op=LOAD Jan 20 01:54:15.374000 audit: BPF prog-id=146 op=LOAD Jan 20 01:54:15.408000 audit: BPF prog-id=147 op=LOAD Jan 20 01:54:15.461212 kernel: audit: type=1334 audit(1768874055.408:527): prog-id=147 op=LOAD Jan 20 01:54:15.461447 kernel: audit: type=1300 audit(1768874055.408:527): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3205 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:15.408000 audit[3382]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3205 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:15.539215 kernel: audit: type=1327 audit(1768874055.408:527): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 Jan 20 01:54:15.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 Jan 20 01:54:15.611454 kernel: audit: type=1334 audit(1768874055.408:528): prog-id=147 op=UNLOAD Jan 20 01:54:15.408000 audit: BPF prog-id=147 op=UNLOAD Jan 20 01:54:15.408000 audit[3382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3205 pid=3382 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:15.689171 kernel: audit: type=1300 audit(1768874055.408:528): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3205 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:15.689305 kernel: audit: type=1327 audit(1768874055.408:528): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 Jan 20 01:54:15.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 Jan 20 01:54:15.723523 kernel: audit: type=1334 audit(1768874055.408:529): prog-id=148 op=LOAD Jan 20 01:54:15.408000 audit: BPF prog-id=148 op=LOAD Jan 20 01:54:15.408000 audit[3382]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3205 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:15.798267 kernel: audit: type=1300 audit(1768874055.408:529): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3205 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:54:15.812898 kernel: audit: type=1327 audit(1768874055.408:529): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 Jan 20 01:54:15.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 Jan 20 01:54:15.408000 audit: BPF prog-id=149 op=LOAD Jan 20 01:54:15.408000 audit[3382]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3205 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:15.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 Jan 20 01:54:15.408000 audit: BPF prog-id=149 op=UNLOAD Jan 20 01:54:15.408000 audit[3382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3205 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:15.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 
Jan 20 01:54:15.408000 audit: BPF prog-id=148 op=UNLOAD Jan 20 01:54:15.408000 audit[3382]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3205 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:15.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 Jan 20 01:54:15.408000 audit: BPF prog-id=150 op=LOAD Jan 20 01:54:15.408000 audit[3382]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3205 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:15.408000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963656362663865326233343331633137666231393331666235633635 Jan 20 01:54:15.882184 containerd[1641]: time="2026-01-20T01:54:15.881732192Z" level=info msg="StartContainer for \"9cecbf8e2b3431c17fb1931fb5c6534e92f0b71854a4b68842f398ffd4fa118b\" returns successfully" Jan 20 01:54:17.297861 kubelet[3041]: I0120 01:54:17.288445 3041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-n7xvb" podStartSLOduration=6.441788416 podStartE2EDuration="24.288424255s" podCreationTimestamp="2026-01-20 01:53:53 +0000 UTC" firstStartedPulling="2026-01-20 01:53:56.678460432 +0000 UTC m=+22.683149328" lastFinishedPulling="2026-01-20 
01:54:14.525096261 +0000 UTC m=+40.529785167" observedRunningTime="2026-01-20 01:54:17.248286067 +0000 UTC m=+43.252974963" watchObservedRunningTime="2026-01-20 01:54:17.288424255 +0000 UTC m=+43.293113161" Jan 20 01:54:48.647784 kubelet[3041]: E0120 01:54:48.616785 3041 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.035s" Jan 20 01:54:49.575237 sudo[1864]: pam_unix(sudo:session): session closed for user root Jan 20 01:54:49.575000 audit[1864]: USER_END pid=1864 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:54:49.614289 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 01:54:49.679293 kernel: audit: type=1106 audit(1768874089.575:534): pid=1864 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:54:49.679714 kernel: audit: type=1104 audit(1768874089.575:535): pid=1864 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 01:54:49.575000 audit[1864]: CRED_DISP pid=1864 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 01:54:49.696508 sshd[1863]: Connection closed by 10.0.0.1 port 34082 Jan 20 01:54:49.708424 sshd-session[1860]: pam_unix(sshd:session): session closed for user core Jan 20 01:54:49.764000 audit[1860]: USER_END pid=1860 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:54:49.800246 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:34082.service: Deactivated successfully. Jan 20 01:54:49.778000 audit[1860]: CRED_DISP pid=1860 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:54:49.829761 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:54:49.845729 systemd[1]: session-9.scope: Consumed 25.310s CPU time, 220.7M memory peak. Jan 20 01:54:49.888602 systemd-logind[1624]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:54:49.900895 systemd-logind[1624]: Removed session 9. 
Jan 20 01:54:49.912677 kernel: audit: type=1106 audit(1768874089.764:536): pid=1860 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:54:49.912827 kernel: audit: type=1104 audit(1768874089.778:537): pid=1860 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 01:54:49.912868 kernel: audit: type=1131 audit(1768874089.800:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.48:22-10.0.0.1:34082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 01:54:49.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.48:22-10.0.0.1:34082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 01:54:53.549315 kubelet[3041]: E0120 01:54:53.546836 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:54.475000 audit[3481]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3481 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:54:54.520028 kernel: audit: type=1325 audit(1768874094.475:539): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3481 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:54:54.531184 kubelet[3041]: E0120 01:54:54.528805 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:54:54.475000 audit[3481]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc3b92e990 a2=0 a3=7ffc3b92e97c items=0 ppid=3158 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:54.640515 kernel: audit: type=1300 audit(1768874094.475:539): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc3b92e990 a2=0 a3=7ffc3b92e97c items=0 ppid=3158 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:54.640659 kernel: audit: type=1327 audit(1768874094.475:539): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:54:54.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:54:54.531000 audit[3481]: NETFILTER_CFG table=nat:106 family=2 
entries=12 op=nft_register_rule pid=3481 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:54:54.712892 kernel: audit: type=1325 audit(1768874094.531:540): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3481 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:54:54.713049 kernel: audit: type=1300 audit(1768874094.531:540): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3b92e990 a2=0 a3=0 items=0 ppid=3158 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:54.531000 audit[3481]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3b92e990 a2=0 a3=0 items=0 ppid=3158 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:54.823818 kernel: audit: type=1327 audit(1768874094.531:540): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:54:54.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:54:56.044000 audit[3483]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:54:56.090443 kernel: audit: type=1325 audit(1768874096.044:541): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:54:56.044000 audit[3483]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8c50ec40 a2=0 a3=7ffe8c50ec2c items=0 ppid=3158 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:56.171276 kernel: audit: type=1300 audit(1768874096.044:541): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8c50ec40 a2=0 a3=7ffe8c50ec2c items=0 ppid=3158 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:56.171505 kernel: audit: type=1327 audit(1768874096.044:541): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:54:56.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:54:56.193550 kernel: audit: type=1325 audit(1768874096.191:542): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:54:56.191000 audit[3483]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3483 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:54:56.191000 audit[3483]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8c50ec40 a2=0 a3=0 items=0 ppid=3158 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:56.281647 kernel: audit: type=1300 audit(1768874096.191:542): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8c50ec40 a2=0 a3=0 items=0 ppid=3158 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:54:56.191000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:54:56.335409 kernel: audit: type=1327 audit(1768874096.191:542): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:54:56.534135 kubelet[3041]: E0120 01:54:56.534057 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:04.542022 kubelet[3041]: E0120 01:55:04.540046 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:17.623000 audit[3488]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:17.678212 kernel: audit: type=1325 audit(1768874117.623:543): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:17.623000 audit[3488]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff7acc9520 a2=0 a3=7fff7acc950c items=0 ppid=3158 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:17.783005 kernel: audit: type=1300 audit(1768874117.623:543): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff7acc9520 a2=0 a3=7fff7acc950c items=0 ppid=3158 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:17.623000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:17.778000 audit[3488]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:17.902442 kernel: audit: type=1327 audit(1768874117.623:543): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:17.902586 kernel: audit: type=1325 audit(1768874117.778:544): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:17.902624 kernel: audit: type=1300 audit(1768874117.778:544): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7acc9520 a2=0 a3=0 items=0 ppid=3158 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:17.778000 audit[3488]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7acc9520 a2=0 a3=0 items=0 ppid=3158 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:17.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:18.060287 kernel: audit: type=1327 audit(1768874117.778:544): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:19.187000 audit[3491]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:19.318222 kernel: audit: type=1325 audit(1768874119.187:545): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3491 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:19.318305 kernel: audit: type=1300 audit(1768874119.187:545): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe4215a00 a2=0 a3=7fffe42159ec items=0 ppid=3158 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:19.187000 audit[3491]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe4215a00 a2=0 a3=7fffe42159ec items=0 ppid=3158 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:19.405147 kernel: audit: type=1327 audit(1768874119.187:545): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:19.187000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:19.413000 audit[3491]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:19.413000 audit[3491]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe4215a00 a2=0 a3=0 items=0 ppid=3158 pid=3491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:19.413000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:19.469375 kernel: audit: type=1325 audit(1768874119.413:546): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:31.969000 
audit[3495]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:31.983437 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 01:55:31.991552 kernel: audit: type=1325 audit(1768874131.969:547): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:31.969000 audit[3495]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd33f76e70 a2=0 a3=7ffd33f76e5c items=0 ppid=3158 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:32.067251 kernel: audit: type=1300 audit(1768874131.969:547): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd33f76e70 a2=0 a3=7ffd33f76e5c items=0 ppid=3158 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:31.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:32.087972 kernel: audit: type=1327 audit(1768874131.969:547): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:32.088092 kernel: audit: type=1325 audit(1768874132.068:548): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:32.068000 audit[3495]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:32.068000 audit[3495]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd33f76e70 a2=0 a3=0 items=0 ppid=3158 
pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:32.068000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:32.203311 kernel: audit: type=1300 audit(1768874132.068:548): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd33f76e70 a2=0 a3=0 items=0 ppid=3158 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:32.205484 kernel: audit: type=1327 audit(1768874132.068:548): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:32.716784 systemd[1]: Created slice kubepods-besteffort-podd51a7876_69fd_4f4d_b9d9_431ad16c54d4.slice - libcontainer container kubepods-besteffort-podd51a7876_69fd_4f4d_b9d9_431ad16c54d4.slice. 
Jan 20 01:55:32.780198 kubelet[3041]: I0120 01:55:32.779640 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d51a7876-69fd-4f4d-b9d9-431ad16c54d4-tigera-ca-bundle\") pod \"calico-typha-7b6465767c-mj5np\" (UID: \"d51a7876-69fd-4f4d-b9d9-431ad16c54d4\") " pod="calico-system/calico-typha-7b6465767c-mj5np" Jan 20 01:55:32.780198 kubelet[3041]: I0120 01:55:32.779685 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d51a7876-69fd-4f4d-b9d9-431ad16c54d4-typha-certs\") pod \"calico-typha-7b6465767c-mj5np\" (UID: \"d51a7876-69fd-4f4d-b9d9-431ad16c54d4\") " pod="calico-system/calico-typha-7b6465767c-mj5np" Jan 20 01:55:32.780198 kubelet[3041]: I0120 01:55:32.779714 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntn2\" (UniqueName: \"kubernetes.io/projected/d51a7876-69fd-4f4d-b9d9-431ad16c54d4-kube-api-access-bntn2\") pod \"calico-typha-7b6465767c-mj5np\" (UID: \"d51a7876-69fd-4f4d-b9d9-431ad16c54d4\") " pod="calico-system/calico-typha-7b6465767c-mj5np" Jan 20 01:55:33.192000 audit[3498]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:33.230445 kernel: audit: type=1325 audit(1768874133.192:549): table=filter:115 family=2 entries=22 op=nft_register_rule pid=3498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:33.230601 kernel: audit: type=1300 audit(1768874133.192:549): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeb1ff08e0 a2=0 a3=7ffeb1ff08cc items=0 ppid=3158 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:55:33.192000 audit[3498]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffeb1ff08e0 a2=0 a3=7ffeb1ff08cc items=0 ppid=3158 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:33.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:33.373856 kernel: audit: type=1327 audit(1768874133.192:549): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:33.377411 kernel: audit: type=1325 audit(1768874133.230:550): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:33.230000 audit[3498]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:33.230000 audit[3498]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb1ff08e0 a2=0 a3=0 items=0 ppid=3158 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:33.230000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:33.651539 kubelet[3041]: E0120 01:55:33.650705 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:33.652328 containerd[1641]: time="2026-01-20T01:55:33.651988241Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-7b6465767c-mj5np,Uid:d51a7876-69fd-4f4d-b9d9-431ad16c54d4,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:34.082823 containerd[1641]: time="2026-01-20T01:55:34.082724154Z" level=info msg="connecting to shim f02559d9fd2ef6c853ee8cf380a33c7c071fc14375171b36b57cba8f8caaf070" address="unix:///run/containerd/s/3e76a89a30b191993faf0e42cb384ac2eada6cb3adfcfe7e3a07877e74f05e8c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:55:34.128749 kubelet[3041]: I0120 01:55:34.122105 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0d8b6849-9937-4535-93d9-4504fe17d156-cni-log-dir\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.128749 kubelet[3041]: I0120 01:55:34.123589 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d8b6849-9937-4535-93d9-4504fe17d156-lib-modules\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.128749 kubelet[3041]: I0120 01:55:34.123617 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0d8b6849-9937-4535-93d9-4504fe17d156-cni-net-dir\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.128749 kubelet[3041]: I0120 01:55:34.123641 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d8b6849-9937-4535-93d9-4504fe17d156-tigera-ca-bundle\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 
01:55:34.128749 kubelet[3041]: I0120 01:55:34.123664 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d8b6849-9937-4535-93d9-4504fe17d156-var-lib-calico\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.131204 kubelet[3041]: I0120 01:55:34.123688 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0d8b6849-9937-4535-93d9-4504fe17d156-var-run-calico\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.131204 kubelet[3041]: I0120 01:55:34.123711 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d8b6849-9937-4535-93d9-4504fe17d156-xtables-lock\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.131204 kubelet[3041]: I0120 01:55:34.123741 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47pw9\" (UniqueName: \"kubernetes.io/projected/0d8b6849-9937-4535-93d9-4504fe17d156-kube-api-access-47pw9\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.131204 kubelet[3041]: I0120 01:55:34.123769 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0d8b6849-9937-4535-93d9-4504fe17d156-policysync\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.131204 kubelet[3041]: I0120 01:55:34.123792 3041 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0d8b6849-9937-4535-93d9-4504fe17d156-node-certs\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.137722 kubelet[3041]: I0120 01:55:34.131587 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0d8b6849-9937-4535-93d9-4504fe17d156-flexvol-driver-host\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.137722 kubelet[3041]: I0120 01:55:34.135489 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0d8b6849-9937-4535-93d9-4504fe17d156-cni-bin-dir\") pod \"calico-node-p5ws6\" (UID: \"0d8b6849-9937-4535-93d9-4504fe17d156\") " pod="calico-system/calico-node-p5ws6" Jan 20 01:55:34.143508 systemd[1]: Created slice kubepods-besteffort-pod0d8b6849_9937_4535_93d9_4504fe17d156.slice - libcontainer container kubepods-besteffort-pod0d8b6849_9937_4535_93d9_4504fe17d156.slice. Jan 20 01:55:34.263013 kubelet[3041]: E0120 01:55:34.261721 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.263013 kubelet[3041]: W0120 01:55:34.261826 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.263013 kubelet[3041]: E0120 01:55:34.261931 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.265645 kubelet[3041]: E0120 01:55:34.263450 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.265645 kubelet[3041]: W0120 01:55:34.263539 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.265645 kubelet[3041]: E0120 01:55:34.263557 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.265645 kubelet[3041]: E0120 01:55:34.264035 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.265645 kubelet[3041]: W0120 01:55:34.264124 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.265645 kubelet[3041]: E0120 01:55:34.264140 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.298418 kubelet[3041]: E0120 01:55:34.273827 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.298418 kubelet[3041]: W0120 01:55:34.273849 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.298418 kubelet[3041]: E0120 01:55:34.273872 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.298418 kubelet[3041]: E0120 01:55:34.289695 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.298418 kubelet[3041]: W0120 01:55:34.289720 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.298418 kubelet[3041]: E0120 01:55:34.289750 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.321900 kubelet[3041]: E0120 01:55:34.318831 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.321900 kubelet[3041]: W0120 01:55:34.318933 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.321900 kubelet[3041]: E0120 01:55:34.319041 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.329154 systemd[1]: Started cri-containerd-f02559d9fd2ef6c853ee8cf380a33c7c071fc14375171b36b57cba8f8caaf070.scope - libcontainer container f02559d9fd2ef6c853ee8cf380a33c7c071fc14375171b36b57cba8f8caaf070. Jan 20 01:55:34.431591 kubelet[3041]: E0120 01:55:34.431483 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.431767 kubelet[3041]: W0120 01:55:34.431742 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.431857 kubelet[3041]: E0120 01:55:34.431842 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.442000 audit[3561]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:34.442000 audit[3561]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff9aa7f150 a2=0 a3=7fff9aa7f13c items=0 ppid=3158 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:34.442000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:34.468000 audit[3561]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:34.468000 audit[3561]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff9aa7f150 a2=0 a3=0 items=0 ppid=3158 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:34.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:34.506000 audit: BPF prog-id=151 op=LOAD Jan 20 01:55:34.507000 audit: BPF prog-id=152 op=LOAD Jan 20 01:55:34.507000 audit[3519]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:34.507000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323535396439666432656636633835336565386366333830613333 Jan 20 01:55:34.507000 audit: BPF prog-id=152 op=UNLOAD Jan 20 01:55:34.507000 audit[3519]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:34.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323535396439666432656636633835336565386366333830613333 Jan 20 01:55:34.507000 audit: BPF prog-id=153 op=LOAD Jan 20 01:55:34.507000 audit[3519]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:34.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323535396439666432656636633835336565386366333830613333 Jan 20 01:55:34.507000 audit: BPF prog-id=154 op=LOAD Jan 20 01:55:34.507000 audit[3519]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:55:34.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323535396439666432656636633835336565386366333830613333 Jan 20 01:55:34.507000 audit: BPF prog-id=154 op=UNLOAD Jan 20 01:55:34.507000 audit[3519]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:34.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323535396439666432656636633835336565386366333830613333 Jan 20 01:55:34.507000 audit: BPF prog-id=153 op=UNLOAD Jan 20 01:55:34.507000 audit[3519]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:34.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323535396439666432656636633835336565386366333830613333 Jan 20 01:55:34.507000 audit: BPF prog-id=155 op=LOAD Jan 20 01:55:34.507000 audit[3519]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3508 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:34.507000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323535396439666432656636633835336565386366333830613333 Jan 20 01:55:34.539088 containerd[1641]: time="2026-01-20T01:55:34.531742227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p5ws6,Uid:0d8b6849-9937-4535-93d9-4504fe17d156,Namespace:calico-system,Attempt:0,}" Jan 20 01:55:34.539155 kubelet[3041]: E0120 01:55:34.530895 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:34.619069 kubelet[3041]: E0120 01:55:34.617407 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:34.672910 kubelet[3041]: E0120 01:55:34.672598 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.672910 kubelet[3041]: W0120 01:55:34.672641 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.672910 kubelet[3041]: E0120 01:55:34.672674 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.680717 kubelet[3041]: E0120 01:55:34.680456 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.680717 kubelet[3041]: W0120 01:55:34.680484 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.680717 kubelet[3041]: E0120 01:55:34.680510 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.684513 kubelet[3041]: E0120 01:55:34.682175 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.684696 kubelet[3041]: W0120 01:55:34.684670 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.684789 kubelet[3041]: E0120 01:55:34.684773 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.685612 kubelet[3041]: E0120 01:55:34.685416 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.685612 kubelet[3041]: W0120 01:55:34.685433 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.685612 kubelet[3041]: E0120 01:55:34.685451 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.686111 kubelet[3041]: E0120 01:55:34.686095 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.686256 kubelet[3041]: W0120 01:55:34.686241 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.686419 kubelet[3041]: E0120 01:55:34.686403 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.687055 kubelet[3041]: E0120 01:55:34.686896 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.687055 kubelet[3041]: W0120 01:55:34.686910 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.687055 kubelet[3041]: E0120 01:55:34.686923 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.687537 kubelet[3041]: E0120 01:55:34.687520 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.687630 kubelet[3041]: W0120 01:55:34.687614 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.687691 kubelet[3041]: E0120 01:55:34.687679 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.688686 kubelet[3041]: E0120 01:55:34.688633 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.688686 kubelet[3041]: W0120 01:55:34.688648 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.688686 kubelet[3041]: E0120 01:55:34.688661 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.696939 kubelet[3041]: E0120 01:55:34.696712 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.696939 kubelet[3041]: W0120 01:55:34.696741 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.696939 kubelet[3041]: E0120 01:55:34.696767 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.709693 kubelet[3041]: E0120 01:55:34.700170 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.709693 kubelet[3041]: W0120 01:55:34.700193 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.709693 kubelet[3041]: E0120 01:55:34.700275 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.709693 kubelet[3041]: E0120 01:55:34.701187 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.709693 kubelet[3041]: W0120 01:55:34.701200 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.709693 kubelet[3041]: E0120 01:55:34.701289 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.729011 kubelet[3041]: E0120 01:55:34.702288 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.729011 kubelet[3041]: W0120 01:55:34.728928 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.729011 kubelet[3041]: E0120 01:55:34.728969 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.762396 kubelet[3041]: E0120 01:55:34.751281 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.784100 kubelet[3041]: W0120 01:55:34.783480 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.784100 kubelet[3041]: E0120 01:55:34.783535 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.784100 kubelet[3041]: I0120 01:55:34.784036 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ac1c9092-8cef-4868-9089-0927692efc39-registration-dir\") pod \"csi-node-driver-9gv2m\" (UID: \"ac1c9092-8cef-4868-9089-0927692efc39\") " pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:55:34.801783 kubelet[3041]: E0120 01:55:34.801271 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.801783 kubelet[3041]: W0120 01:55:34.801387 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.801783 kubelet[3041]: E0120 01:55:34.801426 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.807965 kubelet[3041]: E0120 01:55:34.807854 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.807965 kubelet[3041]: W0120 01:55:34.807881 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.807965 kubelet[3041]: E0120 01:55:34.807906 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.815501 kubelet[3041]: E0120 01:55:34.815463 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.815639 kubelet[3041]: W0120 01:55:34.815617 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.815735 kubelet[3041]: E0120 01:55:34.815713 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.826259 kubelet[3041]: E0120 01:55:34.826216 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.826525 kubelet[3041]: W0120 01:55:34.826502 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.826618 kubelet[3041]: E0120 01:55:34.826604 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.826810 kubelet[3041]: I0120 01:55:34.826787 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ac1c9092-8cef-4868-9089-0927692efc39-socket-dir\") pod \"csi-node-driver-9gv2m\" (UID: \"ac1c9092-8cef-4868-9089-0927692efc39\") " pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:55:34.829912 kubelet[3041]: E0120 01:55:34.829889 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.830017 kubelet[3041]: W0120 01:55:34.830000 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.830116 kubelet[3041]: E0120 01:55:34.830099 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.833994 kubelet[3041]: E0120 01:55:34.833970 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.834098 kubelet[3041]: W0120 01:55:34.834078 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.834247 kubelet[3041]: E0120 01:55:34.834226 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.836081 kubelet[3041]: E0120 01:55:34.836062 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.836655 kubelet[3041]: W0120 01:55:34.836635 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.836749 kubelet[3041]: E0120 01:55:34.836731 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.844196 kubelet[3041]: E0120 01:55:34.843115 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.844407 kubelet[3041]: W0120 01:55:34.844384 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.844526 kubelet[3041]: E0120 01:55:34.844510 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.844895 kubelet[3041]: E0120 01:55:34.844881 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.844981 kubelet[3041]: W0120 01:55:34.844968 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.845048 kubelet[3041]: E0120 01:55:34.845035 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.845395 kubelet[3041]: I0120 01:55:34.845255 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ac1c9092-8cef-4868-9089-0927692efc39-kubelet-dir\") pod \"csi-node-driver-9gv2m\" (UID: \"ac1c9092-8cef-4868-9089-0927692efc39\") " pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:55:34.846863 kubelet[3041]: E0120 01:55:34.846844 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.846971 kubelet[3041]: W0120 01:55:34.846955 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.847049 kubelet[3041]: E0120 01:55:34.847034 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.850510 kubelet[3041]: E0120 01:55:34.850492 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.850606 kubelet[3041]: W0120 01:55:34.850590 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.850699 kubelet[3041]: E0120 01:55:34.850682 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.851173 kubelet[3041]: E0120 01:55:34.851158 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.851289 kubelet[3041]: W0120 01:55:34.851252 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.851289 kubelet[3041]: E0120 01:55:34.851274 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.851845 kubelet[3041]: E0120 01:55:34.851805 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.851845 kubelet[3041]: W0120 01:55:34.851819 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.851845 kubelet[3041]: E0120 01:55:34.851831 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.860517 kubelet[3041]: E0120 01:55:34.860296 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.860517 kubelet[3041]: W0120 01:55:34.860404 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.860517 kubelet[3041]: E0120 01:55:34.860420 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.860930 kubelet[3041]: E0120 01:55:34.860807 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.860930 kubelet[3041]: W0120 01:55:34.860820 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.860930 kubelet[3041]: E0120 01:55:34.860832 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.861211 kubelet[3041]: E0120 01:55:34.861197 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.863027 kubelet[3041]: W0120 01:55:34.861281 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.863169 kubelet[3041]: E0120 01:55:34.861300 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.965660 kubelet[3041]: E0120 01:55:34.962465 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.965660 kubelet[3041]: W0120 01:55:34.962494 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.965660 kubelet[3041]: E0120 01:55:34.962521 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.965660 kubelet[3041]: I0120 01:55:34.965479 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ac1c9092-8cef-4868-9089-0927692efc39-varrun\") pod \"csi-node-driver-9gv2m\" (UID: \"ac1c9092-8cef-4868-9089-0927692efc39\") " pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:55:34.968871 kubelet[3041]: E0120 01:55:34.968788 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.968871 kubelet[3041]: W0120 01:55:34.968842 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.968871 kubelet[3041]: E0120 01:55:34.968869 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.971453 kubelet[3041]: E0120 01:55:34.969416 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.971453 kubelet[3041]: W0120 01:55:34.969430 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.971453 kubelet[3041]: E0120 01:55:34.969446 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:34.971453 kubelet[3041]: E0120 01:55:34.969827 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.971453 kubelet[3041]: W0120 01:55:34.969838 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.971453 kubelet[3041]: E0120 01:55:34.969852 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:34.987408 kubelet[3041]: E0120 01:55:34.985447 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:34.987408 kubelet[3041]: W0120 01:55:34.985691 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:34.987408 kubelet[3041]: E0120 01:55:34.985719 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.038629 kubelet[3041]: E0120 01:55:35.026142 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.038629 kubelet[3041]: W0120 01:55:35.026254 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.038629 kubelet[3041]: E0120 01:55:35.026546 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.038863 containerd[1641]: time="2026-01-20T01:55:35.029457816Z" level=info msg="connecting to shim 311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490" address="unix:///run/containerd/s/f1ce25e1af03caa5247a82fb35c320afb52c8a8db9cfefbf21616e96e20aec4d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:55:35.077717 kubelet[3041]: E0120 01:55:35.074625 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.077717 kubelet[3041]: W0120 01:55:35.074678 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.077717 kubelet[3041]: E0120 01:55:35.074708 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.094473 kubelet[3041]: E0120 01:55:35.092484 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.094473 kubelet[3041]: W0120 01:55:35.092535 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.094473 kubelet[3041]: E0120 01:55:35.092565 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.108283 kubelet[3041]: E0120 01:55:35.107899 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.108283 kubelet[3041]: W0120 01:55:35.107928 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.108283 kubelet[3041]: E0120 01:55:35.107954 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.110731 kubelet[3041]: E0120 01:55:35.110630 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.110731 kubelet[3041]: W0120 01:55:35.110680 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.110731 kubelet[3041]: E0120 01:55:35.110704 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.116453 kubelet[3041]: E0120 01:55:35.112156 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.116453 kubelet[3041]: W0120 01:55:35.112193 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.116453 kubelet[3041]: E0120 01:55:35.112212 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.116453 kubelet[3041]: I0120 01:55:35.112390 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmg9\" (UniqueName: \"kubernetes.io/projected/ac1c9092-8cef-4868-9089-0927692efc39-kube-api-access-2dmg9\") pod \"csi-node-driver-9gv2m\" (UID: \"ac1c9092-8cef-4868-9089-0927692efc39\") " pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:55:35.137602 kubelet[3041]: E0120 01:55:35.125687 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.137602 kubelet[3041]: W0120 01:55:35.125721 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.137602 kubelet[3041]: E0120 01:55:35.125749 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.137602 kubelet[3041]: E0120 01:55:35.136704 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.137602 kubelet[3041]: W0120 01:55:35.136726 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.137602 kubelet[3041]: E0120 01:55:35.136762 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.149472 kubelet[3041]: E0120 01:55:35.146537 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.149472 kubelet[3041]: W0120 01:55:35.146570 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.149472 kubelet[3041]: E0120 01:55:35.146599 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.149472 kubelet[3041]: E0120 01:55:35.148993 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.149472 kubelet[3041]: W0120 01:55:35.149009 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.149472 kubelet[3041]: E0120 01:55:35.149029 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.166250 kubelet[3041]: E0120 01:55:35.155864 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.166250 kubelet[3041]: W0120 01:55:35.155890 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.166250 kubelet[3041]: E0120 01:55:35.155922 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.188726 kubelet[3041]: E0120 01:55:35.179885 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.188726 kubelet[3041]: W0120 01:55:35.179911 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.188726 kubelet[3041]: E0120 01:55:35.179938 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.188991 containerd[1641]: time="2026-01-20T01:55:35.185448582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b6465767c-mj5np,Uid:d51a7876-69fd-4f4d-b9d9-431ad16c54d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"f02559d9fd2ef6c853ee8cf380a33c7c071fc14375171b36b57cba8f8caaf070\"" Jan 20 01:55:35.191708 kubelet[3041]: E0120 01:55:35.189285 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.191708 kubelet[3041]: W0120 01:55:35.189311 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.191708 kubelet[3041]: E0120 01:55:35.191234 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.194570 kubelet[3041]: E0120 01:55:35.194548 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.198763 kubelet[3041]: W0120 01:55:35.194673 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.198763 kubelet[3041]: E0120 01:55:35.194702 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.198763 kubelet[3041]: E0120 01:55:35.196010 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.198763 kubelet[3041]: W0120 01:55:35.196023 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.198763 kubelet[3041]: E0120 01:55:35.196041 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.198763 kubelet[3041]: E0120 01:55:35.196300 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:35.200608 kubelet[3041]: E0120 01:55:35.200535 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.200608 kubelet[3041]: W0120 01:55:35.200552 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.200608 kubelet[3041]: E0120 01:55:35.200572 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.214423 containerd[1641]: time="2026-01-20T01:55:35.213539353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 01:55:35.221510 kubelet[3041]: E0120 01:55:35.219736 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.221510 kubelet[3041]: W0120 01:55:35.219784 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.221510 kubelet[3041]: E0120 01:55:35.219813 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.221510 kubelet[3041]: E0120 01:55:35.220827 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.221510 kubelet[3041]: W0120 01:55:35.220840 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.221510 kubelet[3041]: E0120 01:55:35.220861 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.229449 kubelet[3041]: E0120 01:55:35.229130 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.229449 kubelet[3041]: W0120 01:55:35.229180 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.229449 kubelet[3041]: E0120 01:55:35.229203 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.247446 kubelet[3041]: E0120 01:55:35.242665 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.247446 kubelet[3041]: W0120 01:55:35.242748 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.247446 kubelet[3041]: E0120 01:55:35.242776 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.251478 kubelet[3041]: E0120 01:55:35.251440 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.251478 kubelet[3041]: W0120 01:55:35.251465 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.251610 kubelet[3041]: E0120 01:55:35.251495 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.270444 kubelet[3041]: E0120 01:55:35.270234 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.270444 kubelet[3041]: W0120 01:55:35.270279 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.270444 kubelet[3041]: E0120 01:55:35.270305 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.277199 kubelet[3041]: E0120 01:55:35.277173 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.277617 kubelet[3041]: W0120 01:55:35.277304 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.277617 kubelet[3041]: E0120 01:55:35.277402 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.281233 kubelet[3041]: E0120 01:55:35.281213 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.281494 kubelet[3041]: W0120 01:55:35.281309 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.281765 kubelet[3041]: E0120 01:55:35.281728 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.286435 kubelet[3041]: E0120 01:55:35.283793 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.286435 kubelet[3041]: W0120 01:55:35.283835 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.286435 kubelet[3041]: E0120 01:55:35.283856 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:35.286435 kubelet[3041]: E0120 01:55:35.284538 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.286435 kubelet[3041]: W0120 01:55:35.284550 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.286435 kubelet[3041]: E0120 01:55:35.284564 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:35.289646 systemd[1]: Started cri-containerd-311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490.scope - libcontainer container 311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490. Jan 20 01:55:35.307274 kubelet[3041]: E0120 01:55:35.306997 3041 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Jan 20 01:55:35.432000 audit: BPF prog-id=156 op=LOAD Jan 20 01:55:35.440000 audit: BPF prog-id=157 op=LOAD Jan 20 01:55:35.440000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3606 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:35.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316234373336306436613733373638633864613530326566323961 Jan 20 01:55:35.440000 audit: BPF prog-id=157 op=UNLOAD Jan 20 01:55:35.440000 audit[3636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3606 
pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:35.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316234373336306436613733373638633864613530326566323961 Jan 20 01:55:35.440000 audit: BPF prog-id=158 op=LOAD Jan 20 01:55:35.440000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3606 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:35.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316234373336306436613733373638633864613530326566323961 Jan 20 01:55:35.440000 audit: BPF prog-id=159 op=LOAD Jan 20 01:55:35.440000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3606 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:35.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316234373336306436613733373638633864613530326566323961 Jan 20 01:55:35.475000 audit: BPF prog-id=159 op=UNLOAD Jan 20 01:55:35.475000 audit[3636]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3606 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:35.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316234373336306436613733373638633864613530326566323961 Jan 20 01:55:35.475000 audit: BPF prog-id=158 op=UNLOAD Jan 20 01:55:35.475000 audit[3636]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3606 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:35.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316234373336306436613733373638633864613530326566323961 Jan 20 01:55:35.475000 audit: BPF prog-id=160 op=LOAD Jan 20 01:55:35.475000 audit[3636]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3606 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:35.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331316234373336306436613733373638633864613530326566323961 Jan 20 01:55:35.851326 kubelet[3041]: E0120 01:55:35.851122 3041 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:35.851326 kubelet[3041]: W0120 01:55:35.851211 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:35.851326 kubelet[3041]: E0120 01:55:35.851247 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:36.030054 containerd[1641]: time="2026-01-20T01:55:36.029880736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p5ws6,Uid:0d8b6849-9937-4535-93d9-4504fe17d156,Namespace:calico-system,Attempt:0,} returns sandbox id \"311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490\"" Jan 20 01:55:36.036469 kubelet[3041]: E0120 01:55:36.034028 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:36.532003 kubelet[3041]: E0120 01:55:36.531948 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:36.655946 kubelet[3041]: E0120 01:55:36.647315 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.655946 kubelet[3041]: W0120 01:55:36.655233 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.655946 
kubelet[3041]: E0120 01:55:36.655427 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:36.668697 kubelet[3041]: E0120 01:55:36.664907 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.668697 kubelet[3041]: W0120 01:55:36.664961 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.668697 kubelet[3041]: E0120 01:55:36.665001 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:36.670190 kubelet[3041]: E0120 01:55:36.669805 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.670190 kubelet[3041]: W0120 01:55:36.669826 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.670190 kubelet[3041]: E0120 01:55:36.669851 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:36.682967 kubelet[3041]: E0120 01:55:36.681327 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.682967 kubelet[3041]: W0120 01:55:36.681462 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.682967 kubelet[3041]: E0120 01:55:36.681579 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:36.697586 kubelet[3041]: E0120 01:55:36.687537 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.697586 kubelet[3041]: W0120 01:55:36.687591 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.697586 kubelet[3041]: E0120 01:55:36.687621 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:36.714009 kubelet[3041]: E0120 01:55:36.704008 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.714009 kubelet[3041]: W0120 01:55:36.704040 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.714009 kubelet[3041]: E0120 01:55:36.704071 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:36.727188 kubelet[3041]: E0120 01:55:36.721631 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.727188 kubelet[3041]: W0120 01:55:36.723257 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.727188 kubelet[3041]: E0120 01:55:36.723615 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:36.788238 kubelet[3041]: E0120 01:55:36.782472 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.788238 kubelet[3041]: W0120 01:55:36.782503 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.788238 kubelet[3041]: E0120 01:55:36.782529 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:36.803648 kubelet[3041]: E0120 01:55:36.795865 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.803648 kubelet[3041]: W0120 01:55:36.795893 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.803648 kubelet[3041]: E0120 01:55:36.795920 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:36.808578 kubelet[3041]: E0120 01:55:36.807607 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.808578 kubelet[3041]: W0120 01:55:36.807655 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.808578 kubelet[3041]: E0120 01:55:36.807684 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:36.814774 kubelet[3041]: E0120 01:55:36.812956 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.814774 kubelet[3041]: W0120 01:55:36.813066 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.814774 kubelet[3041]: E0120 01:55:36.813097 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:36.826773 kubelet[3041]: E0120 01:55:36.823840 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.826773 kubelet[3041]: W0120 01:55:36.823886 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.826773 kubelet[3041]: E0120 01:55:36.823913 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:36.829705 kubelet[3041]: E0120 01:55:36.828578 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.829705 kubelet[3041]: W0120 01:55:36.828619 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.829705 kubelet[3041]: E0120 01:55:36.828645 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:36.831261 kubelet[3041]: E0120 01:55:36.831241 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.831528 kubelet[3041]: W0120 01:55:36.831508 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.831700 kubelet[3041]: E0120 01:55:36.831682 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:36.840491 kubelet[3041]: E0120 01:55:36.840454 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.840709 kubelet[3041]: W0120 01:55:36.840684 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.840834 kubelet[3041]: E0120 01:55:36.840816 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:36.860949 kubelet[3041]: E0120 01:55:36.860817 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:36.860949 kubelet[3041]: W0120 01:55:36.860858 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:36.860949 kubelet[3041]: E0120 01:55:36.860889 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:37.342149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089877666.mount: Deactivated successfully. Jan 20 01:55:38.531908 kubelet[3041]: E0120 01:55:38.527987 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:39.425425 kubelet[3041]: E0120 01:55:39.425087 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:55:40.527874 kubelet[3041]: E0120 01:55:40.527299 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:42.528805 kubelet[3041]: E0120 01:55:42.528745 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:44.430477 kubelet[3041]: E0120 01:55:44.430423 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:55:44.531060 kubelet[3041]: E0120 01:55:44.526768 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:46.527924 kubelet[3041]: E0120 01:55:46.527596 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:47.256168 containerd[1641]: time="2026-01-20T01:55:47.256107132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:55:47.265399 containerd[1641]: time="2026-01-20T01:55:47.265292244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Jan 20 01:55:47.271938 containerd[1641]: time="2026-01-20T01:55:47.271614528Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:55:47.321865 containerd[1641]: time="2026-01-20T01:55:47.312405392Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:55:47.343276 containerd[1641]: time="2026-01-20T01:55:47.343089701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 12.129494094s" Jan 20 01:55:47.343276 containerd[1641]: time="2026-01-20T01:55:47.343185240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 01:55:47.362825 containerd[1641]: time="2026-01-20T01:55:47.361603072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 01:55:47.479324 containerd[1641]: time="2026-01-20T01:55:47.478663398Z" level=info msg="CreateContainer within sandbox \"f02559d9fd2ef6c853ee8cf380a33c7c071fc14375171b36b57cba8f8caaf070\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 01:55:47.542831 containerd[1641]: time="2026-01-20T01:55:47.536817680Z" level=info msg="Container 5e45d3ed4a94e86231a5c265cc3e445da3d61d87295d232ca23c870a05772771: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:55:47.638005 containerd[1641]: time="2026-01-20T01:55:47.636799955Z" level=info msg="CreateContainer within sandbox \"f02559d9fd2ef6c853ee8cf380a33c7c071fc14375171b36b57cba8f8caaf070\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5e45d3ed4a94e86231a5c265cc3e445da3d61d87295d232ca23c870a05772771\"" Jan 20 01:55:47.651077 containerd[1641]: time="2026-01-20T01:55:47.650948784Z" level=info msg="StartContainer for 
\"5e45d3ed4a94e86231a5c265cc3e445da3d61d87295d232ca23c870a05772771\"" Jan 20 01:55:47.659404 containerd[1641]: time="2026-01-20T01:55:47.658308452Z" level=info msg="connecting to shim 5e45d3ed4a94e86231a5c265cc3e445da3d61d87295d232ca23c870a05772771" address="unix:///run/containerd/s/3e76a89a30b191993faf0e42cb384ac2eada6cb3adfcfe7e3a07877e74f05e8c" protocol=ttrpc version=3 Jan 20 01:55:47.883886 systemd[1]: Started cri-containerd-5e45d3ed4a94e86231a5c265cc3e445da3d61d87295d232ca23c870a05772771.scope - libcontainer container 5e45d3ed4a94e86231a5c265cc3e445da3d61d87295d232ca23c870a05772771. Jan 20 01:55:48.043041 kernel: kauditd_printk_skb: 52 callbacks suppressed Jan 20 01:55:48.043177 kernel: audit: type=1334 audit(1768874148.035:569): prog-id=161 op=LOAD Jan 20 01:55:48.035000 audit: BPF prog-id=161 op=LOAD Jan 20 01:55:48.038000 audit: BPF prog-id=162 op=LOAD Jan 20 01:55:48.077586 kernel: audit: type=1334 audit(1768874148.038:570): prog-id=162 op=LOAD Jan 20 01:55:48.090602 kernel: audit: type=1300 audit(1768874148.038:570): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000da238 a2=98 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.038000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000da238 a2=98 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.134106 kernel: audit: type=1327 audit(1768874148.038:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 
01:55:48.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 01:55:48.038000 audit: BPF prog-id=162 op=UNLOAD Jan 20 01:55:48.199079 kernel: audit: type=1334 audit(1768874148.038:571): prog-id=162 op=UNLOAD Jan 20 01:55:48.199263 kernel: audit: type=1300 audit(1768874148.038:571): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.038000 audit[3714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 01:55:48.322844 kernel: audit: type=1327 audit(1768874148.038:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 01:55:48.341854 kernel: audit: type=1334 audit(1768874148.038:572): prog-id=163 op=LOAD Jan 20 01:55:48.038000 audit: BPF prog-id=163 op=LOAD Jan 20 01:55:48.038000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c0000da488 a2=98 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.430446 kernel: audit: type=1300 audit(1768874148.038:572): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000da488 a2=98 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.430674 kernel: audit: type=1327 audit(1768874148.038:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 01:55:48.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 01:55:48.038000 audit: BPF prog-id=164 op=LOAD Jan 20 01:55:48.038000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0000da218 a2=98 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 01:55:48.038000 audit: BPF prog-id=164 op=UNLOAD Jan 20 01:55:48.038000 
audit[3714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 01:55:48.038000 audit: BPF prog-id=163 op=UNLOAD Jan 20 01:55:48.038000 audit[3714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 01:55:48.038000 audit: BPF prog-id=165 op=LOAD Jan 20 01:55:48.038000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0000da6e8 a2=98 a3=0 items=0 ppid=3508 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:48.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565343564336564346139346538363233316135633236356363336534 Jan 20 01:55:48.529056 kubelet[3041]: E0120 
01:55:48.528715 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:48.616772 containerd[1641]: time="2026-01-20T01:55:48.608696375Z" level=info msg="StartContainer for \"5e45d3ed4a94e86231a5c265cc3e445da3d61d87295d232ca23c870a05772771\" returns successfully" Jan 20 01:55:49.075698 kubelet[3041]: E0120 01:55:49.075399 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:49.132428 kubelet[3041]: E0120 01:55:49.131666 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.132428 kubelet[3041]: W0120 01:55:49.131706 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.132428 kubelet[3041]: E0120 01:55:49.131735 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.136269 kubelet[3041]: E0120 01:55:49.136122 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.136269 kubelet[3041]: W0120 01:55:49.136144 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.136269 kubelet[3041]: E0120 01:55:49.136166 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.151412 kubelet[3041]: E0120 01:55:49.151205 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.151412 kubelet[3041]: W0120 01:55:49.151242 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.151412 kubelet[3041]: E0120 01:55:49.151271 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.164099 kubelet[3041]: E0120 01:55:49.163920 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.164099 kubelet[3041]: W0120 01:55:49.163950 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.164099 kubelet[3041]: E0120 01:55:49.163983 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.173314 kubelet[3041]: E0120 01:55:49.173168 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.173314 kubelet[3041]: W0120 01:55:49.173202 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.173314 kubelet[3041]: E0120 01:55:49.173228 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.178584 kubelet[3041]: E0120 01:55:49.178450 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.178584 kubelet[3041]: W0120 01:55:49.178477 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.178584 kubelet[3041]: E0120 01:55:49.178501 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.182633 kubelet[3041]: E0120 01:55:49.182502 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.182633 kubelet[3041]: W0120 01:55:49.182525 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.182633 kubelet[3041]: E0120 01:55:49.182553 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.190651 kubelet[3041]: E0120 01:55:49.190442 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.190651 kubelet[3041]: W0120 01:55:49.190472 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.190651 kubelet[3041]: E0120 01:55:49.190499 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.199098 kubelet[3041]: E0120 01:55:49.199065 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.200389 kubelet[3041]: W0120 01:55:49.200121 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.200389 kubelet[3041]: E0120 01:55:49.200158 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.211288 kubelet[3041]: E0120 01:55:49.211249 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.211646 kubelet[3041]: W0120 01:55:49.211520 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.211646 kubelet[3041]: E0120 01:55:49.211557 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.219485 kubelet[3041]: E0120 01:55:49.219456 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.221172 kubelet[3041]: W0120 01:55:49.219646 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.221172 kubelet[3041]: E0120 01:55:49.219681 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.230002 kubelet[3041]: E0120 01:55:49.229973 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.233846 kubelet[3041]: W0120 01:55:49.230096 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.233846 kubelet[3041]: E0120 01:55:49.230132 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.262196 kubelet[3041]: E0120 01:55:49.245736 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.262196 kubelet[3041]: W0120 01:55:49.245840 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.262196 kubelet[3041]: E0120 01:55:49.245873 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.268251 kubelet[3041]: E0120 01:55:49.266659 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.268251 kubelet[3041]: W0120 01:55:49.266693 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.268251 kubelet[3041]: E0120 01:55:49.266722 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.279984 kubelet[3041]: E0120 01:55:49.272967 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.279984 kubelet[3041]: W0120 01:55:49.273017 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.279984 kubelet[3041]: E0120 01:55:49.273048 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.284605 kubelet[3041]: E0120 01:55:49.281880 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.284605 kubelet[3041]: W0120 01:55:49.281954 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.284605 kubelet[3041]: E0120 01:55:49.281982 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.302866 kubelet[3041]: E0120 01:55:49.302091 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.302866 kubelet[3041]: W0120 01:55:49.302126 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.302866 kubelet[3041]: E0120 01:55:49.302156 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.305659 kubelet[3041]: E0120 01:55:49.305175 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.305659 kubelet[3041]: W0120 01:55:49.305198 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.305659 kubelet[3041]: E0120 01:55:49.305221 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.315981 kubelet[3041]: E0120 01:55:49.315841 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.322853 kubelet[3041]: W0120 01:55:49.315878 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.322853 kubelet[3041]: E0120 01:55:49.316267 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.322853 kubelet[3041]: E0120 01:55:49.321052 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.322853 kubelet[3041]: W0120 01:55:49.321075 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.322853 kubelet[3041]: E0120 01:55:49.321103 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.322853 kubelet[3041]: E0120 01:55:49.321526 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.322853 kubelet[3041]: W0120 01:55:49.321540 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.322853 kubelet[3041]: E0120 01:55:49.321556 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.348677 kubelet[3041]: E0120 01:55:49.333740 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.348677 kubelet[3041]: W0120 01:55:49.333822 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.348677 kubelet[3041]: E0120 01:55:49.333854 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.348677 kubelet[3041]: E0120 01:55:49.334210 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.348677 kubelet[3041]: W0120 01:55:49.334225 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.348677 kubelet[3041]: E0120 01:55:49.334242 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.348677 kubelet[3041]: E0120 01:55:49.334567 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.348677 kubelet[3041]: W0120 01:55:49.334580 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.348677 kubelet[3041]: E0120 01:55:49.334595 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.363565 kubelet[3041]: E0120 01:55:49.351168 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.363565 kubelet[3041]: W0120 01:55:49.351201 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.363565 kubelet[3041]: E0120 01:55:49.351230 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.363565 kubelet[3041]: E0120 01:55:49.356073 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.363565 kubelet[3041]: W0120 01:55:49.356094 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.363565 kubelet[3041]: E0120 01:55:49.356116 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.374640 kubelet[3041]: E0120 01:55:49.365946 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.374640 kubelet[3041]: W0120 01:55:49.366001 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.374640 kubelet[3041]: E0120 01:55:49.366026 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.374640 kubelet[3041]: E0120 01:55:49.366410 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.374640 kubelet[3041]: W0120 01:55:49.366423 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.374640 kubelet[3041]: E0120 01:55:49.366442 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.374640 kubelet[3041]: E0120 01:55:49.366710 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.374640 kubelet[3041]: W0120 01:55:49.366722 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.374640 kubelet[3041]: E0120 01:55:49.366738 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.374640 kubelet[3041]: E0120 01:55:49.367021 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.378470 kubelet[3041]: W0120 01:55:49.367032 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.378470 kubelet[3041]: E0120 01:55:49.367047 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.378470 kubelet[3041]: E0120 01:55:49.367282 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.378470 kubelet[3041]: W0120 01:55:49.367296 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.378470 kubelet[3041]: E0120 01:55:49.367308 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.378470 kubelet[3041]: E0120 01:55:49.367622 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.378470 kubelet[3041]: W0120 01:55:49.367633 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.378470 kubelet[3041]: E0120 01:55:49.367644 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:49.428601 kubelet[3041]: E0120 01:55:49.384645 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:49.428601 kubelet[3041]: W0120 01:55:49.384702 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:49.428601 kubelet[3041]: E0120 01:55:49.384731 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:49.438907 kubelet[3041]: I0120 01:55:49.432725 3041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b6465767c-mj5np" podStartSLOduration=5.284287171 podStartE2EDuration="17.432704486s" podCreationTimestamp="2026-01-20 01:55:32 +0000 UTC" firstStartedPulling="2026-01-20 01:55:35.210009 +0000 UTC m=+121.214697896" lastFinishedPulling="2026-01-20 01:55:47.358426314 +0000 UTC m=+133.363115211" observedRunningTime="2026-01-20 01:55:49.432466529 +0000 UTC m=+135.437155445" watchObservedRunningTime="2026-01-20 01:55:49.432704486 +0000 UTC m=+135.437393382" Jan 20 01:55:49.481461 kubelet[3041]: E0120 01:55:49.481292 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:55:50.112498 kubelet[3041]: E0120 01:55:50.112456 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:50.299976 kubelet[3041]: E0120 01:55:50.291451 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.299976 kubelet[3041]: W0120 01:55:50.291511 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.299976 kubelet[3041]: E0120 01:55:50.291542 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.299976 kubelet[3041]: E0120 01:55:50.292739 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.299976 kubelet[3041]: W0120 01:55:50.292755 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.299976 kubelet[3041]: E0120 01:55:50.292824 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.312053 kubelet[3041]: E0120 01:55:50.306244 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.312053 kubelet[3041]: W0120 01:55:50.306308 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.312053 kubelet[3041]: E0120 01:55:50.306400 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.312053 kubelet[3041]: E0120 01:55:50.307484 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.312053 kubelet[3041]: W0120 01:55:50.307498 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.312053 kubelet[3041]: E0120 01:55:50.307516 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.325499 kubelet[3041]: E0120 01:55:50.325149 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.325499 kubelet[3041]: W0120 01:55:50.325190 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.325499 kubelet[3041]: E0120 01:55:50.325217 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.354593 kubelet[3041]: E0120 01:55:50.349579 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.354593 kubelet[3041]: W0120 01:55:50.349615 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.354593 kubelet[3041]: E0120 01:55:50.349645 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.389620 kubelet[3041]: E0120 01:55:50.389457 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.389921 kubelet[3041]: W0120 01:55:50.389892 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.390036 kubelet[3041]: E0120 01:55:50.390017 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.417949 kubelet[3041]: E0120 01:55:50.412739 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.418207 kubelet[3041]: W0120 01:55:50.418175 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.418425 kubelet[3041]: E0120 01:55:50.418402 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.469412 kubelet[3041]: E0120 01:55:50.462680 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.469412 kubelet[3041]: W0120 01:55:50.462719 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.469412 kubelet[3041]: E0120 01:55:50.462752 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.529670 kubelet[3041]: E0120 01:55:50.529614 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:50.579890 kubelet[3041]: E0120 01:55:50.542145 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.579890 kubelet[3041]: W0120 01:55:50.542183 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.579890 kubelet[3041]: E0120 01:55:50.542210 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.579890 kubelet[3041]: E0120 01:55:50.542627 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.579890 kubelet[3041]: W0120 01:55:50.542642 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.579890 kubelet[3041]: E0120 01:55:50.542658 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.579890 kubelet[3041]: E0120 01:55:50.546097 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.579890 kubelet[3041]: W0120 01:55:50.546115 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.579890 kubelet[3041]: E0120 01:55:50.546132 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.579890 kubelet[3041]: E0120 01:55:50.553257 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.580516 kubelet[3041]: W0120 01:55:50.553281 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.580516 kubelet[3041]: E0120 01:55:50.553305 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.602528 kubelet[3041]: E0120 01:55:50.601259 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.602528 kubelet[3041]: W0120 01:55:50.601320 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.602528 kubelet[3041]: E0120 01:55:50.601416 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.620056 kubelet[3041]: E0120 01:55:50.619640 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.620056 kubelet[3041]: W0120 01:55:50.619702 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.620056 kubelet[3041]: E0120 01:55:50.619733 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.643543 kubelet[3041]: E0120 01:55:50.643294 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.643543 kubelet[3041]: W0120 01:55:50.643410 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.643543 kubelet[3041]: E0120 01:55:50.643438 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.665073 kubelet[3041]: E0120 01:55:50.655153 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.665073 kubelet[3041]: W0120 01:55:50.655191 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.665073 kubelet[3041]: E0120 01:55:50.655220 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.665073 kubelet[3041]: E0120 01:55:50.664126 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.665073 kubelet[3041]: W0120 01:55:50.664148 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.665073 kubelet[3041]: E0120 01:55:50.664176 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.685087 kubelet[3041]: E0120 01:55:50.685034 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.685087 kubelet[3041]: W0120 01:55:50.685078 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.685555 kubelet[3041]: E0120 01:55:50.685116 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.705579 kubelet[3041]: E0120 01:55:50.705472 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.705579 kubelet[3041]: W0120 01:55:50.705540 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.705579 kubelet[3041]: E0120 01:55:50.705573 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.713489 kubelet[3041]: E0120 01:55:50.713289 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.713489 kubelet[3041]: W0120 01:55:50.713457 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.713684 kubelet[3041]: E0120 01:55:50.713488 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.767685 kubelet[3041]: E0120 01:55:50.752055 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.767685 kubelet[3041]: W0120 01:55:50.752094 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.767685 kubelet[3041]: E0120 01:55:50.752124 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.767685 kubelet[3041]: E0120 01:55:50.752650 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.767685 kubelet[3041]: W0120 01:55:50.752667 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.767685 kubelet[3041]: E0120 01:55:50.752688 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.785023 kubelet[3041]: E0120 01:55:50.784679 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.785023 kubelet[3041]: W0120 01:55:50.784759 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.785023 kubelet[3041]: E0120 01:55:50.784821 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.878004 kubelet[3041]: E0120 01:55:50.877953 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.878412 kubelet[3041]: W0120 01:55:50.878246 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.883437 kubelet[3041]: E0120 01:55:50.878625 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.896222 kubelet[3041]: E0120 01:55:50.892087 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.896222 kubelet[3041]: W0120 01:55:50.892126 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.896222 kubelet[3041]: E0120 01:55:50.892158 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.911161 kubelet[3041]: E0120 01:55:50.905524 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.911161 kubelet[3041]: W0120 01:55:50.906967 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.911161 kubelet[3041]: E0120 01:55:50.907009 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.912478 kubelet[3041]: E0120 01:55:50.912210 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.912765 kubelet[3041]: W0120 01:55:50.912695 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.920238 kubelet[3041]: E0120 01:55:50.915087 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.927412 kubelet[3041]: E0120 01:55:50.922590 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.927412 kubelet[3041]: W0120 01:55:50.922620 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.927412 kubelet[3041]: E0120 01:55:50.922646 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.937906 kubelet[3041]: E0120 01:55:50.932862 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.937906 kubelet[3041]: W0120 01:55:50.932903 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.937906 kubelet[3041]: E0120 01:55:50.932934 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.969430 kubelet[3041]: E0120 01:55:50.949272 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.969430 kubelet[3041]: W0120 01:55:50.949397 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.969430 kubelet[3041]: E0120 01:55:50.949431 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:50.969430 kubelet[3041]: E0120 01:55:50.951037 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.969430 kubelet[3041]: W0120 01:55:50.951053 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.969430 kubelet[3041]: E0120 01:55:50.951074 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:50.978992 kubelet[3041]: E0120 01:55:50.977557 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:50.978992 kubelet[3041]: W0120 01:55:50.977671 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:50.978992 kubelet[3041]: E0120 01:55:50.977804 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.179574 kubelet[3041]: E0120 01:55:51.178618 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:51.293050 kubelet[3041]: E0120 01:55:51.291394 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.293050 kubelet[3041]: W0120 01:55:51.291443 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.293050 kubelet[3041]: E0120 01:55:51.291475 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.296706 kubelet[3041]: E0120 01:55:51.294478 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.296706 kubelet[3041]: W0120 01:55:51.294581 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.296706 kubelet[3041]: E0120 01:55:51.294602 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.310706 kubelet[3041]: E0120 01:55:51.310264 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.310706 kubelet[3041]: W0120 01:55:51.310323 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.310706 kubelet[3041]: E0120 01:55:51.310414 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.313493 kubelet[3041]: E0120 01:55:51.312515 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.313493 kubelet[3041]: W0120 01:55:51.312530 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.313493 kubelet[3041]: E0120 01:55:51.312550 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.317619 kubelet[3041]: E0120 01:55:51.317306 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.317619 kubelet[3041]: W0120 01:55:51.317325 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.317619 kubelet[3041]: E0120 01:55:51.317407 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.318822 kubelet[3041]: E0120 01:55:51.318248 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.318822 kubelet[3041]: W0120 01:55:51.318261 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.318822 kubelet[3041]: E0120 01:55:51.318278 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.320628 kubelet[3041]: E0120 01:55:51.319917 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.320628 kubelet[3041]: W0120 01:55:51.319933 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.320628 kubelet[3041]: E0120 01:55:51.319949 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.325205 kubelet[3041]: E0120 01:55:51.324424 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.325205 kubelet[3041]: W0120 01:55:51.324442 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.325205 kubelet[3041]: E0120 01:55:51.324463 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.330513 kubelet[3041]: E0120 01:55:51.327552 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.330513 kubelet[3041]: W0120 01:55:51.327574 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.330513 kubelet[3041]: E0120 01:55:51.327596 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.330513 kubelet[3041]: E0120 01:55:51.330193 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.330513 kubelet[3041]: W0120 01:55:51.330206 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.330513 kubelet[3041]: E0120 01:55:51.330222 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.331815 kubelet[3041]: E0120 01:55:51.331543 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.331815 kubelet[3041]: W0120 01:55:51.331560 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.331815 kubelet[3041]: E0120 01:55:51.331575 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.334812 kubelet[3041]: E0120 01:55:51.334035 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.334812 kubelet[3041]: W0120 01:55:51.334049 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.334812 kubelet[3041]: E0120 01:55:51.334064 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.338483 kubelet[3041]: E0120 01:55:51.338027 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.338483 kubelet[3041]: W0120 01:55:51.338043 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.338483 kubelet[3041]: E0120 01:55:51.338060 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.340547 kubelet[3041]: E0120 01:55:51.339733 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.340547 kubelet[3041]: W0120 01:55:51.339842 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.340547 kubelet[3041]: E0120 01:55:51.339859 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.345567 kubelet[3041]: E0120 01:55:51.342768 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.345567 kubelet[3041]: W0120 01:55:51.342923 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.345567 kubelet[3041]: E0120 01:55:51.343025 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.357009 kubelet[3041]: E0120 01:55:51.353680 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.357009 kubelet[3041]: W0120 01:55:51.353713 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.357009 kubelet[3041]: E0120 01:55:51.353740 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.361693 kubelet[3041]: E0120 01:55:51.361669 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.362200 kubelet[3041]: W0120 01:55:51.362177 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.362289 kubelet[3041]: E0120 01:55:51.362275 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.368754 kubelet[3041]: E0120 01:55:51.368730 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.372047 kubelet[3041]: W0120 01:55:51.372022 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.372143 kubelet[3041]: E0120 01:55:51.372125 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.382122 kubelet[3041]: E0120 01:55:51.381728 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.382122 kubelet[3041]: W0120 01:55:51.381762 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.382122 kubelet[3041]: E0120 01:55:51.381834 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.386244 kubelet[3041]: E0120 01:55:51.386212 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.386456 kubelet[3041]: W0120 01:55:51.386431 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.386560 kubelet[3041]: E0120 01:55:51.386539 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.392062 kubelet[3041]: E0120 01:55:51.391328 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.392062 kubelet[3041]: W0120 01:55:51.391484 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.392062 kubelet[3041]: E0120 01:55:51.391513 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.392397 kubelet[3041]: E0120 01:55:51.392123 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.392397 kubelet[3041]: W0120 01:55:51.392135 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.392397 kubelet[3041]: E0120 01:55:51.392153 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.393205 kubelet[3041]: E0120 01:55:51.393016 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.393205 kubelet[3041]: W0120 01:55:51.393035 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.393205 kubelet[3041]: E0120 01:55:51.393050 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.395291 kubelet[3041]: E0120 01:55:51.395271 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.395451 kubelet[3041]: W0120 01:55:51.395434 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.395548 kubelet[3041]: E0120 01:55:51.395530 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.399961 kubelet[3041]: E0120 01:55:51.397164 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.399961 kubelet[3041]: W0120 01:55:51.397182 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.399961 kubelet[3041]: E0120 01:55:51.397197 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.402180 kubelet[3041]: E0120 01:55:51.400963 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.402180 kubelet[3041]: W0120 01:55:51.401016 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.402180 kubelet[3041]: E0120 01:55:51.401034 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.404275 kubelet[3041]: E0120 01:55:51.403486 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.404275 kubelet[3041]: W0120 01:55:51.403532 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.404275 kubelet[3041]: E0120 01:55:51.403557 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.406098 kubelet[3041]: E0120 01:55:51.405013 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.406098 kubelet[3041]: W0120 01:55:51.405026 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.406098 kubelet[3041]: E0120 01:55:51.405044 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.412722 kubelet[3041]: E0120 01:55:51.406892 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.412722 kubelet[3041]: W0120 01:55:51.410223 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.412722 kubelet[3041]: E0120 01:55:51.410249 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.417223 kubelet[3041]: E0120 01:55:51.413998 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.417223 kubelet[3041]: W0120 01:55:51.414016 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.417223 kubelet[3041]: E0120 01:55:51.414035 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.420868 kubelet[3041]: E0120 01:55:51.418617 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.420868 kubelet[3041]: W0120 01:55:51.418638 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.420868 kubelet[3041]: E0120 01:55:51.418656 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.422939 kubelet[3041]: E0120 01:55:51.422571 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.422939 kubelet[3041]: W0120 01:55:51.422613 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.422939 kubelet[3041]: E0120 01:55:51.422632 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:51.423986 kubelet[3041]: E0120 01:55:51.423646 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:51.423986 kubelet[3041]: W0120 01:55:51.423659 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:51.423986 kubelet[3041]: E0120 01:55:51.423675 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:51.427528 containerd[1641]: time="2026-01-20T01:55:51.427484297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:55:51.452574 containerd[1641]: time="2026-01-20T01:55:51.433399937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4445096" Jan 20 01:55:51.452999 containerd[1641]: time="2026-01-20T01:55:51.452952923Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:55:51.476696 containerd[1641]: time="2026-01-20T01:55:51.476642383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:55:51.491303 containerd[1641]: time="2026-01-20T01:55:51.479237756Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 4.11755804s" Jan 20 01:55:51.491303 containerd[1641]: time="2026-01-20T01:55:51.490478965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 01:55:51.520908 containerd[1641]: time="2026-01-20T01:55:51.519418721Z" level=info msg="CreateContainer within sandbox \"311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 01:55:51.611090 containerd[1641]: time="2026-01-20T01:55:51.607058717Z" level=info msg="Container 5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:55:51.670638 containerd[1641]: time="2026-01-20T01:55:51.670167770Z" level=info msg="CreateContainer within sandbox \"311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5\"" Jan 20 01:55:51.671559 containerd[1641]: time="2026-01-20T01:55:51.671496487Z" level=info msg="StartContainer for \"5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5\"" Jan 20 01:55:51.706441 containerd[1641]: time="2026-01-20T01:55:51.706195539Z" level=info msg="connecting to shim 5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5" address="unix:///run/containerd/s/f1ce25e1af03caa5247a82fb35c320afb52c8a8db9cfefbf21616e96e20aec4d" protocol=ttrpc version=3 Jan 20 01:55:52.015977 systemd[1]: Started cri-containerd-5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5.scope - libcontainer container 
5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5. Jan 20 01:55:52.218884 kubelet[3041]: E0120 01:55:52.217440 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:52.305269 kubelet[3041]: E0120 01:55:52.304752 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.308941 kubelet[3041]: W0120 01:55:52.305856 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.308941 kubelet[3041]: E0120 01:55:52.305898 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.314267 kubelet[3041]: E0120 01:55:52.310024 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.314267 kubelet[3041]: W0120 01:55:52.310131 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.314267 kubelet[3041]: E0120 01:55:52.310248 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.324824 kubelet[3041]: E0120 01:55:52.320930 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.324824 kubelet[3041]: W0120 01:55:52.320988 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.324824 kubelet[3041]: E0120 01:55:52.321020 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.329069 kubelet[3041]: E0120 01:55:52.328050 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.329069 kubelet[3041]: W0120 01:55:52.328073 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.329069 kubelet[3041]: E0120 01:55:52.328183 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.332444 kubelet[3041]: E0120 01:55:52.330600 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.332444 kubelet[3041]: W0120 01:55:52.331562 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.333276 kubelet[3041]: E0120 01:55:52.332741 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.334614 kubelet[3041]: E0120 01:55:52.334229 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.334614 kubelet[3041]: W0120 01:55:52.334270 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.334614 kubelet[3041]: E0120 01:55:52.334289 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.338822 kubelet[3041]: E0120 01:55:52.338382 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.338822 kubelet[3041]: W0120 01:55:52.338402 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.338822 kubelet[3041]: E0120 01:55:52.338423 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.343135 kubelet[3041]: E0120 01:55:52.342135 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.343135 kubelet[3041]: W0120 01:55:52.342186 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.343135 kubelet[3041]: E0120 01:55:52.342208 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.345255 kubelet[3041]: E0120 01:55:52.343665 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.345255 kubelet[3041]: W0120 01:55:52.343709 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.345255 kubelet[3041]: E0120 01:55:52.343730 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.348124 kubelet[3041]: E0120 01:55:52.346961 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.348124 kubelet[3041]: W0120 01:55:52.347003 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.348124 kubelet[3041]: E0120 01:55:52.347022 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.350434 kubelet[3041]: E0120 01:55:52.349145 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.350434 kubelet[3041]: W0120 01:55:52.349188 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.350434 kubelet[3041]: E0120 01:55:52.349207 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.352247 kubelet[3041]: E0120 01:55:52.351456 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.352247 kubelet[3041]: W0120 01:55:52.351496 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.352247 kubelet[3041]: E0120 01:55:52.351515 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.360524 kubelet[3041]: E0120 01:55:52.358637 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.361668 kubelet[3041]: W0120 01:55:52.360829 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.361668 kubelet[3041]: E0120 01:55:52.361176 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.366218 kubelet[3041]: E0120 01:55:52.365102 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.366218 kubelet[3041]: W0120 01:55:52.365247 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.366218 kubelet[3041]: E0120 01:55:52.365271 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.381113 kubelet[3041]: E0120 01:55:52.370849 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.381113 kubelet[3041]: W0120 01:55:52.370869 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.381113 kubelet[3041]: E0120 01:55:52.370888 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.383678 kubelet[3041]: E0120 01:55:52.383038 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.383678 kubelet[3041]: W0120 01:55:52.383094 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.383678 kubelet[3041]: E0120 01:55:52.383125 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.403982 kubelet[3041]: E0120 01:55:52.400488 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.403982 kubelet[3041]: W0120 01:55:52.400551 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.403982 kubelet[3041]: E0120 01:55:52.400585 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.420186 kubelet[3041]: E0120 01:55:52.413402 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.420186 kubelet[3041]: W0120 01:55:52.413457 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.420186 kubelet[3041]: E0120 01:55:52.413488 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.425043 kubelet[3041]: E0120 01:55:52.421171 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.425043 kubelet[3041]: W0120 01:55:52.421219 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.425043 kubelet[3041]: E0120 01:55:52.421247 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.432934 kubelet[3041]: E0120 01:55:52.431059 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.432934 kubelet[3041]: W0120 01:55:52.431108 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.432934 kubelet[3041]: E0120 01:55:52.431140 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.436766 kubelet[3041]: E0120 01:55:52.434889 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.436766 kubelet[3041]: W0120 01:55:52.434908 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.436766 kubelet[3041]: E0120 01:55:52.434927 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.442041 kubelet[3041]: E0120 01:55:52.437640 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.442041 kubelet[3041]: W0120 01:55:52.437654 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.442041 kubelet[3041]: E0120 01:55:52.437674 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.442041 kubelet[3041]: E0120 01:55:52.441697 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.442041 kubelet[3041]: W0120 01:55:52.441715 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.442041 kubelet[3041]: E0120 01:55:52.441734 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.447774 kubelet[3041]: E0120 01:55:52.446910 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.447774 kubelet[3041]: W0120 01:55:52.446959 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.447774 kubelet[3041]: E0120 01:55:52.446981 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.451093 kubelet[3041]: E0120 01:55:52.449008 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.451093 kubelet[3041]: W0120 01:55:52.449045 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.451093 kubelet[3041]: E0120 01:55:52.449065 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.457942 kubelet[3041]: E0120 01:55:52.451888 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.457942 kubelet[3041]: W0120 01:55:52.451902 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.457942 kubelet[3041]: E0120 01:55:52.451919 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.460928 kubelet[3041]: E0120 01:55:52.460486 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.460928 kubelet[3041]: W0120 01:55:52.460526 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.460928 kubelet[3041]: E0120 01:55:52.460543 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.467501 kubelet[3041]: E0120 01:55:52.462158 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.467501 kubelet[3041]: W0120 01:55:52.462170 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.467501 kubelet[3041]: E0120 01:55:52.462186 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.467501 kubelet[3041]: E0120 01:55:52.463695 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.467501 kubelet[3041]: W0120 01:55:52.463708 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.467501 kubelet[3041]: E0120 01:55:52.463724 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.476054 kubelet[3041]: E0120 01:55:52.471131 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.476054 kubelet[3041]: W0120 01:55:52.471173 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.476054 kubelet[3041]: E0120 01:55:52.471220 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.476054 kubelet[3041]: E0120 01:55:52.472696 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.476054 kubelet[3041]: W0120 01:55:52.472708 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.476054 kubelet[3041]: E0120 01:55:52.472722 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.481402 kubelet[3041]: E0120 01:55:52.479688 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.481402 kubelet[3041]: W0120 01:55:52.479705 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.481402 kubelet[3041]: E0120 01:55:52.479758 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:55:52.500152 kubelet[3041]: E0120 01:55:52.499540 3041 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:55:52.500152 kubelet[3041]: W0120 01:55:52.499599 3041 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:55:52.500152 kubelet[3041]: E0120 01:55:52.499630 3041 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:55:52.536434 kubelet[3041]: E0120 01:55:52.536317 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:52.627000 audit[3913]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:52.627000 audit[3913]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe88a82620 a2=0 a3=7ffe88a8260c items=0 ppid=3158 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:52.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:52.684000 audit[3913]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3913 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:55:52.684000 audit[3913]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=6276 a0=3 a1=7ffe88a82620 a2=0 a3=7ffe88a8260c items=0 ppid=3158 pid=3913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:52.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:55:52.720000 audit: BPF prog-id=166 op=LOAD Jan 20 01:55:52.720000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3606 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:52.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336631323962393039366534666664313030396539623865663864 Jan 20 01:55:52.720000 audit: BPF prog-id=167 op=LOAD Jan 20 01:55:52.720000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3606 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:52.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336631323962393039366534666664313030396539623865663864 Jan 20 01:55:52.720000 audit: BPF prog-id=167 op=UNLOAD Jan 20 01:55:52.720000 audit[3858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3606 pid=3858 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:52.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336631323962393039366534666664313030396539623865663864 Jan 20 01:55:52.720000 audit: BPF prog-id=166 op=UNLOAD Jan 20 01:55:52.720000 audit[3858]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3606 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:52.720000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336631323962393039366534666664313030396539623865663864 Jan 20 01:55:52.739000 audit: BPF prog-id=168 op=LOAD Jan 20 01:55:52.739000 audit[3858]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3606 pid=3858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:55:52.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336631323962393039366534666664313030396539623865663864 Jan 20 01:55:53.139990 containerd[1641]: time="2026-01-20T01:55:53.139877039Z" level=info msg="StartContainer for 
\"5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5\" returns successfully" Jan 20 01:55:53.239160 systemd[1]: cri-containerd-5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5.scope: Deactivated successfully. Jan 20 01:55:53.279000 audit: BPF prog-id=168 op=UNLOAD Jan 20 01:55:53.306217 kernel: kauditd_printk_skb: 33 callbacks suppressed Jan 20 01:55:53.306437 kernel: audit: type=1334 audit(1768874153.279:584): prog-id=168 op=UNLOAD Jan 20 01:55:53.314415 kubelet[3041]: E0120 01:55:53.314302 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:53.375484 containerd[1641]: time="2026-01-20T01:55:53.375428195Z" level=info msg="received container exit event container_id:\"5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5\" id:\"5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5\" pid:3871 exited_at:{seconds:1768874153 nanos:346287978}" Jan 20 01:55:53.649864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5-rootfs.mount: Deactivated successfully. 
Jan 20 01:55:54.398989 kubelet[3041]: E0120 01:55:54.392461 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:54.409060 containerd[1641]: time="2026-01-20T01:55:54.405401670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 01:55:54.491838 kubelet[3041]: E0120 01:55:54.486171 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:55:54.527108 kubelet[3041]: E0120 01:55:54.526267 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:56.527835 kubelet[3041]: E0120 01:55:56.527637 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:57.539143 kubelet[3041]: E0120 01:55:57.536320 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:55:58.533112 kubelet[3041]: E0120 01:55:58.530558 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" 
podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:55:59.490749 kubelet[3041]: E0120 01:55:59.490027 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:56:00.529643 kubelet[3041]: E0120 01:56:00.526323 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:02.529733 kubelet[3041]: E0120 01:56:02.529674 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:04.493673 kubelet[3041]: E0120 01:56:04.493621 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:56:04.527749 kubelet[3041]: E0120 01:56:04.527686 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:05.541400 kubelet[3041]: E0120 01:56:05.541267 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:06.538803 kubelet[3041]: E0120 01:56:06.536825 3041 
pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:08.532713 kubelet[3041]: E0120 01:56:08.530784 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:09.518227 kubelet[3041]: E0120 01:56:09.517867 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:56:10.536416 kubelet[3041]: E0120 01:56:10.535676 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:12.540941 kubelet[3041]: E0120 01:56:12.540807 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:14.530051 kubelet[3041]: E0120 01:56:14.527912 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:14.539241 kubelet[3041]: E0120 01:56:14.531850 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:56:17.380617 kubelet[3041]: E0120 01:56:17.348437 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:17.380617 kubelet[3041]: E0120 01:56:17.352664 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:17.380617 kubelet[3041]: E0120 01:56:17.369899 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:18.726648 containerd[1641]: time="2026-01-20T01:56:18.722171538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:56:18.780229 containerd[1641]: time="2026-01-20T01:56:18.779655470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442323" Jan 20 01:56:18.789769 containerd[1641]: time="2026-01-20T01:56:18.789164648Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:56:18.856047 containerd[1641]: time="2026-01-20T01:56:18.846143971Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:56:18.856047 containerd[1641]: time="2026-01-20T01:56:18.847280777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 24.441792284s" Jan 20 01:56:18.856047 containerd[1641]: time="2026-01-20T01:56:18.847325853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 01:56:18.977674 containerd[1641]: time="2026-01-20T01:56:18.972648286Z" level=info msg="CreateContainer within sandbox \"311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:56:19.111931 containerd[1641]: time="2026-01-20T01:56:19.111487827Z" level=info msg="Container 5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:56:19.129871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1770552147.mount: Deactivated successfully. 
Jan 20 01:56:19.208247 containerd[1641]: time="2026-01-20T01:56:19.208107487Z" level=info msg="CreateContainer within sandbox \"311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904\"" Jan 20 01:56:19.222399 containerd[1641]: time="2026-01-20T01:56:19.217913425Z" level=info msg="StartContainer for \"5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904\"" Jan 20 01:56:19.238404 containerd[1641]: time="2026-01-20T01:56:19.236316929Z" level=info msg="connecting to shim 5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904" address="unix:///run/containerd/s/f1ce25e1af03caa5247a82fb35c320afb52c8a8db9cfefbf21616e96e20aec4d" protocol=ttrpc version=3 Jan 20 01:56:19.401513 systemd[1]: Started cri-containerd-5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904.scope - libcontainer container 5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904. 
Jan 20 01:56:19.574650 kubelet[3041]: E0120 01:56:19.574496 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:19.575596 kubelet[3041]: E0120 01:56:19.575523 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:56:19.850000 audit: BPF prog-id=169 op=LOAD Jan 20 01:56:19.890408 kernel: audit: type=1334 audit(1768874179.850:585): prog-id=169 op=LOAD Jan 20 01:56:19.896463 kernel: audit: type=1300 audit(1768874179.850:585): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3606 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:19.850000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3606 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:19.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383761313832623162616663343964623334613361373436663336 Jan 20 01:56:19.942656 kernel: audit: type=1327 audit(1768874179.850:585): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383761313832623162616663343964623334613361373436663336 Jan 20 01:56:19.850000 audit: BPF prog-id=170 op=LOAD Jan 20 01:56:20.007045 kernel: audit: type=1334 audit(1768874179.850:586): prog-id=170 op=LOAD Jan 20 01:56:20.007186 kernel: audit: type=1300 audit(1768874179.850:586): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3606 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:19.850000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3606 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:20.059728 kernel: audit: type=1327 audit(1768874179.850:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383761313832623162616663343964623334613361373436663336 Jan 20 01:56:19.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383761313832623162616663343964623334613361373436663336 Jan 20 01:56:20.096746 kernel: audit: type=1334 audit(1768874179.850:587): prog-id=170 op=UNLOAD Jan 20 01:56:20.096900 kernel: audit: type=1300 audit(1768874179.850:587): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3606 
pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:19.850000 audit: BPF prog-id=170 op=UNLOAD Jan 20 01:56:19.850000 audit[3955]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3606 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:20.170166 containerd[1641]: time="2026-01-20T01:56:20.164661379Z" level=info msg="StartContainer for \"5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904\" returns successfully" Jan 20 01:56:19.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383761313832623162616663343964623334613361373436663336 Jan 20 01:56:20.239578 kernel: audit: type=1327 audit(1768874179.850:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383761313832623162616663343964623334613361373436663336 Jan 20 01:56:19.850000 audit: BPF prog-id=169 op=UNLOAD Jan 20 01:56:19.850000 audit[3955]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3606 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:20.259992 kernel: audit: type=1334 audit(1768874179.850:588): prog-id=169 op=UNLOAD Jan 20 01:56:19.850000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383761313832623162616663343964623334613361373436663336 Jan 20 01:56:19.850000 audit: BPF prog-id=171 op=LOAD Jan 20 01:56:19.850000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3606 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:56:19.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531383761313832623162616663343964623334613361373436663336 Jan 20 01:56:20.620810 kubelet[3041]: E0120 01:56:20.620278 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:21.552789 kubelet[3041]: E0120 01:56:21.548758 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:21.627250 kubelet[3041]: E0120 01:56:21.626829 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:23.536029 kubelet[3041]: E0120 01:56:23.532460 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:24.593530 kubelet[3041]: E0120 01:56:24.588202 3041 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 20 01:56:25.535970 kubelet[3041]: E0120 01:56:25.535517 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:26.041971 systemd[1]: cri-containerd-5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904.scope: Deactivated successfully. Jan 20 01:56:26.042490 systemd[1]: cri-containerd-5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904.scope: Consumed 2.047s CPU time, 178.5M memory peak, 3.3M read from disk, 171.3M written to disk. Jan 20 01:56:26.080000 audit: BPF prog-id=171 op=UNLOAD Jan 20 01:56:26.095059 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 01:56:26.095244 kernel: audit: type=1334 audit(1768874186.080:590): prog-id=171 op=UNLOAD Jan 20 01:56:26.135090 containerd[1641]: time="2026-01-20T01:56:26.131331807Z" level=info msg="received container exit event container_id:\"5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904\" id:\"5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904\" pid:3968 exited_at:{seconds:1768874186 nanos:78071081}" Jan 20 01:56:26.405611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904-rootfs.mount: Deactivated successfully. 
Jan 20 01:56:27.541780 kubelet[3041]: E0120 01:56:27.540532 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:27.783175 kubelet[3041]: E0120 01:56:27.780818 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:27.806136 containerd[1641]: time="2026-01-20T01:56:27.803065590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 01:56:29.531896 kubelet[3041]: E0120 01:56:29.531508 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:31.590801 systemd[1]: Created slice kubepods-besteffort-podac1c9092_8cef_4868_9089_0927692efc39.slice - libcontainer container kubepods-besteffort-podac1c9092_8cef_4868_9089_0927692efc39.slice. Jan 20 01:56:31.678461 containerd[1641]: time="2026-01-20T01:56:31.677768876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:32.360767 systemd[1]: Created slice kubepods-burstable-pod888a237a_dea3_4279_b3e9_e88855e903cc.slice - libcontainer container kubepods-burstable-pod888a237a_dea3_4279_b3e9_e88855e903cc.slice. 
Jan 20 01:56:32.432097 kubelet[3041]: I0120 01:56:32.425742 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsh9m\" (UniqueName: \"kubernetes.io/projected/888a237a-dea3-4279-b3e9-e88855e903cc-kube-api-access-dsh9m\") pod \"coredns-66bc5c9577-xgrtg\" (UID: \"888a237a-dea3-4279-b3e9-e88855e903cc\") " pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:56:32.432097 kubelet[3041]: I0120 01:56:32.425854 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/888a237a-dea3-4279-b3e9-e88855e903cc-config-volume\") pod \"coredns-66bc5c9577-xgrtg\" (UID: \"888a237a-dea3-4279-b3e9-e88855e903cc\") " pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:56:32.463647 systemd[1]: Created slice kubepods-besteffort-pode90a4c3c_c5d4_4dd1_99b0_686f2634a4b0.slice - libcontainer container kubepods-besteffort-pode90a4c3c_c5d4_4dd1_99b0_686f2634a4b0.slice. Jan 20 01:56:32.484208 systemd[1]: Created slice kubepods-burstable-podb4e4578c_79c2_452b_9829_4499e381b357.slice - libcontainer container kubepods-burstable-podb4e4578c_79c2_452b_9829_4499e381b357.slice. Jan 20 01:56:32.501750 systemd[1]: Created slice kubepods-besteffort-pod15d966be_bae7_42a3_83b7_ced10b64bcb2.slice - libcontainer container kubepods-besteffort-pod15d966be_bae7_42a3_83b7_ced10b64bcb2.slice. 
Jan 20 01:56:32.527417 kubelet[3041]: I0120 01:56:32.527304 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0-calico-apiserver-certs\") pod \"calico-apiserver-6576c69f97-z9lbz\" (UID: \"e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0\") " pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:56:32.527735 kubelet[3041]: I0120 01:56:32.527662 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c74sn\" (UniqueName: \"kubernetes.io/projected/b4e4578c-79c2-452b-9829-4499e381b357-kube-api-access-c74sn\") pod \"coredns-66bc5c9577-ddv57\" (UID: \"b4e4578c-79c2-452b-9829-4499e381b357\") " pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:56:32.532865 kubelet[3041]: I0120 01:56:32.532704 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjjv\" (UniqueName: \"kubernetes.io/projected/e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0-kube-api-access-cbjjv\") pod \"calico-apiserver-6576c69f97-z9lbz\" (UID: \"e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0\") " pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:56:32.532865 kubelet[3041]: I0120 01:56:32.532788 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4e4578c-79c2-452b-9829-4499e381b357-config-volume\") pod \"coredns-66bc5c9577-ddv57\" (UID: \"b4e4578c-79c2-452b-9829-4499e381b357\") " pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:56:32.561025 systemd[1]: Created slice kubepods-besteffort-podebbd5234_bb75_4e8e_91be_76d2ac5f3ae5.slice - libcontainer container kubepods-besteffort-podebbd5234_bb75_4e8e_91be_76d2ac5f3ae5.slice. 
Jan 20 01:56:32.586439 systemd[1]: Created slice kubepods-besteffort-pod738fa74e_ddb6_4c59_8db5_d8c8658e06b6.slice - libcontainer container kubepods-besteffort-pod738fa74e_ddb6_4c59_8db5_d8c8658e06b6.slice. Jan 20 01:56:32.634390 kubelet[3041]: I0120 01:56:32.634045 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlkqr\" (UniqueName: \"kubernetes.io/projected/738fa74e-ddb6-4c59-8db5-d8c8658e06b6-kube-api-access-jlkqr\") pod \"goldmane-7c778bb748-pxsrr\" (UID: \"738fa74e-ddb6-4c59-8db5-d8c8658e06b6\") " pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:56:32.634390 kubelet[3041]: I0120 01:56:32.634132 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57304ae8-4142-4837-ab19-941e654eb081-calico-apiserver-certs\") pod \"calico-apiserver-776d4dc5d4-8w9l5\" (UID: \"57304ae8-4142-4837-ab19-941e654eb081\") " pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:56:32.634390 kubelet[3041]: I0120 01:56:32.634164 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zspcr\" (UniqueName: \"kubernetes.io/projected/57304ae8-4142-4837-ab19-941e654eb081-kube-api-access-zspcr\") pod \"calico-apiserver-776d4dc5d4-8w9l5\" (UID: \"57304ae8-4142-4837-ab19-941e654eb081\") " pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:56:32.634390 kubelet[3041]: I0120 01:56:32.634203 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/738fa74e-ddb6-4c59-8db5-d8c8658e06b6-goldmane-key-pair\") pod \"goldmane-7c778bb748-pxsrr\" (UID: \"738fa74e-ddb6-4c59-8db5-d8c8658e06b6\") " pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:56:32.634390 kubelet[3041]: I0120 01:56:32.634236 3041 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5965d\" (UniqueName: \"kubernetes.io/projected/ac58d15d-4067-466e-a772-59e3b5476a8c-kube-api-access-5965d\") pod \"whisker-7b47f5fdc6-mt44k\" (UID: \"ac58d15d-4067-466e-a772-59e3b5476a8c\") " pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:56:32.634683 kubelet[3041]: I0120 01:56:32.634283 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15d966be-bae7-42a3-83b7-ced10b64bcb2-calico-apiserver-certs\") pod \"calico-apiserver-6576c69f97-s8m7m\" (UID: \"15d966be-bae7-42a3-83b7-ced10b64bcb2\") " pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:56:32.634683 kubelet[3041]: I0120 01:56:32.634305 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gq6t\" (UniqueName: \"kubernetes.io/projected/15d966be-bae7-42a3-83b7-ced10b64bcb2-kube-api-access-8gq6t\") pod \"calico-apiserver-6576c69f97-s8m7m\" (UID: \"15d966be-bae7-42a3-83b7-ced10b64bcb2\") " pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:56:32.634683 kubelet[3041]: I0120 01:56:32.634328 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac58d15d-4067-466e-a772-59e3b5476a8c-whisker-ca-bundle\") pod \"whisker-7b47f5fdc6-mt44k\" (UID: \"ac58d15d-4067-466e-a772-59e3b5476a8c\") " pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:56:32.634683 kubelet[3041]: I0120 01:56:32.634447 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/738fa74e-ddb6-4c59-8db5-d8c8658e06b6-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-pxsrr\" (UID: \"738fa74e-ddb6-4c59-8db5-d8c8658e06b6\") " 
pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:56:32.634683 kubelet[3041]: I0120 01:56:32.634470 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5-tigera-ca-bundle\") pod \"calico-kube-controllers-7cb6ddc686-fcv7l\" (UID: \"ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5\") " pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:56:32.634912 kubelet[3041]: I0120 01:56:32.634492 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlj9g\" (UniqueName: \"kubernetes.io/projected/ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5-kube-api-access-hlj9g\") pod \"calico-kube-controllers-7cb6ddc686-fcv7l\" (UID: \"ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5\") " pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:56:32.634912 kubelet[3041]: I0120 01:56:32.634516 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac58d15d-4067-466e-a772-59e3b5476a8c-whisker-backend-key-pair\") pod \"whisker-7b47f5fdc6-mt44k\" (UID: \"ac58d15d-4067-466e-a772-59e3b5476a8c\") " pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:56:32.634912 kubelet[3041]: I0120 01:56:32.634556 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738fa74e-ddb6-4c59-8db5-d8c8658e06b6-config\") pod \"goldmane-7c778bb748-pxsrr\" (UID: \"738fa74e-ddb6-4c59-8db5-d8c8658e06b6\") " pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:56:32.650528 systemd[1]: Created slice kubepods-besteffort-podac58d15d_4067_466e_a772_59e3b5476a8c.slice - libcontainer container kubepods-besteffort-podac58d15d_4067_466e_a772_59e3b5476a8c.slice. 
Jan 20 01:56:32.744787 systemd[1]: Created slice kubepods-besteffort-pod57304ae8_4142_4837_ab19_941e654eb081.slice - libcontainer container kubepods-besteffort-pod57304ae8_4142_4837_ab19_941e654eb081.slice. Jan 20 01:56:33.034509 kubelet[3041]: E0120 01:56:33.033631 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:33.100819 containerd[1641]: time="2026-01-20T01:56:33.099464392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:33.109966 containerd[1641]: time="2026-01-20T01:56:33.105562605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,}" Jan 20 01:56:33.126800 kubelet[3041]: E0120 01:56:33.113716 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:33.146712 containerd[1641]: time="2026-01-20T01:56:33.145582228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,}" Jan 20 01:56:33.179552 containerd[1641]: time="2026-01-20T01:56:33.171986644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:33.179552 containerd[1641]: time="2026-01-20T01:56:33.172112301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:33.222526 containerd[1641]: time="2026-01-20T01:56:33.222474392Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:33.491094 containerd[1641]: time="2026-01-20T01:56:33.486597832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:33.537441 containerd[1641]: time="2026-01-20T01:56:33.537253476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:33.705433 containerd[1641]: time="2026-01-20T01:56:33.699191134Z" level=error msg="Failed to destroy network for sandbox \"1690c7ae8d44f793119a27c689817c23fbb13ecb05a6aca450ac3c6d68342523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:33.761809 containerd[1641]: time="2026-01-20T01:56:33.753322855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1690c7ae8d44f793119a27c689817c23fbb13ecb05a6aca450ac3c6d68342523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:33.763613 kubelet[3041]: E0120 01:56:33.763320 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1690c7ae8d44f793119a27c689817c23fbb13ecb05a6aca450ac3c6d68342523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:33.768013 kubelet[3041]: E0120 01:56:33.763651 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1690c7ae8d44f793119a27c689817c23fbb13ecb05a6aca450ac3c6d68342523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:56:33.768013 kubelet[3041]: E0120 01:56:33.763684 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1690c7ae8d44f793119a27c689817c23fbb13ecb05a6aca450ac3c6d68342523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:56:33.768013 kubelet[3041]: E0120 01:56:33.763755 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1690c7ae8d44f793119a27c689817c23fbb13ecb05a6aca450ac3c6d68342523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:33.848608 systemd[1]: run-netns-cni\x2d41efc48a\x2d7555\x2d8ec5\x2d5018\x2d4425bb2847e9.mount: Deactivated successfully. 
Jan 20 01:56:35.369487 containerd[1641]: time="2026-01-20T01:56:35.369421686Z" level=error msg="Failed to destroy network for sandbox \"6ea698fc8acf9622766e759d41b5fff285487eb888b161c794e64b5b71f53e8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:35.391744 systemd[1]: run-netns-cni\x2d81447b49\x2d189a\x2d5aea\x2dd2c2\x2d058ec278f37c.mount: Deactivated successfully. Jan 20 01:56:35.468521 containerd[1641]: time="2026-01-20T01:56:35.467714272Z" level=error msg="Failed to destroy network for sandbox \"2737814d3147ee318224cbf75fbc59bdedc48ff2a00528d2111ff90916828142\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:35.497208 containerd[1641]: time="2026-01-20T01:56:35.494323274Z" level=error msg="Failed to destroy network for sandbox \"a8ea0eea646788bd623d52d6d2021b354e1e7dda2a0719523f99e1b609ede507\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:35.532483 systemd[1]: run-netns-cni\x2dc6cf304b\x2de5bf\x2df6e5\x2d0476\x2d9419b455a627.mount: Deactivated successfully. Jan 20 01:56:35.544096 systemd[1]: run-netns-cni\x2d8bd928b7\x2d790c\x2d1520\x2d2298\x2dcd9e09758932.mount: Deactivated successfully. 
Jan 20 01:56:35.600409 containerd[1641]: time="2026-01-20T01:56:35.597195352Z" level=error msg="Failed to destroy network for sandbox \"a6df474613552ab7d240a0aa1c267020d134a3c4df912babb069c6d5fa248eda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:35.617677 systemd[1]: run-netns-cni\x2d7e1c2f0f\x2d37d6\x2dc160\x2dd863\x2d267736a1b03a.mount: Deactivated successfully. Jan 20 01:56:35.726979 containerd[1641]: time="2026-01-20T01:56:35.723526010Z" level=error msg="Failed to destroy network for sandbox \"2e55d04290b90aa99c64c32eca00ed00f64f9eb4cdd1fe634548665d59cb25c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:35.754183 systemd[1]: run-netns-cni\x2d9044d163\x2d5b64\x2d961c\x2d47cb\x2de4719a8c9e6d.mount: Deactivated successfully. 
Jan 20 01:56:35.772963 containerd[1641]: time="2026-01-20T01:56:35.772771556Z" level=error msg="Failed to destroy network for sandbox \"9a359d3225d9fa8a032fef8b8377c135056a4afefd281e1b015a5106ec4b65a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:35.843898 containerd[1641]: time="2026-01-20T01:56:35.839805726Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea698fc8acf9622766e759d41b5fff285487eb888b161c794e64b5b71f53e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:35.844465 kubelet[3041]: E0120 01:56:35.842149 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea698fc8acf9622766e759d41b5fff285487eb888b161c794e64b5b71f53e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:35.844465 kubelet[3041]: E0120 01:56:35.842230 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea698fc8acf9622766e759d41b5fff285487eb888b161c794e64b5b71f53e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:56:35.844465 kubelet[3041]: E0120 01:56:35.842259 3041 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ea698fc8acf9622766e759d41b5fff285487eb888b161c794e64b5b71f53e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:56:35.845294 kubelet[3041]: E0120 01:56:35.842329 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ea698fc8acf9622766e759d41b5fff285487eb888b161c794e64b5b71f53e8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b47f5fdc6-mt44k" podUID="ac58d15d-4067-466e-a772-59e3b5476a8c" Jan 20 01:56:36.008221 containerd[1641]: time="2026-01-20T01:56:36.004413435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2737814d3147ee318224cbf75fbc59bdedc48ff2a00528d2111ff90916828142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.008485 kubelet[3041]: E0120 01:56:36.006761 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2737814d3147ee318224cbf75fbc59bdedc48ff2a00528d2111ff90916828142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.008485 kubelet[3041]: E0120 01:56:36.006828 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2737814d3147ee318224cbf75fbc59bdedc48ff2a00528d2111ff90916828142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:56:36.008485 kubelet[3041]: E0120 01:56:36.006852 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2737814d3147ee318224cbf75fbc59bdedc48ff2a00528d2111ff90916828142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:56:36.015467 kubelet[3041]: E0120 01:56:36.008126 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2737814d3147ee318224cbf75fbc59bdedc48ff2a00528d2111ff90916828142\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:56:36.019870 containerd[1641]: time="2026-01-20T01:56:36.019698425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ea0eea646788bd623d52d6d2021b354e1e7dda2a0719523f99e1b609ede507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.020513 kubelet[3041]: E0120 01:56:36.020470 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ea0eea646788bd623d52d6d2021b354e1e7dda2a0719523f99e1b609ede507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.020695 kubelet[3041]: E0120 01:56:36.020671 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ea0eea646788bd623d52d6d2021b354e1e7dda2a0719523f99e1b609ede507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:56:36.020807 kubelet[3041]: E0120 01:56:36.020780 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ea0eea646788bd623d52d6d2021b354e1e7dda2a0719523f99e1b609ede507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:56:36.021131 kubelet[3041]: E0120 01:56:36.021090 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8ea0eea646788bd623d52d6d2021b354e1e7dda2a0719523f99e1b609ede507\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:56:36.034753 containerd[1641]: time="2026-01-20T01:56:36.033414332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6df474613552ab7d240a0aa1c267020d134a3c4df912babb069c6d5fa248eda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.035015 kubelet[3041]: E0120 01:56:36.034712 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6df474613552ab7d240a0aa1c267020d134a3c4df912babb069c6d5fa248eda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 20 01:56:36.048682 kubelet[3041]: E0120 01:56:36.038483 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6df474613552ab7d240a0aa1c267020d134a3c4df912babb069c6d5fa248eda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:56:36.048682 kubelet[3041]: E0120 01:56:36.038531 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6df474613552ab7d240a0aa1c267020d134a3c4df912babb069c6d5fa248eda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:56:36.048682 kubelet[3041]: E0120 01:56:36.038636 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6df474613552ab7d240a0aa1c267020d134a3c4df912babb069c6d5fa248eda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xgrtg" podUID="888a237a-dea3-4279-b3e9-e88855e903cc" Jan 20 01:56:36.078302 containerd[1641]: time="2026-01-20T01:56:36.078086899Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e55d04290b90aa99c64c32eca00ed00f64f9eb4cdd1fe634548665d59cb25c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.081065 kubelet[3041]: E0120 01:56:36.080776 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e55d04290b90aa99c64c32eca00ed00f64f9eb4cdd1fe634548665d59cb25c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.081065 kubelet[3041]: E0120 01:56:36.080860 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e55d04290b90aa99c64c32eca00ed00f64f9eb4cdd1fe634548665d59cb25c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:56:36.081065 kubelet[3041]: E0120 01:56:36.080888 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e55d04290b90aa99c64c32eca00ed00f64f9eb4cdd1fe634548665d59cb25c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:56:36.081293 kubelet[3041]: E0120 01:56:36.080989 3041 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e55d04290b90aa99c64c32eca00ed00f64f9eb4cdd1fe634548665d59cb25c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:56:36.103845 containerd[1641]: time="2026-01-20T01:56:36.102603993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a359d3225d9fa8a032fef8b8377c135056a4afefd281e1b015a5106ec4b65a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.146393 containerd[1641]: time="2026-01-20T01:56:36.128675323Z" level=error msg="Failed to destroy network for sandbox \"b30bad0bf8efd7544f81aa5114c1469e38ff6a051b22b9e77f22f5d03db3c28c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.146466 kubelet[3041]: E0120 01:56:36.113563 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9a359d3225d9fa8a032fef8b8377c135056a4afefd281e1b015a5106ec4b65a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.146466 kubelet[3041]: E0120 01:56:36.113642 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a359d3225d9fa8a032fef8b8377c135056a4afefd281e1b015a5106ec4b65a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:56:36.146466 kubelet[3041]: E0120 01:56:36.113670 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a359d3225d9fa8a032fef8b8377c135056a4afefd281e1b015a5106ec4b65a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:56:36.146614 kubelet[3041]: E0120 01:56:36.113736 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a359d3225d9fa8a032fef8b8377c135056a4afefd281e1b015a5106ec4b65a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:56:36.217437 containerd[1641]: time="2026-01-20T01:56:36.213763619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b30bad0bf8efd7544f81aa5114c1469e38ff6a051b22b9e77f22f5d03db3c28c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.220716 kubelet[3041]: E0120 01:56:36.214200 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b30bad0bf8efd7544f81aa5114c1469e38ff6a051b22b9e77f22f5d03db3c28c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.220716 kubelet[3041]: E0120 01:56:36.214286 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b30bad0bf8efd7544f81aa5114c1469e38ff6a051b22b9e77f22f5d03db3c28c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:56:36.220716 kubelet[3041]: E0120 01:56:36.214310 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b30bad0bf8efd7544f81aa5114c1469e38ff6a051b22b9e77f22f5d03db3c28c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:56:36.222205 kubelet[3041]: E0120 01:56:36.214457 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b30bad0bf8efd7544f81aa5114c1469e38ff6a051b22b9e77f22f5d03db3c28c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ddv57" podUID="b4e4578c-79c2-452b-9829-4499e381b357" Jan 20 01:56:36.347456 containerd[1641]: time="2026-01-20T01:56:36.347068657Z" level=error msg="Failed to destroy network for sandbox \"b13d057957dea94b9dc83f95c7a96e5469c79e6add8a5671daac7185c583d8c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.388555 systemd[1]: run-netns-cni\x2d34fbbc99\x2d71dd\x2d0c74\x2dd959\x2d15c52a3e6b22.mount: Deactivated successfully. Jan 20 01:56:36.388736 systemd[1]: run-netns-cni\x2d2865bc6d\x2d8296\x2d74c8\x2dbc46\x2d4eeb2d1aceea.mount: Deactivated successfully. Jan 20 01:56:36.388832 systemd[1]: run-netns-cni\x2d55217131\x2d5def\x2dc274\x2d7bd7\x2d8f79f0bf299a.mount: Deactivated successfully. 
Jan 20 01:56:36.425148 containerd[1641]: time="2026-01-20T01:56:36.424971103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13d057957dea94b9dc83f95c7a96e5469c79e6add8a5671daac7185c583d8c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.427582 kubelet[3041]: E0120 01:56:36.427531 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13d057957dea94b9dc83f95c7a96e5469c79e6add8a5671daac7185c583d8c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:36.430247 kubelet[3041]: E0120 01:56:36.430064 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13d057957dea94b9dc83f95c7a96e5469c79e6add8a5671daac7185c583d8c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:56:36.430247 kubelet[3041]: E0120 01:56:36.430112 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b13d057957dea94b9dc83f95c7a96e5469c79e6add8a5671daac7185c583d8c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:56:36.430247 kubelet[3041]: E0120 01:56:36.430181 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b13d057957dea94b9dc83f95c7a96e5469c79e6add8a5671daac7185c583d8c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:56:45.633530 containerd[1641]: time="2026-01-20T01:56:45.632576899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:46.449073 containerd[1641]: time="2026-01-20T01:56:46.444113064Z" level=error msg="Failed to destroy network for sandbox \"b98bf6877214db940a434fa6ccccd46261d406b409cc7550a8b1f454b4fda295\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:46.458997 systemd[1]: run-netns-cni\x2dbcea2a18\x2de6db\x2ddcfd\x2d1e60\x2d00554a2a8413.mount: Deactivated successfully. 
Jan 20 01:56:46.496498 containerd[1641]: time="2026-01-20T01:56:46.496399108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98bf6877214db940a434fa6ccccd46261d406b409cc7550a8b1f454b4fda295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:46.497652 kubelet[3041]: E0120 01:56:46.497195 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98bf6877214db940a434fa6ccccd46261d406b409cc7550a8b1f454b4fda295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:46.497652 kubelet[3041]: E0120 01:56:46.497282 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98bf6877214db940a434fa6ccccd46261d406b409cc7550a8b1f454b4fda295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:56:46.497652 kubelet[3041]: E0120 01:56:46.497313 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98bf6877214db940a434fa6ccccd46261d406b409cc7550a8b1f454b4fda295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" 
Jan 20 01:56:46.498446 kubelet[3041]: E0120 01:56:46.497442 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b98bf6877214db940a434fa6ccccd46261d406b409cc7550a8b1f454b4fda295\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:56:46.561676 containerd[1641]: time="2026-01-20T01:56:46.558049684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:46.603396 containerd[1641]: time="2026-01-20T01:56:46.590635160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:47.599632 kubelet[3041]: E0120 01:56:47.580715 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:47.627838 containerd[1641]: time="2026-01-20T01:56:47.602063176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,}" Jan 20 01:56:47.627838 containerd[1641]: time="2026-01-20T01:56:47.608922924Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:47.700654 containerd[1641]: time="2026-01-20T01:56:47.689438369Z" level=error msg="Failed to destroy network for sandbox \"c0e5b37c75dc93df6385831b06053478e27e196447fbd2b383da0b11dd64349c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:47.739929 systemd[1]: run-netns-cni\x2d74db142d\x2db269\x2d3252\x2daf3d\x2de3fa0e047c8b.mount: Deactivated successfully. Jan 20 01:56:48.106024 containerd[1641]: time="2026-01-20T01:56:48.104185851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0e5b37c75dc93df6385831b06053478e27e196447fbd2b383da0b11dd64349c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:48.115662 kubelet[3041]: E0120 01:56:48.114909 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0e5b37c75dc93df6385831b06053478e27e196447fbd2b383da0b11dd64349c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:48.115662 kubelet[3041]: E0120 01:56:48.115281 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0e5b37c75dc93df6385831b06053478e27e196447fbd2b383da0b11dd64349c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:56:48.115662 kubelet[3041]: E0120 01:56:48.115414 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0e5b37c75dc93df6385831b06053478e27e196447fbd2b383da0b11dd64349c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:56:48.122262 kubelet[3041]: E0120 01:56:48.117687 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0e5b37c75dc93df6385831b06053478e27e196447fbd2b383da0b11dd64349c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b47f5fdc6-mt44k" podUID="ac58d15d-4067-466e-a772-59e3b5476a8c" Jan 20 01:56:48.130943 containerd[1641]: time="2026-01-20T01:56:48.127781957Z" level=error msg="Failed to destroy network for sandbox \"02de50b8b66d1c58bde15c0c14af62423afa5bfa8b5ca18d30919f031dbd2cdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:48.144931 systemd[1]: run-netns-cni\x2d2a5033c0\x2d6bdd\x2ddb17\x2d6572\x2db431023ed4c0.mount: Deactivated 
successfully. Jan 20 01:56:48.178636 containerd[1641]: time="2026-01-20T01:56:48.178435658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02de50b8b66d1c58bde15c0c14af62423afa5bfa8b5ca18d30919f031dbd2cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:48.182310 kubelet[3041]: E0120 01:56:48.179387 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02de50b8b66d1c58bde15c0c14af62423afa5bfa8b5ca18d30919f031dbd2cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:48.182310 kubelet[3041]: E0120 01:56:48.181571 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02de50b8b66d1c58bde15c0c14af62423afa5bfa8b5ca18d30919f031dbd2cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:56:48.182310 kubelet[3041]: E0120 01:56:48.181684 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02de50b8b66d1c58bde15c0c14af62423afa5bfa8b5ca18d30919f031dbd2cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:56:48.182642 kubelet[3041]: E0120 01:56:48.182037 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02de50b8b66d1c58bde15c0c14af62423afa5bfa8b5ca18d30919f031dbd2cdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:56:48.648051 containerd[1641]: time="2026-01-20T01:56:48.647846127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:49.416441 containerd[1641]: time="2026-01-20T01:56:49.374157350Z" level=error msg="Failed to destroy network for sandbox \"89e08d17c0ab31a7020e68e8ee2153d91e45b278dfaa9e7040a22644e35e9966\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.416441 containerd[1641]: time="2026-01-20T01:56:49.391861787Z" level=error msg="Failed to destroy network for sandbox \"23d0bc659671f071746e90bfc2e0eb23d4e2b3c34009319763c82c7598f98780\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.423845 systemd[1]: 
run-netns-cni\x2d9a020686\x2dd557\x2d0330\x2db67d\x2dd4b43b842c85.mount: Deactivated successfully. Jan 20 01:56:49.441846 systemd[1]: run-netns-cni\x2dd2af9635\x2de343\x2dc043\x2da222\x2d2f641a4019aa.mount: Deactivated successfully. Jan 20 01:56:49.496606 containerd[1641]: time="2026-01-20T01:56:49.493755650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d0bc659671f071746e90bfc2e0eb23d4e2b3c34009319763c82c7598f98780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.496867 kubelet[3041]: E0120 01:56:49.495514 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d0bc659671f071746e90bfc2e0eb23d4e2b3c34009319763c82c7598f98780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.496867 kubelet[3041]: E0120 01:56:49.495588 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d0bc659671f071746e90bfc2e0eb23d4e2b3c34009319763c82c7598f98780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:56:49.496867 kubelet[3041]: E0120 01:56:49.495619 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"23d0bc659671f071746e90bfc2e0eb23d4e2b3c34009319763c82c7598f98780\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:56:49.497546 kubelet[3041]: E0120 01:56:49.495684 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23d0bc659671f071746e90bfc2e0eb23d4e2b3c34009319763c82c7598f98780\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xgrtg" podUID="888a237a-dea3-4279-b3e9-e88855e903cc" Jan 20 01:56:49.518729 containerd[1641]: time="2026-01-20T01:56:49.515695352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e08d17c0ab31a7020e68e8ee2153d91e45b278dfaa9e7040a22644e35e9966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.554597 kubelet[3041]: E0120 01:56:49.535216 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e08d17c0ab31a7020e68e8ee2153d91e45b278dfaa9e7040a22644e35e9966\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:49.554597 kubelet[3041]: E0120 01:56:49.535303 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e08d17c0ab31a7020e68e8ee2153d91e45b278dfaa9e7040a22644e35e9966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:56:49.554597 kubelet[3041]: E0120 01:56:49.535429 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89e08d17c0ab31a7020e68e8ee2153d91e45b278dfaa9e7040a22644e35e9966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:56:49.554857 kubelet[3041]: E0120 01:56:49.535507 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89e08d17c0ab31a7020e68e8ee2153d91e45b278dfaa9e7040a22644e35e9966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 
01:56:52.417995 kubelet[3041]: E0120 01:56:52.242300 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:56:54.149607 containerd[1641]: time="2026-01-20T01:56:54.149057643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,}" Jan 20 01:56:54.288237 containerd[1641]: time="2026-01-20T01:56:54.287652019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:56:54.349279 kubelet[3041]: E0120 01:56:54.343190 3041 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.102s" Jan 20 01:56:54.398757 containerd[1641]: time="2026-01-20T01:56:54.398079552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,}" Jan 20 01:56:54.441589 containerd[1641]: time="2026-01-20T01:56:54.435818788Z" level=error msg="Failed to destroy network for sandbox \"03bd87e9c1698d6c0c05d576819a32b5307633de316bdcd568a6db118a9e988d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:54.492993 systemd[1]: run-netns-cni\x2d6acd431f\x2d31b1\x2dbd69\x2d7491\x2dfbe2f7b56306.mount: Deactivated successfully. 
Jan 20 01:56:54.577247 containerd[1641]: time="2026-01-20T01:56:54.573115154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bd87e9c1698d6c0c05d576819a32b5307633de316bdcd568a6db118a9e988d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:54.580507 kubelet[3041]: E0120 01:56:54.578060 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bd87e9c1698d6c0c05d576819a32b5307633de316bdcd568a6db118a9e988d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:54.580507 kubelet[3041]: E0120 01:56:54.578182 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bd87e9c1698d6c0c05d576819a32b5307633de316bdcd568a6db118a9e988d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:56:54.580507 kubelet[3041]: E0120 01:56:54.578211 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03bd87e9c1698d6c0c05d576819a32b5307633de316bdcd568a6db118a9e988d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:56:54.581408 kubelet[3041]: E0120 01:56:54.578523 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03bd87e9c1698d6c0c05d576819a32b5307633de316bdcd568a6db118a9e988d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:56:55.266491 containerd[1641]: time="2026-01-20T01:56:55.264289186Z" level=error msg="Failed to destroy network for sandbox \"1baa476b842de2905abe8c9d9d6e9316e3edda3b717b376c3e40adf74ef4518b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:55.279836 systemd[1]: run-netns-cni\x2d18729325\x2ddcdb\x2df81d\x2d15b6\x2d90e23cf4f7c5.mount: Deactivated successfully. 
Jan 20 01:56:55.295566 containerd[1641]: time="2026-01-20T01:56:55.289147879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1baa476b842de2905abe8c9d9d6e9316e3edda3b717b376c3e40adf74ef4518b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:55.295821 kubelet[3041]: E0120 01:56:55.291399 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1baa476b842de2905abe8c9d9d6e9316e3edda3b717b376c3e40adf74ef4518b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:55.295821 kubelet[3041]: E0120 01:56:55.291507 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1baa476b842de2905abe8c9d9d6e9316e3edda3b717b376c3e40adf74ef4518b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:56:55.295821 kubelet[3041]: E0120 01:56:55.291534 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1baa476b842de2905abe8c9d9d6e9316e3edda3b717b376c3e40adf74ef4518b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:56:55.295982 kubelet[3041]: E0120 01:56:55.291598 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1baa476b842de2905abe8c9d9d6e9316e3edda3b717b376c3e40adf74ef4518b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ddv57" podUID="b4e4578c-79c2-452b-9829-4499e381b357" Jan 20 01:56:55.403445 containerd[1641]: time="2026-01-20T01:56:55.402588840Z" level=error msg="Failed to destroy network for sandbox \"b372d50ff64e177530578eadef09201328de53690846982d3c6ad621fbf5045c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:55.415314 systemd[1]: run-netns-cni\x2d1799302d\x2dee6c\x2d8292\x2df729\x2db5048cd03e02.mount: Deactivated successfully. 
Jan 20 01:56:55.433251 containerd[1641]: time="2026-01-20T01:56:55.431407555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b372d50ff64e177530578eadef09201328de53690846982d3c6ad621fbf5045c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:55.433600 kubelet[3041]: E0120 01:56:55.431742 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b372d50ff64e177530578eadef09201328de53690846982d3c6ad621fbf5045c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:55.433600 kubelet[3041]: E0120 01:56:55.431820 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b372d50ff64e177530578eadef09201328de53690846982d3c6ad621fbf5045c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:56:55.433600 kubelet[3041]: E0120 01:56:55.431858 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b372d50ff64e177530578eadef09201328de53690846982d3c6ad621fbf5045c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:56:55.437855 kubelet[3041]: E0120 01:56:55.431921 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b372d50ff64e177530578eadef09201328de53690846982d3c6ad621fbf5045c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:56:55.614274 containerd[1641]: time="2026-01-20T01:56:55.600817738Z" level=error msg="Failed to destroy network for sandbox \"73848480af29abbb321f62c143101de34fcff9eb850addfd166e1c805a235983\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:55.622256 containerd[1641]: time="2026-01-20T01:56:55.620630559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73848480af29abbb321f62c143101de34fcff9eb850addfd166e1c805a235983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:55.622545 kubelet[3041]: E0120 01:56:55.621158 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"73848480af29abbb321f62c143101de34fcff9eb850addfd166e1c805a235983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:56:55.622545 kubelet[3041]: E0120 01:56:55.621236 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73848480af29abbb321f62c143101de34fcff9eb850addfd166e1c805a235983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:56:55.622545 kubelet[3041]: E0120 01:56:55.621264 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73848480af29abbb321f62c143101de34fcff9eb850addfd166e1c805a235983\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:56:55.622691 kubelet[3041]: E0120 01:56:55.621742 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73848480af29abbb321f62c143101de34fcff9eb850addfd166e1c805a235983\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:56:55.623301 systemd[1]: run-netns-cni\x2d7644aecc\x2dbd5b\x2d9723\x2d96a9\x2d1f2b88c00e32.mount: Deactivated successfully. Jan 20 01:57:08.182033 kubelet[3041]: E0120 01:57:08.143566 3041 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.615s" Jan 20 01:57:08.367258 containerd[1641]: time="2026-01-20T01:57:08.366313134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:08.429056 containerd[1641]: time="2026-01-20T01:57:08.428872404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:08.471065 kubelet[3041]: E0120 01:57:08.452967 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:08.479740 containerd[1641]: time="2026-01-20T01:57:08.479689354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:08.540035 containerd[1641]: time="2026-01-20T01:57:08.539982081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:08.587029 containerd[1641]: time="2026-01-20T01:57:08.586889445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:08.705731 containerd[1641]: time="2026-01-20T01:57:08.697164445Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:08.707215 containerd[1641]: time="2026-01-20T01:57:08.707120020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:09.721499 kubelet[3041]: E0120 01:57:09.719093 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:09.817931 containerd[1641]: time="2026-01-20T01:57:09.817737559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:10.550807 containerd[1641]: time="2026-01-20T01:57:10.546622370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:11.637791 kubelet[3041]: E0120 01:57:11.637742 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:11.652214 kubelet[3041]: E0120 01:57:11.645131 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:11.689764 containerd[1641]: time="2026-01-20T01:57:11.689704963Z" level=error msg="Failed to destroy network for sandbox \"7638111d8c5bacb78954ee029056e5144b0cf4b06719a7dd6d0b81ff6cc56219\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 20 01:57:11.709185 systemd[1]: run-netns-cni\x2da18d4082\x2d13f1\x2d2b0c\x2d2617\x2d086b26454470.mount: Deactivated successfully. Jan 20 01:57:11.846682 containerd[1641]: time="2026-01-20T01:57:11.846423151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7638111d8c5bacb78954ee029056e5144b0cf4b06719a7dd6d0b81ff6cc56219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:11.857224 kubelet[3041]: E0120 01:57:11.857157 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7638111d8c5bacb78954ee029056e5144b0cf4b06719a7dd6d0b81ff6cc56219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:11.857985 kubelet[3041]: E0120 01:57:11.857780 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7638111d8c5bacb78954ee029056e5144b0cf4b06719a7dd6d0b81ff6cc56219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:57:11.857985 kubelet[3041]: E0120 01:57:11.857819 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7638111d8c5bacb78954ee029056e5144b0cf4b06719a7dd6d0b81ff6cc56219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:57:11.857985 kubelet[3041]: E0120 01:57:11.857886 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7638111d8c5bacb78954ee029056e5144b0cf4b06719a7dd6d0b81ff6cc56219\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:57:12.733708 containerd[1641]: time="2026-01-20T01:57:12.733641388Z" level=error msg="Failed to destroy network for sandbox \"7cc35b454152421467dedde14838e2818d5b3612645c488c7c8fa4bc863b3225\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:12.833567 systemd[1]: run-netns-cni\x2ded65183e\x2de02c\x2db233\x2dfe8f\x2daf2c2cf7afdc.mount: Deactivated successfully. Jan 20 01:57:13.037668 containerd[1641]: time="2026-01-20T01:57:12.945028778Z" level=error msg="Failed to destroy network for sandbox \"140d06bce4c39cf83ea494ca3c58689b65997da0185d547d64d25e80013c7dc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.043991 systemd[1]: run-netns-cni\x2d97d187dd\x2df798\x2da366\x2de7e5\x2ddcc70f9b6ceb.mount: Deactivated successfully. 
Jan 20 01:57:13.060831 containerd[1641]: time="2026-01-20T01:57:13.060648793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cc35b454152421467dedde14838e2818d5b3612645c488c7c8fa4bc863b3225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.112084 kubelet[3041]: E0120 01:57:13.095791 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cc35b454152421467dedde14838e2818d5b3612645c488c7c8fa4bc863b3225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.112084 kubelet[3041]: E0120 01:57:13.095866 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cc35b454152421467dedde14838e2818d5b3612645c488c7c8fa4bc863b3225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:57:13.112084 kubelet[3041]: E0120 01:57:13.095892 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cc35b454152421467dedde14838e2818d5b3612645c488c7c8fa4bc863b3225\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:57:13.116389 kubelet[3041]: E0120 01:57:13.095957 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cc35b454152421467dedde14838e2818d5b3612645c488c7c8fa4bc863b3225\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:57:13.137612 containerd[1641]: time="2026-01-20T01:57:13.124944269Z" level=error msg="Failed to destroy network for sandbox \"97714e46223090dd1c2feaed3625890b176cf597973a24b1b04c0d906975edb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.181813 systemd[1]: run-netns-cni\x2d26f58ee7\x2d0d36\x2d4f7c\x2d6c86\x2db19e15f9f811.mount: Deactivated successfully. 
Jan 20 01:57:13.231667 containerd[1641]: time="2026-01-20T01:57:13.230545701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"140d06bce4c39cf83ea494ca3c58689b65997da0185d547d64d25e80013c7dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.236259 kubelet[3041]: E0120 01:57:13.234164 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"140d06bce4c39cf83ea494ca3c58689b65997da0185d547d64d25e80013c7dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.236259 kubelet[3041]: E0120 01:57:13.234260 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"140d06bce4c39cf83ea494ca3c58689b65997da0185d547d64d25e80013c7dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:57:13.236259 kubelet[3041]: E0120 01:57:13.234553 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"140d06bce4c39cf83ea494ca3c58689b65997da0185d547d64d25e80013c7dc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:57:13.237174 kubelet[3041]: E0120 01:57:13.234617 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"140d06bce4c39cf83ea494ca3c58689b65997da0185d547d64d25e80013c7dc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xgrtg" podUID="888a237a-dea3-4279-b3e9-e88855e903cc" Jan 20 01:57:13.300055 containerd[1641]: time="2026-01-20T01:57:13.297662475Z" level=error msg="Failed to destroy network for sandbox \"714684203179d0a6c7ccd781b4c36c338e7f0339d631c160cd26a5bcb27e92b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.334723 systemd[1]: run-netns-cni\x2d11d8fcb4\x2db5f1\x2d4dbb\x2dec6f\x2d445064cc514d.mount: Deactivated successfully. 
Jan 20 01:57:13.392646 containerd[1641]: time="2026-01-20T01:57:13.313307725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97714e46223090dd1c2feaed3625890b176cf597973a24b1b04c0d906975edb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.396568 kubelet[3041]: E0120 01:57:13.393237 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97714e46223090dd1c2feaed3625890b176cf597973a24b1b04c0d906975edb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.409003 kubelet[3041]: E0120 01:57:13.397102 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97714e46223090dd1c2feaed3625890b176cf597973a24b1b04c0d906975edb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:57:13.409003 kubelet[3041]: E0120 01:57:13.397226 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97714e46223090dd1c2feaed3625890b176cf597973a24b1b04c0d906975edb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:57:13.409771 containerd[1641]: time="2026-01-20T01:57:13.407586541Z" level=error msg="Failed to destroy network for sandbox \"89dc7040486a4e79fca2ba8d107f72d2a678a6f8e697503cc3c4c13bd273cd60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.432842 kubelet[3041]: E0120 01:57:13.432780 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97714e46223090dd1c2feaed3625890b176cf597973a24b1b04c0d906975edb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:57:13.437820 systemd[1]: run-netns-cni\x2d8ace129b\x2dea8b\x2d248d\x2d9d72\x2d3ba8b14799ce.mount: Deactivated successfully. 
Jan 20 01:57:13.502638 containerd[1641]: time="2026-01-20T01:57:13.483929598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"714684203179d0a6c7ccd781b4c36c338e7f0339d631c160cd26a5bcb27e92b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.503174 kubelet[3041]: E0120 01:57:13.503124 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714684203179d0a6c7ccd781b4c36c338e7f0339d631c160cd26a5bcb27e92b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.505629 kubelet[3041]: E0120 01:57:13.505533 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714684203179d0a6c7ccd781b4c36c338e7f0339d631c160cd26a5bcb27e92b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:57:13.505903 kubelet[3041]: E0120 01:57:13.505807 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714684203179d0a6c7ccd781b4c36c338e7f0339d631c160cd26a5bcb27e92b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:57:13.517592 containerd[1641]: time="2026-01-20T01:57:13.516058812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89dc7040486a4e79fca2ba8d107f72d2a678a6f8e697503cc3c4c13bd273cd60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.517818 kubelet[3041]: E0120 01:57:13.516515 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89dc7040486a4e79fca2ba8d107f72d2a678a6f8e697503cc3c4c13bd273cd60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.517818 kubelet[3041]: E0120 01:57:13.516585 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89dc7040486a4e79fca2ba8d107f72d2a678a6f8e697503cc3c4c13bd273cd60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:57:13.517818 kubelet[3041]: E0120 01:57:13.516608 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89dc7040486a4e79fca2ba8d107f72d2a678a6f8e697503cc3c4c13bd273cd60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:57:13.518063 kubelet[3041]: E0120 01:57:13.516666 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89dc7040486a4e79fca2ba8d107f72d2a678a6f8e697503cc3c4c13bd273cd60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ddv57" podUID="b4e4578c-79c2-452b-9829-4499e381b357" Jan 20 01:57:13.518578 kubelet[3041]: E0120 01:57:13.518534 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"714684203179d0a6c7ccd781b4c36c338e7f0339d631c160cd26a5bcb27e92b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:57:13.774898 containerd[1641]: time="2026-01-20T01:57:13.759843821Z" level=error msg="Failed to destroy network for sandbox \"25f9fbf1c480d5f79e64ea331604472d43f63c9e91e9efb32da8751c444f09ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.780113 systemd[1]: run-netns-cni\x2d3af73b78\x2d60a3\x2d95dc\x2d260b\x2d7e0395b679fe.mount: Deactivated successfully. Jan 20 01:57:13.795194 containerd[1641]: time="2026-01-20T01:57:13.787034994Z" level=error msg="Failed to destroy network for sandbox \"853d62cdb119ae855f5701263271895be3179f51cdc04ac9adc71de2f6b3f02f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.832492 systemd[1]: run-netns-cni\x2d3040c2ff\x2d0000\x2d2c3b\x2d696f\x2d25b9b56541ee.mount: Deactivated successfully. Jan 20 01:57:13.854663 containerd[1641]: time="2026-01-20T01:57:13.853924706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25f9fbf1c480d5f79e64ea331604472d43f63c9e91e9efb32da8751c444f09ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.877177 kubelet[3041]: E0120 01:57:13.876595 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25f9fbf1c480d5f79e64ea331604472d43f63c9e91e9efb32da8751c444f09ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.877177 kubelet[3041]: E0120 01:57:13.876679 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"25f9fbf1c480d5f79e64ea331604472d43f63c9e91e9efb32da8751c444f09ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:57:13.877177 kubelet[3041]: E0120 01:57:13.876711 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25f9fbf1c480d5f79e64ea331604472d43f63c9e91e9efb32da8751c444f09ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:57:13.877614 kubelet[3041]: E0120 01:57:13.877524 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25f9fbf1c480d5f79e64ea331604472d43f63c9e91e9efb32da8751c444f09ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b47f5fdc6-mt44k" podUID="ac58d15d-4067-466e-a772-59e3b5476a8c" Jan 20 01:57:13.912588 containerd[1641]: time="2026-01-20T01:57:13.903561069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"853d62cdb119ae855f5701263271895be3179f51cdc04ac9adc71de2f6b3f02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.912829 kubelet[3041]: E0120 01:57:13.907935 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853d62cdb119ae855f5701263271895be3179f51cdc04ac9adc71de2f6b3f02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:13.921129 kubelet[3041]: E0120 01:57:13.918417 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853d62cdb119ae855f5701263271895be3179f51cdc04ac9adc71de2f6b3f02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:57:13.921129 kubelet[3041]: E0120 01:57:13.920454 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"853d62cdb119ae855f5701263271895be3179f51cdc04ac9adc71de2f6b3f02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:57:13.921129 kubelet[3041]: E0120 01:57:13.920698 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"853d62cdb119ae855f5701263271895be3179f51cdc04ac9adc71de2f6b3f02f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:57:14.206462 containerd[1641]: time="2026-01-20T01:57:14.206062963Z" level=error msg="Failed to destroy network for sandbox \"e5bb93fc6a3c48a517f65f4aaf273d07d038c61a1743ed7a124519253fe4554b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:14.238830 systemd[1]: run-netns-cni\x2da60b1949\x2d1899\x2d01e4\x2d7c29\x2dbcb5d82f2cd8.mount: Deactivated successfully. 
Jan 20 01:57:14.271973 containerd[1641]: time="2026-01-20T01:57:14.269876581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5bb93fc6a3c48a517f65f4aaf273d07d038c61a1743ed7a124519253fe4554b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:14.272212 kubelet[3041]: E0120 01:57:14.271096 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5bb93fc6a3c48a517f65f4aaf273d07d038c61a1743ed7a124519253fe4554b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:14.272212 kubelet[3041]: E0120 01:57:14.271325 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5bb93fc6a3c48a517f65f4aaf273d07d038c61a1743ed7a124519253fe4554b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:57:14.272212 kubelet[3041]: E0120 01:57:14.271437 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5bb93fc6a3c48a517f65f4aaf273d07d038c61a1743ed7a124519253fe4554b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:57:14.286100 kubelet[3041]: E0120 01:57:14.271727 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5bb93fc6a3c48a517f65f4aaf273d07d038c61a1743ed7a124519253fe4554b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:57:15.557147 kubelet[3041]: E0120 01:57:15.546544 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:23.543163 containerd[1641]: time="2026-01-20T01:57:23.542773484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:24.072955 containerd[1641]: time="2026-01-20T01:57:24.061011641Z" level=error msg="Failed to destroy network for sandbox \"a90af7858c469f5257b0d1cbf08fe0f65fce52a813afd25ed62c27c104e522b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:24.075758 systemd[1]: run-netns-cni\x2d74fefb75\x2d9468\x2da234\x2d6f15\x2df59db15b0494.mount: Deactivated successfully. 
Jan 20 01:57:24.150004 containerd[1641]: time="2026-01-20T01:57:24.149801627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90af7858c469f5257b0d1cbf08fe0f65fce52a813afd25ed62c27c104e522b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:24.152264 kubelet[3041]: E0120 01:57:24.151704 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90af7858c469f5257b0d1cbf08fe0f65fce52a813afd25ed62c27c104e522b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:24.152264 kubelet[3041]: E0120 01:57:24.151785 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90af7858c469f5257b0d1cbf08fe0f65fce52a813afd25ed62c27c104e522b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:57:24.152264 kubelet[3041]: E0120 01:57:24.151811 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a90af7858c469f5257b0d1cbf08fe0f65fce52a813afd25ed62c27c104e522b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" 
Jan 20 01:57:24.153003 kubelet[3041]: E0120 01:57:24.151908 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a90af7858c469f5257b0d1cbf08fe0f65fce52a813afd25ed62c27c104e522b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:57:24.594541 containerd[1641]: time="2026-01-20T01:57:24.566051205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:25.129892 containerd[1641]: time="2026-01-20T01:57:25.125628066Z" level=error msg="Failed to destroy network for sandbox \"e34263b019c8a2986f203421e5a579109184af4fb000bc63c860d11209ad42d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:25.157557 systemd[1]: run-netns-cni\x2d20199a7f\x2d2304\x2d13bb\x2d9142\x2d18a150c20091.mount: Deactivated successfully. 
Jan 20 01:57:25.168458 containerd[1641]: time="2026-01-20T01:57:25.166732759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e34263b019c8a2986f203421e5a579109184af4fb000bc63c860d11209ad42d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:25.168712 kubelet[3041]: E0120 01:57:25.167491 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e34263b019c8a2986f203421e5a579109184af4fb000bc63c860d11209ad42d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:25.168712 kubelet[3041]: E0120 01:57:25.167574 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e34263b019c8a2986f203421e5a579109184af4fb000bc63c860d11209ad42d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:57:25.168712 kubelet[3041]: E0120 01:57:25.167603 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e34263b019c8a2986f203421e5a579109184af4fb000bc63c860d11209ad42d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:57:25.169410 kubelet[3041]: E0120 01:57:25.167668 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e34263b019c8a2986f203421e5a579109184af4fb000bc63c860d11209ad42d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b47f5fdc6-mt44k" podUID="ac58d15d-4067-466e-a772-59e3b5476a8c" Jan 20 01:57:25.642268 containerd[1641]: time="2026-01-20T01:57:25.642112651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:25.679308 kubelet[3041]: E0120 01:57:25.673018 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:25.687377 containerd[1641]: time="2026-01-20T01:57:25.680074783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:26.574312 containerd[1641]: time="2026-01-20T01:57:26.569249085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:26.643954 containerd[1641]: time="2026-01-20T01:57:26.636219967Z" level=error msg="Failed to destroy network for sandbox 
\"adaa429dd6718835310e123ff2e618f370f28c4809df83c28c84e6d87f2846c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:26.719177 systemd[1]: run-netns-cni\x2da4700769\x2dfc07\x2dfa69\x2dd243\x2d9418e8a72912.mount: Deactivated successfully. Jan 20 01:57:26.964495 containerd[1641]: time="2026-01-20T01:57:26.962507432Z" level=error msg="Failed to destroy network for sandbox \"3d0da703c661591fc2f54225d01d8609f5ce805822072c758813825b4381dabb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:26.994545 systemd[1]: run-netns-cni\x2dd62e1d7c\x2d2320\x2dae07\x2d3443\x2dbb646782d2c3.mount: Deactivated successfully. Jan 20 01:57:27.008556 containerd[1641]: time="2026-01-20T01:57:27.000538697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"adaa429dd6718835310e123ff2e618f370f28c4809df83c28c84e6d87f2846c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:27.015244 kubelet[3041]: E0120 01:57:27.014656 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adaa429dd6718835310e123ff2e618f370f28c4809df83c28c84e6d87f2846c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:27.015244 kubelet[3041]: E0120 01:57:27.014787 
3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adaa429dd6718835310e123ff2e618f370f28c4809df83c28c84e6d87f2846c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:57:27.015244 kubelet[3041]: E0120 01:57:27.014818 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adaa429dd6718835310e123ff2e618f370f28c4809df83c28c84e6d87f2846c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:57:27.016406 kubelet[3041]: E0120 01:57:27.014880 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adaa429dd6718835310e123ff2e618f370f28c4809df83c28c84e6d87f2846c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:57:27.038050 containerd[1641]: time="2026-01-20T01:57:27.032714997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"3d0da703c661591fc2f54225d01d8609f5ce805822072c758813825b4381dabb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:27.038316 kubelet[3041]: E0120 01:57:27.037401 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d0da703c661591fc2f54225d01d8609f5ce805822072c758813825b4381dabb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:27.038316 kubelet[3041]: E0120 01:57:27.037478 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d0da703c661591fc2f54225d01d8609f5ce805822072c758813825b4381dabb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:57:27.038316 kubelet[3041]: E0120 01:57:27.037508 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d0da703c661591fc2f54225d01d8609f5ce805822072c758813825b4381dabb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:57:27.038573 kubelet[3041]: E0120 01:57:27.037577 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d0da703c661591fc2f54225d01d8609f5ce805822072c758813825b4381dabb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ddv57" podUID="b4e4578c-79c2-452b-9829-4499e381b357" Jan 20 01:57:27.399624 containerd[1641]: time="2026-01-20T01:57:27.397767585Z" level=error msg="Failed to destroy network for sandbox \"1ae0072600f47ab60487aa2100674923823a3416cd2691a1ee4111cb38b930b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:27.411518 systemd[1]: run-netns-cni\x2ddb73cc33\x2d9cfe\x2db529\x2d73bb\x2de8e7b1fe7580.mount: Deactivated successfully. 
Jan 20 01:57:27.449826 containerd[1641]: time="2026-01-20T01:57:27.449437545Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae0072600f47ab60487aa2100674923823a3416cd2691a1ee4111cb38b930b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:27.453540 kubelet[3041]: E0120 01:57:27.453482 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae0072600f47ab60487aa2100674923823a3416cd2691a1ee4111cb38b930b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:27.454270 kubelet[3041]: E0120 01:57:27.453813 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae0072600f47ab60487aa2100674923823a3416cd2691a1ee4111cb38b930b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:57:27.454270 kubelet[3041]: E0120 01:57:27.453855 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ae0072600f47ab60487aa2100674923823a3416cd2691a1ee4111cb38b930b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:57:27.454270 kubelet[3041]: E0120 01:57:27.453929 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ae0072600f47ab60487aa2100674923823a3416cd2691a1ee4111cb38b930b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:57:27.555210 containerd[1641]: time="2026-01-20T01:57:27.555157873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:27.570235 kubelet[3041]: E0120 01:57:27.567249 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:27.573194 containerd[1641]: time="2026-01-20T01:57:27.572661602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:28.614403 containerd[1641]: time="2026-01-20T01:57:28.607788120Z" level=error msg="Failed to destroy network for sandbox \"64605dc909d53a566ceeef745756d79b48749e6a1e41410cf61d34ff2007dd92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 20 01:57:28.639601 containerd[1641]: time="2026-01-20T01:57:28.639551726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:28.646190 systemd[1]: run-netns-cni\x2d3dd5d9f8\x2d1bdc\x2d3d63\x2d59d9\x2d01bd2720add4.mount: Deactivated successfully. Jan 20 01:57:28.674446 containerd[1641]: time="2026-01-20T01:57:28.663037775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64605dc909d53a566ceeef745756d79b48749e6a1e41410cf61d34ff2007dd92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:28.674446 containerd[1641]: time="2026-01-20T01:57:28.663757973Z" level=error msg="Failed to destroy network for sandbox \"eae65c071dc477b20260c3b014cf2ab5720012dce7c1959a82a20e81afc4b833\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:28.676199 kubelet[3041]: E0120 01:57:28.666449 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64605dc909d53a566ceeef745756d79b48749e6a1e41410cf61d34ff2007dd92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:28.676199 kubelet[3041]: E0120 01:57:28.666547 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"64605dc909d53a566ceeef745756d79b48749e6a1e41410cf61d34ff2007dd92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:57:28.676199 kubelet[3041]: E0120 01:57:28.666578 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64605dc909d53a566ceeef745756d79b48749e6a1e41410cf61d34ff2007dd92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:57:28.693410 kubelet[3041]: E0120 01:57:28.666669 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64605dc909d53a566ceeef745756d79b48749e6a1e41410cf61d34ff2007dd92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:57:28.718491 systemd[1]: run-netns-cni\x2d1873ad8c\x2d138d\x2ddec4\x2d753e\x2da0513db2dd76.mount: Deactivated successfully. 
Jan 20 01:57:28.796578 containerd[1641]: time="2026-01-20T01:57:28.796462425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae65c071dc477b20260c3b014cf2ab5720012dce7c1959a82a20e81afc4b833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:28.800421 kubelet[3041]: E0120 01:57:28.798828 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae65c071dc477b20260c3b014cf2ab5720012dce7c1959a82a20e81afc4b833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:28.800421 kubelet[3041]: E0120 01:57:28.798921 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae65c071dc477b20260c3b014cf2ab5720012dce7c1959a82a20e81afc4b833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:57:28.800421 kubelet[3041]: E0120 01:57:28.798946 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae65c071dc477b20260c3b014cf2ab5720012dce7c1959a82a20e81afc4b833\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:57:28.800597 kubelet[3041]: E0120 01:57:28.799018 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eae65c071dc477b20260c3b014cf2ab5720012dce7c1959a82a20e81afc4b833\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xgrtg" podUID="888a237a-dea3-4279-b3e9-e88855e903cc" Jan 20 01:57:29.369269 containerd[1641]: time="2026-01-20T01:57:29.368863261Z" level=error msg="Failed to destroy network for sandbox \"3d1974fb879aa31c50a8f265869fbb042a7eaf24a2705741249d0b8610c58390\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:29.385934 systemd[1]: run-netns-cni\x2dd83e9264\x2db171\x2df85c\x2d72ce\x2dced6b44e700f.mount: Deactivated successfully. 
Jan 20 01:57:29.413738 containerd[1641]: time="2026-01-20T01:57:29.412589967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1974fb879aa31c50a8f265869fbb042a7eaf24a2705741249d0b8610c58390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:29.414011 kubelet[3041]: E0120 01:57:29.412909 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1974fb879aa31c50a8f265869fbb042a7eaf24a2705741249d0b8610c58390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:29.414011 kubelet[3041]: E0120 01:57:29.412977 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1974fb879aa31c50a8f265869fbb042a7eaf24a2705741249d0b8610c58390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:57:29.414011 kubelet[3041]: E0120 01:57:29.413005 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d1974fb879aa31c50a8f265869fbb042a7eaf24a2705741249d0b8610c58390\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:57:29.414184 kubelet[3041]: E0120 01:57:29.413080 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d1974fb879aa31c50a8f265869fbb042a7eaf24a2705741249d0b8610c58390\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:57:29.584386 containerd[1641]: time="2026-01-20T01:57:29.584076912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:30.134764 containerd[1641]: time="2026-01-20T01:57:30.132199820Z" level=error msg="Failed to destroy network for sandbox \"528493e764464520828552bf421c5ea1fed63ea6e9aa638aec682519e47e27ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:30.142484 systemd[1]: run-netns-cni\x2da387585a\x2d790d\x2d903a\x2dc245\x2da8c5112cbb56.mount: Deactivated successfully. 
Jan 20 01:57:30.160271 containerd[1641]: time="2026-01-20T01:57:30.157578977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"528493e764464520828552bf421c5ea1fed63ea6e9aa638aec682519e47e27ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:30.160542 kubelet[3041]: E0120 01:57:30.158468 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528493e764464520828552bf421c5ea1fed63ea6e9aa638aec682519e47e27ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:30.160542 kubelet[3041]: E0120 01:57:30.158536 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528493e764464520828552bf421c5ea1fed63ea6e9aa638aec682519e47e27ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:57:30.160542 kubelet[3041]: E0120 01:57:30.158561 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"528493e764464520828552bf421c5ea1fed63ea6e9aa638aec682519e47e27ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:57:30.161058 kubelet[3041]: E0120 01:57:30.158664 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"528493e764464520828552bf421c5ea1fed63ea6e9aa638aec682519e47e27ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:57:31.528411 kubelet[3041]: E0120 01:57:31.528158 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:39.275744 containerd[1641]: time="2026-01-20T01:57:37.324977569Z" level=info msg="container event discarded" container=48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f type=CONTAINER_CREATED_EVENT Jan 20 01:57:40.518425 containerd[1641]: time="2026-01-20T01:57:39.942221080Z" level=info msg="container event discarded" container=48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f type=CONTAINER_STARTED_EVENT Jan 20 01:57:40.983745 containerd[1641]: time="2026-01-20T01:57:40.983495682Z" level=info msg="container event discarded" container=0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac type=CONTAINER_CREATED_EVENT Jan 20 01:57:40.983921 containerd[1641]: time="2026-01-20T01:57:40.983789008Z" level=info msg="container event discarded" 
container=0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac type=CONTAINER_STARTED_EVENT Jan 20 01:57:43.888011 systemd[1]: cri-containerd-24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0.scope: Deactivated successfully. Jan 20 01:57:43.888736 systemd[1]: cri-containerd-24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0.scope: Consumed 15.429s CPU time, 68.2M memory peak, 6M read from disk. Jan 20 01:57:43.911000 audit: BPF prog-id=172 op=LOAD Jan 20 01:57:43.973418 kernel: audit: type=1334 audit(1768874263.911:591): prog-id=172 op=LOAD Jan 20 01:57:44.032467 kernel: audit: type=1334 audit(1768874263.911:592): prog-id=88 op=UNLOAD Jan 20 01:57:44.032547 kernel: audit: type=1334 audit(1768874263.915:593): prog-id=103 op=UNLOAD Jan 20 01:57:44.032830 kernel: audit: type=1334 audit(1768874263.915:594): prog-id=107 op=UNLOAD Jan 20 01:57:43.911000 audit: BPF prog-id=88 op=UNLOAD Jan 20 01:57:43.915000 audit: BPF prog-id=103 op=UNLOAD Jan 20 01:57:43.915000 audit: BPF prog-id=107 op=UNLOAD Jan 20 01:57:44.065399 containerd[1641]: time="2026-01-20T01:57:44.065125744Z" level=info msg="received container exit event container_id:\"24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0\" id:\"24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0\" pid:2865 exit_status:1 exited_at:{seconds:1768874263 nanos:886167894}" Jan 20 01:57:44.203879 kubelet[3041]: E0120 01:57:44.203707 3041 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.95s" Jan 20 01:57:44.302155 kubelet[3041]: E0120 01:57:44.298701 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:44.320546 containerd[1641]: time="2026-01-20T01:57:44.320489395Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:44.400692 kubelet[3041]: E0120 01:57:44.398928 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:44.415423 containerd[1641]: time="2026-01-20T01:57:44.402792380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:44.450057 containerd[1641]: time="2026-01-20T01:57:44.426692217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:44.459500 containerd[1641]: time="2026-01-20T01:57:44.459021844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:44.504596 containerd[1641]: time="2026-01-20T01:57:44.495571370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:44.509166 kubelet[3041]: E0120 01:57:44.505740 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:44.509409 containerd[1641]: time="2026-01-20T01:57:44.508013091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,}" Jan 20 01:57:44.595321 containerd[1641]: time="2026-01-20T01:57:44.594071274Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,}" Jan 20 01:57:44.595321 containerd[1641]: time="2026-01-20T01:57:44.594331630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:44.611269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0-rootfs.mount: Deactivated successfully. Jan 20 01:57:45.650720 containerd[1641]: time="2026-01-20T01:57:45.649566051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:57:46.274945 containerd[1641]: time="2026-01-20T01:57:46.274456793Z" level=info msg="container event discarded" container=641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499 type=CONTAINER_CREATED_EVENT Jan 20 01:57:46.302090 containerd[1641]: time="2026-01-20T01:57:46.286067996Z" level=info msg="container event discarded" container=6fff6b6e7c60fc1cd06af5cdc8f12cf17e7a99c4a25b8d97e8607cf5ad37aad2 type=CONTAINER_CREATED_EVENT Jan 20 01:57:46.302090 containerd[1641]: time="2026-01-20T01:57:46.286176124Z" level=info msg="container event discarded" container=6fff6b6e7c60fc1cd06af5cdc8f12cf17e7a99c4a25b8d97e8607cf5ad37aad2 type=CONTAINER_STARTED_EVENT Jan 20 01:57:46.302090 containerd[1641]: time="2026-01-20T01:57:46.286196411Z" level=info msg="container event discarded" container=24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0 type=CONTAINER_CREATED_EVENT Jan 20 01:57:46.302090 containerd[1641]: time="2026-01-20T01:57:46.286206550Z" level=info msg="container event discarded" container=60ab6fc963f09feb83ca7ae3e56f5951a864ea2a2b9354cabddf47509ceca15f type=CONTAINER_CREATED_EVENT Jan 20 01:57:46.302090 containerd[1641]: 
time="2026-01-20T01:57:46.286215957Z" level=info msg="container event discarded" container=641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499 type=CONTAINER_STARTED_EVENT Jan 20 01:57:46.302090 containerd[1641]: time="2026-01-20T01:57:46.286225454Z" level=info msg="container event discarded" container=24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0 type=CONTAINER_STARTED_EVENT Jan 20 01:57:46.302090 containerd[1641]: time="2026-01-20T01:57:46.286238318Z" level=info msg="container event discarded" container=60ab6fc963f09feb83ca7ae3e56f5951a864ea2a2b9354cabddf47509ceca15f type=CONTAINER_STARTED_EVENT Jan 20 01:57:46.821620 kubelet[3041]: I0120 01:57:46.802260 3041 scope.go:117] "RemoveContainer" containerID="24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0" Jan 20 01:57:46.821620 kubelet[3041]: E0120 01:57:46.813547 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:57:47.602802 containerd[1641]: time="2026-01-20T01:57:47.602745224Z" level=info msg="CreateContainer within sandbox \"0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 20 01:57:48.941023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911197062.mount: Deactivated successfully. 
Jan 20 01:57:49.023146 containerd[1641]: time="2026-01-20T01:57:49.003243353Z" level=info msg="Container 6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:57:49.069492 containerd[1641]: time="2026-01-20T01:57:49.062665784Z" level=info msg="CreateContainer within sandbox \"0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6\"" Jan 20 01:57:49.069492 containerd[1641]: time="2026-01-20T01:57:49.068293005Z" level=info msg="StartContainer for \"6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6\"" Jan 20 01:57:49.133443 containerd[1641]: time="2026-01-20T01:57:49.131214692Z" level=info msg="connecting to shim 6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6" address="unix:///run/containerd/s/0a3f5563c8ff7307acacc838c5a88e501fbbb018b2bb68d1db9fb5ba52ad8acb" protocol=ttrpc version=3 Jan 20 01:57:49.710716 containerd[1641]: time="2026-01-20T01:57:49.680134528Z" level=error msg="Failed to destroy network for sandbox \"9a15e34a15fcb58192a0e35df795935dac70f5c4288d37d19e7118ee58854375\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:49.726851 systemd[1]: run-netns-cni\x2da07948dd\x2d20b0\x2d36bc\x2dc8f6\x2d938ccb7cd519.mount: Deactivated successfully. 
Jan 20 01:57:49.851129 containerd[1641]: time="2026-01-20T01:57:49.791572884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a15e34a15fcb58192a0e35df795935dac70f5c4288d37d19e7118ee58854375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:49.857747 kubelet[3041]: E0120 01:57:49.797815 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a15e34a15fcb58192a0e35df795935dac70f5c4288d37d19e7118ee58854375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:49.857747 kubelet[3041]: E0120 01:57:49.797895 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a15e34a15fcb58192a0e35df795935dac70f5c4288d37d19e7118ee58854375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:57:49.857747 kubelet[3041]: E0120 01:57:49.797923 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a15e34a15fcb58192a0e35df795935dac70f5c4288d37d19e7118ee58854375\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:57:49.876848 kubelet[3041]: E0120 01:57:49.797993 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a15e34a15fcb58192a0e35df795935dac70f5c4288d37d19e7118ee58854375\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:57:51.161897 systemd[1]: Started cri-containerd-6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6.scope - libcontainer container 6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6. Jan 20 01:57:51.495671 containerd[1641]: time="2026-01-20T01:57:51.489954633Z" level=error msg="Failed to destroy network for sandbox \"54f7738b0dcb7097623bbb5c7063d92d4480d8f7c85b3bead786f9c276005b9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:51.532949 systemd[1]: run-netns-cni\x2dde48f65b\x2d6628\x2df0bc\x2dc43b\x2db29ac2e96de7.mount: Deactivated successfully. 
Jan 20 01:57:51.810558 containerd[1641]: time="2026-01-20T01:57:51.810494906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f7738b0dcb7097623bbb5c7063d92d4480d8f7c85b3bead786f9c276005b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:51.811664 kubelet[3041]: E0120 01:57:51.811092 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f7738b0dcb7097623bbb5c7063d92d4480d8f7c85b3bead786f9c276005b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:51.811664 kubelet[3041]: E0120 01:57:51.811165 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f7738b0dcb7097623bbb5c7063d92d4480d8f7c85b3bead786f9c276005b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:57:51.811664 kubelet[3041]: E0120 01:57:51.811194 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54f7738b0dcb7097623bbb5c7063d92d4480d8f7c85b3bead786f9c276005b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:57:51.832794 kubelet[3041]: E0120 01:57:51.811257 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54f7738b0dcb7097623bbb5c7063d92d4480d8f7c85b3bead786f9c276005b9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:57:51.832940 containerd[1641]: time="2026-01-20T01:57:51.820871474Z" level=error msg="Failed to destroy network for sandbox \"5884b3efe69f590d91d728889abb4d6d902a815f255778a031e32bac2f61199d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:51.876855 systemd[1]: run-netns-cni\x2d0cfa1f0e\x2d20d7\x2d49bc\x2d0b90\x2d3fd89dbc7236.mount: Deactivated successfully. 
Jan 20 01:57:52.038471 kernel: audit: type=1334 audit(1768874271.992:595): prog-id=173 op=LOAD Jan 20 01:57:51.992000 audit: BPF prog-id=173 op=LOAD Jan 20 01:57:52.038753 containerd[1641]: time="2026-01-20T01:57:52.026426540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5884b3efe69f590d91d728889abb4d6d902a815f255778a031e32bac2f61199d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:52.056210 kubelet[3041]: E0120 01:57:52.049811 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5884b3efe69f590d91d728889abb4d6d902a815f255778a031e32bac2f61199d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:52.056210 kubelet[3041]: E0120 01:57:52.049925 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5884b3efe69f590d91d728889abb4d6d902a815f255778a031e32bac2f61199d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:57:52.056210 kubelet[3041]: E0120 01:57:52.049961 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5884b3efe69f590d91d728889abb4d6d902a815f255778a031e32bac2f61199d\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:57:52.056554 kubelet[3041]: E0120 01:57:52.053637 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5884b3efe69f590d91d728889abb4d6d902a815f255778a031e32bac2f61199d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:57:52.149224 kernel: audit: type=1334 audit(1768874272.051:596): prog-id=174 op=LOAD Jan 20 01:57:52.149429 kernel: audit: type=1300 audit(1768874272.051:596): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.051000 audit: BPF prog-id=174 op=LOAD Jan 20 01:57:52.051000 audit[5422]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.139660 systemd[1]: run-netns-cni\x2d6ac3e50c\x2d9200\x2d0374\x2d3317\x2d2df1f5632f6f.mount: 
Deactivated successfully. Jan 20 01:57:52.150043 containerd[1641]: time="2026-01-20T01:57:52.111682114Z" level=error msg="Failed to destroy network for sandbox \"1537a9df71fabb4cf088904470a8861939d79399562261913d9931b640f5bb09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.219314 kernel: audit: type=1327 audit(1768874272.051:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.051000 audit: BPF prog-id=174 op=UNLOAD Jan 20 01:57:52.336107 kernel: audit: type=1334 audit(1768874272.051:597): prog-id=174 op=UNLOAD Jan 20 01:57:52.336254 kernel: audit: type=1300 audit(1768874272.051:597): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.336288 kernel: audit: type=1327 audit(1768874272.051:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.051000 audit[5422]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.432438 kernel: audit: type=1334 audit(1768874272.051:598): prog-id=175 op=LOAD Jan 20 01:57:52.051000 audit: BPF prog-id=175 op=LOAD Jan 20 01:57:52.432757 containerd[1641]: time="2026-01-20T01:57:52.370464597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1537a9df71fabb4cf088904470a8861939d79399562261913d9931b640f5bb09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:52.432928 kubelet[3041]: E0120 01:57:52.379485 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1537a9df71fabb4cf088904470a8861939d79399562261913d9931b640f5bb09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:52.432928 kubelet[3041]: E0120 01:57:52.379934 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1537a9df71fabb4cf088904470a8861939d79399562261913d9931b640f5bb09\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:57:52.432928 kubelet[3041]: E0120 01:57:52.388761 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1537a9df71fabb4cf088904470a8861939d79399562261913d9931b640f5bb09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:57:52.441230 kubelet[3041]: E0120 01:57:52.398261 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1537a9df71fabb4cf088904470a8861939d79399562261913d9931b640f5bb09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xgrtg" podUID="888a237a-dea3-4279-b3e9-e88855e903cc" Jan 20 01:57:52.563111 kernel: audit: type=1300 audit(1768874272.051:598): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.051000 audit[5422]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.573613 kernel: audit: type=1327 audit(1768874272.051:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.051000 audit: BPF prog-id=176 op=LOAD Jan 20 01:57:52.051000 audit[5422]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.051000 audit: BPF prog-id=176 op=UNLOAD Jan 20 01:57:52.051000 audit[5422]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.051000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.051000 audit: BPF prog-id=175 op=UNLOAD Jan 20 01:57:52.051000 audit[5422]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.051000 audit: BPF prog-id=177 op=LOAD Jan 20 01:57:52.051000 audit[5422]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2725 pid=5422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:57:52.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664363639313939323931376564623831636631633762363665653834 Jan 20 01:57:52.877792 containerd[1641]: time="2026-01-20T01:57:52.872489869Z" level=error msg="Failed to destroy network for sandbox \"1310791bcbecf2858638ac43de7fc0dad2eb0daf6e80acf10e1028f2c39250db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 20 01:57:52.892429 systemd[1]: run-netns-cni\x2d45e2704d\x2d8d4d\x2d1fdc\x2d273c\x2deaef6a4c6a8d.mount: Deactivated successfully. Jan 20 01:57:52.947747 systemd[1]: run-netns-cni\x2d56e166b4\x2d2605\x2db60d\x2dcf8a\x2d3448497134b8.mount: Deactivated successfully. Jan 20 01:57:52.960072 containerd[1641]: time="2026-01-20T01:57:52.906049527Z" level=error msg="Failed to destroy network for sandbox \"a5e5890587df3d76db1cde8462b3d10e2587f2791f0c76f87a6850124af56aee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:52.960072 containerd[1641]: time="2026-01-20T01:57:52.950164675Z" level=error msg="Failed to destroy network for sandbox \"8ae30879c50a81d78cd8dbe19923b923bb290dd20eacd4625d713a4c66d6b609\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:52.987887 systemd[1]: run-netns-cni\x2d7f9243d4\x2dda5a\x2d7d77\x2d33ad\x2d97332cb9f0a5.mount: Deactivated successfully. 
Jan 20 01:57:53.014704 containerd[1641]: time="2026-01-20T01:57:53.014500709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1310791bcbecf2858638ac43de7fc0dad2eb0daf6e80acf10e1028f2c39250db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:53.023418 kubelet[3041]: E0120 01:57:53.017242 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1310791bcbecf2858638ac43de7fc0dad2eb0daf6e80acf10e1028f2c39250db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:53.023418 kubelet[3041]: E0120 01:57:53.017314 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1310791bcbecf2858638ac43de7fc0dad2eb0daf6e80acf10e1028f2c39250db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:57:53.023418 kubelet[3041]: E0120 01:57:53.017410 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1310791bcbecf2858638ac43de7fc0dad2eb0daf6e80acf10e1028f2c39250db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:57:53.025138 kubelet[3041]: E0120 01:57:53.017475 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1310791bcbecf2858638ac43de7fc0dad2eb0daf6e80acf10e1028f2c39250db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b47f5fdc6-mt44k" podUID="ac58d15d-4067-466e-a772-59e3b5476a8c" Jan 20 01:57:53.112631 containerd[1641]: time="2026-01-20T01:57:53.112444922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ae30879c50a81d78cd8dbe19923b923bb290dd20eacd4625d713a4c66d6b609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:53.116045 containerd[1641]: time="2026-01-20T01:57:53.115988713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e5890587df3d76db1cde8462b3d10e2587f2791f0c76f87a6850124af56aee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 20 01:57:53.140225 kubelet[3041]: E0120 01:57:53.133172 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ae30879c50a81d78cd8dbe19923b923bb290dd20eacd4625d713a4c66d6b609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:57:53.140225 kubelet[3041]: E0120 01:57:53.133263 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ae30879c50a81d78cd8dbe19923b923bb290dd20eacd4625d713a4c66d6b609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:57:53.140225 kubelet[3041]: E0120 01:57:53.133289 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ae30879c50a81d78cd8dbe19923b923bb290dd20eacd4625d713a4c66d6b609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:57:53.140496 kubelet[3041]: E0120 01:57:53.133416 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8ae30879c50a81d78cd8dbe19923b923bb290dd20eacd4625d713a4c66d6b609\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081"
Jan 20 01:57:53.140496 kubelet[3041]: E0120 01:57:53.136872 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e5890587df3d76db1cde8462b3d10e2587f2791f0c76f87a6850124af56aee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:57:53.140496 kubelet[3041]: E0120 01:57:53.136921 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e5890587df3d76db1cde8462b3d10e2587f2791f0c76f87a6850124af56aee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57"
Jan 20 01:57:53.140696 kubelet[3041]: E0120 01:57:53.136978 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e5890587df3d76db1cde8462b3d10e2587f2791f0c76f87a6850124af56aee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57"
Jan 20 01:57:53.140696 kubelet[3041]: E0120 01:57:53.137043 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5e5890587df3d76db1cde8462b3d10e2587f2791f0c76f87a6850124af56aee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ddv57" podUID="b4e4578c-79c2-452b-9829-4499e381b357"
Jan 20 01:57:53.147874 containerd[1641]: time="2026-01-20T01:57:53.147716680Z" level=error msg="Failed to destroy network for sandbox \"99f2a963d4a6b8efef56e19d618daaf809553496d99bfc4986c173967dccf959\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:57:53.185838 systemd[1]: run-netns-cni\x2d1cc47c33\x2d0254\x2d71d6\x2d7c54\x2dabd9747b6fbe.mount: Deactivated successfully.
Jan 20 01:57:53.243285 containerd[1641]: time="2026-01-20T01:57:53.243235489Z" level=info msg="StartContainer for \"6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6\" returns successfully"
Jan 20 01:57:53.314841 containerd[1641]: time="2026-01-20T01:57:53.311044076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99f2a963d4a6b8efef56e19d618daaf809553496d99bfc4986c173967dccf959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:57:53.355657 kubelet[3041]: E0120 01:57:53.355568 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99f2a963d4a6b8efef56e19d618daaf809553496d99bfc4986c173967dccf959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:57:53.381691 kubelet[3041]: E0120 01:57:53.361275 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99f2a963d4a6b8efef56e19d618daaf809553496d99bfc4986c173967dccf959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m"
Jan 20 01:57:53.381691 kubelet[3041]: E0120 01:57:53.361404 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99f2a963d4a6b8efef56e19d618daaf809553496d99bfc4986c173967dccf959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m"
Jan 20 01:57:53.381691 kubelet[3041]: E0120 01:57:53.361560 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99f2a963d4a6b8efef56e19d618daaf809553496d99bfc4986c173967dccf959\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2"
Jan 20 01:57:53.449759 containerd[1641]: time="2026-01-20T01:57:53.443863741Z" level=error msg="Failed to destroy network for sandbox \"461d356903001f37f562ef8acba7106d3bd69a435ea5f9fba4847082c1f73df6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:57:53.512877 systemd[1]: run-netns-cni\x2da42dab1e\x2dee2d\x2d0790\x2d1f96\x2d3c7c005d4997.mount: Deactivated successfully.
Jan 20 01:57:53.551707 containerd[1641]: time="2026-01-20T01:57:53.544251504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"461d356903001f37f562ef8acba7106d3bd69a435ea5f9fba4847082c1f73df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:57:53.552595 kubelet[3041]: E0120 01:57:53.544788 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461d356903001f37f562ef8acba7106d3bd69a435ea5f9fba4847082c1f73df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:57:53.604070 kubelet[3041]: E0120 01:57:53.598678 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461d356903001f37f562ef8acba7106d3bd69a435ea5f9fba4847082c1f73df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m"
Jan 20 01:57:53.604070 kubelet[3041]: E0120 01:57:53.598812 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461d356903001f37f562ef8acba7106d3bd69a435ea5f9fba4847082c1f73df6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m"
Jan 20 01:57:53.604070 kubelet[3041]: E0120 01:57:53.598914 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"461d356903001f37f562ef8acba7106d3bd69a435ea5f9fba4847082c1f73df6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39"
Jan 20 01:57:53.789053 kubelet[3041]: E0120 01:57:53.782832 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:57:54.827377 kubelet[3041]: E0120 01:57:54.825450 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:58:00.691857 kubelet[3041]: E0120 01:58:00.685162 3041 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.158s"
Jan 20 01:58:00.716710 kubelet[3041]: E0120 01:58:00.707961 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:58:02.605141 containerd[1641]: time="2026-01-20T01:58:02.602003197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:58:03.540319 containerd[1641]: time="2026-01-20T01:58:03.540084449Z" level=error msg="Failed to destroy network for sandbox \"f242702e3b51d10d2f5c54906fbda1d4d393e35a60924b7a3212552248755a9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:03.576917 kubelet[3041]: E0120 01:58:03.576868 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:58:03.581512 systemd[1]: run-netns-cni\x2d66717f62\x2d2909\x2d70bb\x2d6b72\x2d93ae9fddf96f.mount: Deactivated successfully.
Jan 20 01:58:03.616475 containerd[1641]: time="2026-01-20T01:58:03.615028732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,}"
Jan 20 01:58:03.650440 containerd[1641]: time="2026-01-20T01:58:03.649858906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f242702e3b51d10d2f5c54906fbda1d4d393e35a60924b7a3212552248755a9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:03.665181 kubelet[3041]: E0120 01:58:03.665103 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f242702e3b51d10d2f5c54906fbda1d4d393e35a60924b7a3212552248755a9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:03.665524 kubelet[3041]: E0120 01:58:03.665485 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f242702e3b51d10d2f5c54906fbda1d4d393e35a60924b7a3212552248755a9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz"
Jan 20 01:58:03.665661 kubelet[3041]: E0120 01:58:03.665634 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f242702e3b51d10d2f5c54906fbda1d4d393e35a60924b7a3212552248755a9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz"
Jan 20 01:58:03.672430 kubelet[3041]: E0120 01:58:03.672142 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f242702e3b51d10d2f5c54906fbda1d4d393e35a60924b7a3212552248755a9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0"
Jan 20 01:58:04.583419 containerd[1641]: time="2026-01-20T01:58:04.578922094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:58:06.468643 containerd[1641]: time="2026-01-20T01:58:06.440850493Z" level=error msg="Failed to destroy network for sandbox \"8789ff13cb08d9cc96ffd5dda7197c346c490ccfaccedb657c2ccfdce60644b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:06.464117 systemd[1]: run-netns-cni\x2d08eeb996\x2d4ba3\x2db597\x2d537c\x2d23ccc4e255ca.mount: Deactivated successfully.
Jan 20 01:58:06.487579 containerd[1641]: time="2026-01-20T01:58:06.487418568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8789ff13cb08d9cc96ffd5dda7197c346c490ccfaccedb657c2ccfdce60644b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:06.497872 kubelet[3041]: E0120 01:58:06.496999 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8789ff13cb08d9cc96ffd5dda7197c346c490ccfaccedb657c2ccfdce60644b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:06.497872 kubelet[3041]: E0120 01:58:06.497068 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8789ff13cb08d9cc96ffd5dda7197c346c490ccfaccedb657c2ccfdce60644b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg"
Jan 20 01:58:06.497872 kubelet[3041]: E0120 01:58:06.497094 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8789ff13cb08d9cc96ffd5dda7197c346c490ccfaccedb657c2ccfdce60644b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg"
Jan 20 01:58:06.498581 kubelet[3041]: E0120 01:58:06.497160 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8789ff13cb08d9cc96ffd5dda7197c346c490ccfaccedb657c2ccfdce60644b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xgrtg" podUID="888a237a-dea3-4279-b3e9-e88855e903cc"
Jan 20 01:58:06.602208 containerd[1641]: time="2026-01-20T01:58:06.602088413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,}"
Jan 20 01:58:07.097138 containerd[1641]: time="2026-01-20T01:58:07.097064790Z" level=error msg="Failed to destroy network for sandbox \"52dbebd09420da6f6357625e0123291e6e89711277ef6de0b32e20816e3ae3d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:07.169802 systemd[1]: run-netns-cni\x2d4593a372\x2d0916\x2d1f58\x2d9aba\x2d5108b3d4331f.mount: Deactivated successfully.
Jan 20 01:58:07.357095 containerd[1641]: time="2026-01-20T01:58:07.342142151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dbebd09420da6f6357625e0123291e6e89711277ef6de0b32e20816e3ae3d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:07.365162 kubelet[3041]: E0120 01:58:07.352579 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dbebd09420da6f6357625e0123291e6e89711277ef6de0b32e20816e3ae3d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:07.365162 kubelet[3041]: E0120 01:58:07.352763 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dbebd09420da6f6357625e0123291e6e89711277ef6de0b32e20816e3ae3d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m"
Jan 20 01:58:07.365162 kubelet[3041]: E0120 01:58:07.352786 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52dbebd09420da6f6357625e0123291e6e89711277ef6de0b32e20816e3ae3d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m"
Jan 20 01:58:07.373556 kubelet[3041]: E0120 01:58:07.352847 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52dbebd09420da6f6357625e0123291e6e89711277ef6de0b32e20816e3ae3d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2"
Jan 20 01:58:07.727724 containerd[1641]: time="2026-01-20T01:58:07.722387193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,}"
Jan 20 01:58:07.814442 containerd[1641]: time="2026-01-20T01:58:07.814294743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,}"
Jan 20 01:58:07.896326 containerd[1641]: time="2026-01-20T01:58:07.896266100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,}"
Jan 20 01:58:07.896556 containerd[1641]: time="2026-01-20T01:58:07.896531226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,}"
Jan 20 01:58:08.407554 containerd[1641]: time="2026-01-20T01:58:08.387704886Z" level=error msg="Failed to destroy network for sandbox \"4043386e6eff5be9118253666c329a825e39c060e1b3c38af8b52e3791ef1058\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:08.395236 systemd[1]: run-netns-cni\x2d03a691f0\x2d9c5c\x2d9386\x2d30f6\x2d53262b037485.mount: Deactivated successfully.
Jan 20 01:58:08.555983 kubelet[3041]: E0120 01:58:08.547825 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:58:08.557047 containerd[1641]: time="2026-01-20T01:58:08.557004819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,}"
Jan 20 01:58:08.637425 containerd[1641]: time="2026-01-20T01:58:08.637283733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4043386e6eff5be9118253666c329a825e39c060e1b3c38af8b52e3791ef1058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:08.638390 kubelet[3041]: E0120 01:58:08.637995 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4043386e6eff5be9118253666c329a825e39c060e1b3c38af8b52e3791ef1058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:08.640231 kubelet[3041]: E0120 01:58:08.638557 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4043386e6eff5be9118253666c329a825e39c060e1b3c38af8b52e3791ef1058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr"
Jan 20 01:58:08.671193 kubelet[3041]: E0120 01:58:08.660700 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4043386e6eff5be9118253666c329a825e39c060e1b3c38af8b52e3791ef1058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr"
Jan 20 01:58:08.671193 kubelet[3041]: E0120 01:58:08.661133 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4043386e6eff5be9118253666c329a825e39c060e1b3c38af8b52e3791ef1058\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6"
Jan 20 01:58:09.843854 kubelet[3041]: E0120 01:58:09.831776 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 01:58:10.199520 containerd[1641]: time="2026-01-20T01:58:10.197097316Z" level=error msg="Failed to destroy network for sandbox \"e1b9edca124be4a7f33cabb57912917837130529fa1cbc4201bf583826fc1d96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.232953 systemd[1]: run-netns-cni\x2defb5694e\x2d95d2\x2d1df4\x2d8696\x2d0ef3cc85fba9.mount: Deactivated successfully.
Jan 20 01:58:10.299847 containerd[1641]: time="2026-01-20T01:58:10.296136578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b9edca124be4a7f33cabb57912917837130529fa1cbc4201bf583826fc1d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.302501 kubelet[3041]: E0120 01:58:10.296880 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b9edca124be4a7f33cabb57912917837130529fa1cbc4201bf583826fc1d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.302501 kubelet[3041]: E0120 01:58:10.296947 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b9edca124be4a7f33cabb57912917837130529fa1cbc4201bf583826fc1d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l"
Jan 20 01:58:10.302501 kubelet[3041]: E0120 01:58:10.299231 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1b9edca124be4a7f33cabb57912917837130529fa1cbc4201bf583826fc1d96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l"
Jan 20 01:58:10.302773 kubelet[3041]: E0120 01:58:10.299388 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1b9edca124be4a7f33cabb57912917837130529fa1cbc4201bf583826fc1d96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5"
Jan 20 01:58:10.349568 containerd[1641]: time="2026-01-20T01:58:10.349283821Z" level=error msg="Failed to destroy network for sandbox \"8953c0bd202dc7786a3f6bccc34fb5f6c7bbd45ca74bced6bc05079a9ea5694b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.360166 containerd[1641]: time="2026-01-20T01:58:10.359259420Z" level=error msg="Failed to destroy network for sandbox \"df5f3f74e60767fec092c8bab439b5c2aa7f88dfc82d3714c717953a66b16f22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.366630 systemd[1]: run-netns-cni\x2d5ed516bd\x2d6a27\x2d8bc6\x2d7526\x2d1f371bd78be0.mount: Deactivated successfully.
Jan 20 01:58:10.380806 systemd[1]: run-netns-cni\x2da4fb022c\x2d8ced\x2dd0c5\x2d3030\x2d21fc0938fd7f.mount: Deactivated successfully.
Jan 20 01:58:10.388321 containerd[1641]: time="2026-01-20T01:58:10.386223919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8953c0bd202dc7786a3f6bccc34fb5f6c7bbd45ca74bced6bc05079a9ea5694b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.400863 kubelet[3041]: E0120 01:58:10.394450 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8953c0bd202dc7786a3f6bccc34fb5f6c7bbd45ca74bced6bc05079a9ea5694b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.400863 kubelet[3041]: E0120 01:58:10.394576 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8953c0bd202dc7786a3f6bccc34fb5f6c7bbd45ca74bced6bc05079a9ea5694b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k"
Jan 20 01:58:10.400863 kubelet[3041]: E0120 01:58:10.394604 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8953c0bd202dc7786a3f6bccc34fb5f6c7bbd45ca74bced6bc05079a9ea5694b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k"
Jan 20 01:58:10.403192 kubelet[3041]: E0120 01:58:10.395234 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8953c0bd202dc7786a3f6bccc34fb5f6c7bbd45ca74bced6bc05079a9ea5694b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b47f5fdc6-mt44k" podUID="ac58d15d-4067-466e-a772-59e3b5476a8c"
Jan 20 01:58:10.426480 containerd[1641]: time="2026-01-20T01:58:10.424974150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5f3f74e60767fec092c8bab439b5c2aa7f88dfc82d3714c717953a66b16f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.433381 kubelet[3041]: E0120 01:58:10.430059 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5f3f74e60767fec092c8bab439b5c2aa7f88dfc82d3714c717953a66b16f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.433381 kubelet[3041]: E0120 01:58:10.430245 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5f3f74e60767fec092c8bab439b5c2aa7f88dfc82d3714c717953a66b16f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5"
Jan 20 01:58:10.433381 kubelet[3041]: E0120 01:58:10.430271 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df5f3f74e60767fec092c8bab439b5c2aa7f88dfc82d3714c717953a66b16f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5"
Jan 20 01:58:10.433582 kubelet[3041]: E0120 01:58:10.431303 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df5f3f74e60767fec092c8bab439b5c2aa7f88dfc82d3714c717953a66b16f22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081"
Jan 20 01:58:10.510515 containerd[1641]: time="2026-01-20T01:58:10.510298980Z" level=error msg="Failed to destroy network for sandbox \"5637574306fcef032e2b2ad974329d666393333964435c32aafbdcb53d881864\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.526011 systemd[1]: run-netns-cni\x2d58768dae\x2d67b5\x2d958a\x2d3b67\x2dfb5a1aecfdae.mount: Deactivated successfully.
Jan 20 01:58:10.550016 containerd[1641]: time="2026-01-20T01:58:10.549957667Z" level=error msg="Failed to destroy network for sandbox \"331eb9a05620e90cb7df3a06f63d8ef3c519aea391f2d94058d72676fc79e2b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.567786 containerd[1641]: time="2026-01-20T01:58:10.565575800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5637574306fcef032e2b2ad974329d666393333964435c32aafbdcb53d881864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.568013 kubelet[3041]: E0120 01:58:10.567125 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5637574306fcef032e2b2ad974329d666393333964435c32aafbdcb53d881864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 20 01:58:10.568013 kubelet[3041]: E0120 01:58:10.567252 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5637574306fcef032e2b2ad974329d666393333964435c32aafbdcb53d881864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m"
Jan 20 01:58:10.568013 kubelet[3041]: E0120 01:58:10.567323 3041 kuberuntime_manager.go:1343]
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5637574306fcef032e2b2ad974329d666393333964435c32aafbdcb53d881864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:58:10.568197 kubelet[3041]: E0120 01:58:10.567639 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5637574306fcef032e2b2ad974329d666393333964435c32aafbdcb53d881864\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:58:10.569068 systemd[1]: run-netns-cni\x2d33abe406\x2dbde0\x2d113d\x2db675\x2db662ada448ad.mount: Deactivated successfully. 
Jan 20 01:58:10.614067 containerd[1641]: time="2026-01-20T01:58:10.612286867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"331eb9a05620e90cb7df3a06f63d8ef3c519aea391f2d94058d72676fc79e2b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:10.614306 kubelet[3041]: E0120 01:58:10.613003 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"331eb9a05620e90cb7df3a06f63d8ef3c519aea391f2d94058d72676fc79e2b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:10.614306 kubelet[3041]: E0120 01:58:10.613138 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"331eb9a05620e90cb7df3a06f63d8ef3c519aea391f2d94058d72676fc79e2b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:58:10.614306 kubelet[3041]: E0120 01:58:10.613229 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"331eb9a05620e90cb7df3a06f63d8ef3c519aea391f2d94058d72676fc79e2b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:58:10.614813 kubelet[3041]: E0120 01:58:10.613870 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"331eb9a05620e90cb7df3a06f63d8ef3c519aea391f2d94058d72676fc79e2b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ddv57" podUID="b4e4578c-79c2-452b-9829-4499e381b357" Jan 20 01:58:14.572697 containerd[1641]: time="2026-01-20T01:58:14.563881267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:58:15.485827 containerd[1641]: time="2026-01-20T01:58:15.485764547Z" level=error msg="Failed to destroy network for sandbox \"dabdeecdab786398aab369d2a17ee371b626b8fd4675193c4798eaecc40ba9dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:15.517059 systemd[1]: run-netns-cni\x2dba86b8ab\x2d5115\x2d538d\x2da055\x2dec1f7b7dedc1.mount: Deactivated successfully. 
Jan 20 01:58:15.562573 containerd[1641]: time="2026-01-20T01:58:15.548572790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dabdeecdab786398aab369d2a17ee371b626b8fd4675193c4798eaecc40ba9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:15.562855 kubelet[3041]: E0120 01:58:15.548827 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dabdeecdab786398aab369d2a17ee371b626b8fd4675193c4798eaecc40ba9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:15.562855 kubelet[3041]: E0120 01:58:15.548896 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dabdeecdab786398aab369d2a17ee371b626b8fd4675193c4798eaecc40ba9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:58:15.562855 kubelet[3041]: E0120 01:58:15.548925 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dabdeecdab786398aab369d2a17ee371b626b8fd4675193c4798eaecc40ba9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:58:15.564322 kubelet[3041]: E0120 01:58:15.548995 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dabdeecdab786398aab369d2a17ee371b626b8fd4675193c4798eaecc40ba9dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:58:17.594560 kubelet[3041]: E0120 01:58:17.587786 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:17.611646 containerd[1641]: time="2026-01-20T01:58:17.605201058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,}" Jan 20 01:58:18.849471 containerd[1641]: time="2026-01-20T01:58:18.848934685Z" level=error msg="Failed to destroy network for sandbox \"80e0ad66077fe0e23cee42981c55cad8ac5eaac66dc013d0c53a0942ebd483dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:18.919156 systemd[1]: run-netns-cni\x2d7ce7f82e\x2d2c5d\x2d6299\x2dcddc\x2df9b2bbae6930.mount: Deactivated successfully. 
Jan 20 01:58:19.019208 containerd[1641]: time="2026-01-20T01:58:19.017891521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e0ad66077fe0e23cee42981c55cad8ac5eaac66dc013d0c53a0942ebd483dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:19.024767 kubelet[3041]: E0120 01:58:19.021778 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e0ad66077fe0e23cee42981c55cad8ac5eaac66dc013d0c53a0942ebd483dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:19.024767 kubelet[3041]: E0120 01:58:19.021848 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e0ad66077fe0e23cee42981c55cad8ac5eaac66dc013d0c53a0942ebd483dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:58:19.024767 kubelet[3041]: E0120 01:58:19.021874 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80e0ad66077fe0e23cee42981c55cad8ac5eaac66dc013d0c53a0942ebd483dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-xgrtg" Jan 20 01:58:19.026171 kubelet[3041]: E0120 01:58:19.021953 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xgrtg_kube-system(888a237a-dea3-4279-b3e9-e88855e903cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80e0ad66077fe0e23cee42981c55cad8ac5eaac66dc013d0c53a0942ebd483dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xgrtg" podUID="888a237a-dea3-4279-b3e9-e88855e903cc" Jan 20 01:58:21.530686 kubelet[3041]: E0120 01:58:21.528563 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:21.606278 containerd[1641]: time="2026-01-20T01:58:21.606231925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,}" Jan 20 01:58:21.712869 containerd[1641]: time="2026-01-20T01:58:21.712811619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:58:22.421114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1546284644.mount: Deactivated successfully. 
Jan 20 01:58:22.533805 kubelet[3041]: E0120 01:58:22.529132 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:22.600641 containerd[1641]: time="2026-01-20T01:58:22.600536549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,}" Jan 20 01:58:22.909942 containerd[1641]: time="2026-01-20T01:58:22.909888329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:58:22.916557 containerd[1641]: time="2026-01-20T01:58:22.916483376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 20 01:58:22.971570 containerd[1641]: time="2026-01-20T01:58:22.967594332Z" level=error msg="Failed to destroy network for sandbox \"3f88fe6d3e31c46a13a8de5ee10335be119d8bec021f9443f102a1efd0b44168\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:22.972073 systemd[1]: run-netns-cni\x2da3d73b4c\x2d24ec\x2d25e6\x2d0737\x2d6279aa22375b.mount: Deactivated successfully. 
Jan 20 01:58:23.037443 containerd[1641]: time="2026-01-20T01:58:23.028027163Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:58:23.087881 containerd[1641]: time="2026-01-20T01:58:23.087814221Z" level=error msg="Failed to destroy network for sandbox \"f1bf40f5232a7be0bd774c33144f361c58581e4a68a2f83cb9284d22b9abebb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:23.109194 systemd[1]: run-netns-cni\x2d888d721b\x2da952\x2d8860\x2d76e5\x2df41c515d833f.mount: Deactivated successfully. Jan 20 01:58:23.110585 containerd[1641]: time="2026-01-20T01:58:23.110451129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f88fe6d3e31c46a13a8de5ee10335be119d8bec021f9443f102a1efd0b44168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:23.111326 kubelet[3041]: E0120 01:58:23.111123 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f88fe6d3e31c46a13a8de5ee10335be119d8bec021f9443f102a1efd0b44168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:23.111326 kubelet[3041]: E0120 01:58:23.111206 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3f88fe6d3e31c46a13a8de5ee10335be119d8bec021f9443f102a1efd0b44168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:58:23.111326 kubelet[3041]: E0120 01:58:23.111233 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f88fe6d3e31c46a13a8de5ee10335be119d8bec021f9443f102a1efd0b44168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9gv2m" Jan 20 01:58:23.112989 kubelet[3041]: E0120 01:58:23.112947 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f88fe6d3e31c46a13a8de5ee10335be119d8bec021f9443f102a1efd0b44168\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:58:23.138078 containerd[1641]: time="2026-01-20T01:58:23.138003484Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bf40f5232a7be0bd774c33144f361c58581e4a68a2f83cb9284d22b9abebb7\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:23.139397 kubelet[3041]: E0120 01:58:23.139144 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bf40f5232a7be0bd774c33144f361c58581e4a68a2f83cb9284d22b9abebb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:23.142202 kubelet[3041]: E0120 01:58:23.142169 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bf40f5232a7be0bd774c33144f361c58581e4a68a2f83cb9284d22b9abebb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:58:23.146077 kubelet[3041]: E0120 01:58:23.145967 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bf40f5232a7be0bd774c33144f361c58581e4a68a2f83cb9284d22b9abebb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" Jan 20 01:58:23.146677 kubelet[3041]: E0120 01:58:23.146559 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1bf40f5232a7be0bd774c33144f361c58581e4a68a2f83cb9284d22b9abebb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:58:23.218239 containerd[1641]: time="2026-01-20T01:58:23.207008342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:58:23.218239 containerd[1641]: time="2026-01-20T01:58:23.213096588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 1m55.40901795s" Jan 20 01:58:23.218239 containerd[1641]: time="2026-01-20T01:58:23.213330198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 01:58:23.746648 containerd[1641]: time="2026-01-20T01:58:23.696802710Z" level=info msg="CreateContainer within sandbox \"311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 01:58:23.972073 containerd[1641]: time="2026-01-20T01:58:23.972020196Z" level=info msg="Container 7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:58:24.004254 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764658391.mount: Deactivated successfully. Jan 20 01:58:24.124599 containerd[1641]: time="2026-01-20T01:58:24.120741170Z" level=info msg="CreateContainer within sandbox \"311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda\"" Jan 20 01:58:24.197412 containerd[1641]: time="2026-01-20T01:58:24.163979100Z" level=info msg="StartContainer for \"7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda\"" Jan 20 01:58:24.197412 containerd[1641]: time="2026-01-20T01:58:24.173583203Z" level=info msg="connecting to shim 7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda" address="unix:///run/containerd/s/f1ce25e1af03caa5247a82fb35c320afb52c8a8db9cfefbf21616e96e20aec4d" protocol=ttrpc version=3 Jan 20 01:58:24.566080 containerd[1641]: time="2026-01-20T01:58:24.563872459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,}" Jan 20 01:58:24.614725 kubelet[3041]: E0120 01:58:24.610983 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:24.625390 containerd[1641]: time="2026-01-20T01:58:24.624908349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,}" Jan 20 01:58:24.778011 containerd[1641]: time="2026-01-20T01:58:24.774842743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,}" Jan 20 01:58:24.867315 containerd[1641]: time="2026-01-20T01:58:24.867009604Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:58:25.090861 containerd[1641]: time="2026-01-20T01:58:25.090805832Z" level=error msg="Failed to destroy network for sandbox \"305cbd1258109cafb79c3743b72d99874731cb3f61154cd8c8ae230038069765\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:25.175838 systemd[1]: run-netns-cni\x2dc372bc13\x2d1a63\x2d621e\x2dce3b\x2d48e9f12c3a5b.mount: Deactivated successfully. Jan 20 01:58:25.310119 systemd[1]: Started cri-containerd-7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda.scope - libcontainer container 7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda. Jan 20 01:58:25.589963 containerd[1641]: time="2026-01-20T01:58:25.555051226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"305cbd1258109cafb79c3743b72d99874731cb3f61154cd8c8ae230038069765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:25.590431 kubelet[3041]: E0120 01:58:25.571681 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305cbd1258109cafb79c3743b72d99874731cb3f61154cd8c8ae230038069765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:25.590431 kubelet[3041]: E0120 01:58:25.571752 3041 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305cbd1258109cafb79c3743b72d99874731cb3f61154cd8c8ae230038069765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:58:25.590431 kubelet[3041]: E0120 01:58:25.571980 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305cbd1258109cafb79c3743b72d99874731cb3f61154cd8c8ae230038069765\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-pxsrr" Jan 20 01:58:25.590642 kubelet[3041]: E0120 01:58:25.572058 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"305cbd1258109cafb79c3743b72d99874731cb3f61154cd8c8ae230038069765\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:58:27.514795 containerd[1641]: time="2026-01-20T01:58:27.514615220Z" level=error msg="Failed to destroy network for sandbox \"5afacf88c622a792bdce1c3364e86df149928908e41542365512d469ba31f254\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:27.541376 systemd[1]: run-netns-cni\x2d04af7565\x2d2ae3\x2d0717\x2d2ae5\x2d8d72cd7dc202.mount: Deactivated successfully. Jan 20 01:58:27.620149 containerd[1641]: time="2026-01-20T01:58:27.603636273Z" level=error msg="Failed to destroy network for sandbox \"0bf1604d359b99fbc0cb50b6e20b533649a7de2331511e534e8547edd09e7913\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:27.648796 containerd[1641]: time="2026-01-20T01:58:27.639162328Z" level=error msg="get state for 7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda" error="context deadline exceeded" Jan 20 01:58:27.648796 containerd[1641]: time="2026-01-20T01:58:27.639261731Z" level=warning msg="unknown status" status=0 Jan 20 01:58:27.639502 systemd[1]: run-netns-cni\x2d0d577385\x2d7d33\x2de685\x2d8b92\x2d91100d5f9460.mount: Deactivated successfully. Jan 20 01:58:27.668102 containerd[1641]: time="2026-01-20T01:58:27.667929538Z" level=error msg="Failed to destroy network for sandbox \"5f897d4e1f43fb4594c7ff638acdab0a770ad93308f7877d29c31eeaa814cb34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:27.701946 systemd[1]: run-netns-cni\x2d3b0c3c42\x2d29c4\x2d5b93\x2d8a9c\x2d7304a9a0ba6c.mount: Deactivated successfully. 
Jan 20 01:58:27.882405 containerd[1641]: time="2026-01-20T01:58:27.881778243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b47f5fdc6-mt44k,Uid:ac58d15d-4067-466e-a772-59e3b5476a8c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5afacf88c622a792bdce1c3364e86df149928908e41542365512d469ba31f254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:27.905922 kubelet[3041]: E0120 01:58:27.885433 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5afacf88c622a792bdce1c3364e86df149928908e41542365512d469ba31f254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:27.905922 kubelet[3041]: E0120 01:58:27.885503 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5afacf88c622a792bdce1c3364e86df149928908e41542365512d469ba31f254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:58:27.905922 kubelet[3041]: E0120 01:58:27.885527 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5afacf88c622a792bdce1c3364e86df149928908e41542365512d469ba31f254\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7b47f5fdc6-mt44k" Jan 20 01:58:27.980075 kubelet[3041]: E0120 01:58:27.885589 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b47f5fdc6-mt44k_calico-system(ac58d15d-4067-466e-a772-59e3b5476a8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5afacf88c622a792bdce1c3364e86df149928908e41542365512d469ba31f254\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b47f5fdc6-mt44k" podUID="ac58d15d-4067-466e-a772-59e3b5476a8c" Jan 20 01:58:27.980075 kubelet[3041]: E0120 01:58:27.947704 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f897d4e1f43fb4594c7ff638acdab0a770ad93308f7877d29c31eeaa814cb34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:27.980075 kubelet[3041]: E0120 01:58:27.947776 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f897d4e1f43fb4594c7ff638acdab0a770ad93308f7877d29c31eeaa814cb34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:58:27.986719 containerd[1641]: time="2026-01-20T01:58:27.938137050Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f897d4e1f43fb4594c7ff638acdab0a770ad93308f7877d29c31eeaa814cb34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:27.986719 containerd[1641]: time="2026-01-20T01:58:27.969991430Z" level=error msg="Failed to destroy network for sandbox \"45908eee5ab105b94494bd74fec62cc8b365fb32043ac0f499ce56d1c465f84c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:27.986910 kubelet[3041]: E0120 01:58:27.947805 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f897d4e1f43fb4594c7ff638acdab0a770ad93308f7877d29c31eeaa814cb34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" Jan 20 01:58:27.986910 kubelet[3041]: E0120 01:58:27.947875 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f897d4e1f43fb4594c7ff638acdab0a770ad93308f7877d29c31eeaa814cb34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:58:27.994496 systemd[1]: run-netns-cni\x2d4ab63c4a\x2d6f8d\x2daa52\x2ddef8\x2d3cd4b42c54dc.mount: Deactivated successfully. Jan 20 01:58:28.000413 containerd[1641]: time="2026-01-20T01:58:27.999628350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bf1604d359b99fbc0cb50b6e20b533649a7de2331511e534e8547edd09e7913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:28.000715 kubelet[3041]: E0120 01:58:28.000124 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bf1604d359b99fbc0cb50b6e20b533649a7de2331511e534e8547edd09e7913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:28.000715 kubelet[3041]: E0120 01:58:28.000237 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bf1604d359b99fbc0cb50b6e20b533649a7de2331511e534e8547edd09e7913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:58:28.000715 kubelet[3041]: E0120 01:58:28.000268 3041 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bf1604d359b99fbc0cb50b6e20b533649a7de2331511e534e8547edd09e7913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" Jan 20 01:58:28.005607 kubelet[3041]: E0120 01:58:28.001071 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bf1604d359b99fbc0cb50b6e20b533649a7de2331511e534e8547edd09e7913\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:58:28.122006 containerd[1641]: time="2026-01-20T01:58:28.121256796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45908eee5ab105b94494bd74fec62cc8b365fb32043ac0f499ce56d1c465f84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:28.152686 kubelet[3041]: E0120 01:58:28.144041 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"45908eee5ab105b94494bd74fec62cc8b365fb32043ac0f499ce56d1c465f84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:28.152686 kubelet[3041]: E0120 01:58:28.144285 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45908eee5ab105b94494bd74fec62cc8b365fb32043ac0f499ce56d1c465f84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:58:28.152686 kubelet[3041]: E0120 01:58:28.144315 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45908eee5ab105b94494bd74fec62cc8b365fb32043ac0f499ce56d1c465f84c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ddv57" Jan 20 01:58:28.152923 kubelet[3041]: E0120 01:58:28.144533 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ddv57_kube-system(b4e4578c-79c2-452b-9829-4499e381b357)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45908eee5ab105b94494bd74fec62cc8b365fb32043ac0f499ce56d1c465f84c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-ddv57" podUID="b4e4578c-79c2-452b-9829-4499e381b357" Jan 20 01:58:28.144000 audit: BPF prog-id=178 op=LOAD Jan 20 01:58:28.177740 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 20 01:58:28.177918 kernel: audit: type=1334 audit(1768874308.144:603): prog-id=178 op=LOAD Jan 20 01:58:28.144000 audit[5987]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3606 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:28.257145 kernel: audit: type=1300 audit(1768874308.144:603): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3606 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:28.257644 kernel: audit: type=1327 audit(1768874308.144:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326436626362353161663033313266643862623635623432626431 Jan 20 01:58:28.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326436626362353161663033313266643862623635623432626431 Jan 20 01:58:28.333374 kernel: audit: type=1334 audit(1768874308.144:604): prog-id=179 op=LOAD Jan 20 01:58:28.144000 audit: BPF prog-id=179 op=LOAD Jan 20 01:58:28.357678 kernel: audit: type=1300 audit(1768874308.144:604): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3606 pid=5987 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:28.144000 audit[5987]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3606 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:28.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326436626362353161663033313266643862623635623432626431 Jan 20 01:58:28.467088 kernel: audit: type=1327 audit(1768874308.144:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326436626362353161663033313266643862623635623432626431 Jan 20 01:58:28.467285 kernel: audit: type=1334 audit(1768874308.214:605): prog-id=179 op=UNLOAD Jan 20 01:58:28.214000 audit: BPF prog-id=179 op=UNLOAD Jan 20 01:58:28.214000 audit[5987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3606 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:28.499441 kernel: audit: type=1300 audit(1768874308.214:605): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3606 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:28.499507 kernel: audit: type=1327 
audit(1768874308.214:605): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326436626362353161663033313266643862623635623432626431 Jan 20 01:58:28.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326436626362353161663033313266643862623635623432626431 Jan 20 01:58:28.578673 kernel: audit: type=1334 audit(1768874308.215:606): prog-id=178 op=UNLOAD Jan 20 01:58:28.215000 audit: BPF prog-id=178 op=UNLOAD Jan 20 01:58:28.215000 audit[5987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3606 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:28.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326436626362353161663033313266643862623635623432626431 Jan 20 01:58:28.215000 audit: BPF prog-id=180 op=LOAD Jan 20 01:58:28.215000 audit[5987]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3606 pid=5987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:28.215000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765326436626362353161663033313266643862623635623432626431 Jan 20 01:58:28.722817 containerd[1641]: time="2026-01-20T01:58:28.722070821Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:58:29.183426 containerd[1641]: time="2026-01-20T01:58:29.182053764Z" level=info msg="StartContainer for \"7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda\" returns successfully" Jan 20 01:58:29.592034 containerd[1641]: time="2026-01-20T01:58:29.591702172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:58:30.107454 kubelet[3041]: E0120 01:58:30.065085 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:31.094903 kubelet[3041]: E0120 01:58:31.082538 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:31.368433 containerd[1641]: time="2026-01-20T01:58:31.337116242Z" level=error msg="Failed to destroy network for sandbox \"e8eca4f0d788becffad7a62388b0fd86540c6c8eea9c4dd19ca5753483c0309a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:31.380454 systemd[1]: run-netns-cni\x2d94c4d9cf\x2d546c\x2db9f1\x2d6fac\x2de3816004a22a.mount: Deactivated successfully. 
Jan 20 01:58:31.830301 containerd[1641]: time="2026-01-20T01:58:31.829276955Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8eca4f0d788becffad7a62388b0fd86540c6c8eea9c4dd19ca5753483c0309a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:31.843043 kubelet[3041]: E0120 01:58:31.834963 3041 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8eca4f0d788becffad7a62388b0fd86540c6c8eea9c4dd19ca5753483c0309a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:58:31.843043 kubelet[3041]: E0120 01:58:31.835290 3041 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8eca4f0d788becffad7a62388b0fd86540c6c8eea9c4dd19ca5753483c0309a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:58:31.843043 kubelet[3041]: E0120 01:58:31.835330 3041 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8eca4f0d788becffad7a62388b0fd86540c6c8eea9c4dd19ca5753483c0309a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" Jan 20 01:58:31.912503 kubelet[3041]: E0120 01:58:31.835554 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8eca4f0d788becffad7a62388b0fd86540c6c8eea9c4dd19ca5753483c0309a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:58:32.341886 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 01:58:32.342424 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 01:58:33.531395 kubelet[3041]: E0120 01:58:33.531131 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:34.346719 kubelet[3041]: I0120 01:58:34.345456 3041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p5ws6" podStartSLOduration=14.163473601 podStartE2EDuration="3m1.345412859s" podCreationTimestamp="2026-01-20 01:55:33 +0000 UTC" firstStartedPulling="2026-01-20 01:55:36.051220765 +0000 UTC m=+122.055909661" lastFinishedPulling="2026-01-20 01:58:23.233160023 +0000 UTC m=+289.237848919" observedRunningTime="2026-01-20 01:58:30.569866364 +0000 UTC m=+296.574555260" watchObservedRunningTime="2026-01-20 01:58:34.345412859 +0000 UTC m=+300.350101775" Jan 20 01:58:34.516458 kubelet[3041]: I0120 01:58:34.507700 3041 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac58d15d-4067-466e-a772-59e3b5476a8c-whisker-ca-bundle\") pod \"ac58d15d-4067-466e-a772-59e3b5476a8c\" (UID: \"ac58d15d-4067-466e-a772-59e3b5476a8c\") " Jan 20 01:58:34.516458 kubelet[3041]: I0120 01:58:34.513182 3041 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5965d\" (UniqueName: \"kubernetes.io/projected/ac58d15d-4067-466e-a772-59e3b5476a8c-kube-api-access-5965d\") pod \"ac58d15d-4067-466e-a772-59e3b5476a8c\" (UID: \"ac58d15d-4067-466e-a772-59e3b5476a8c\") " Jan 20 01:58:34.537303 kubelet[3041]: I0120 01:58:34.525497 3041 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac58d15d-4067-466e-a772-59e3b5476a8c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ac58d15d-4067-466e-a772-59e3b5476a8c" (UID: "ac58d15d-4067-466e-a772-59e3b5476a8c"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:58:34.591268 kubelet[3041]: E0120 01:58:34.588751 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:34.602479 containerd[1641]: time="2026-01-20T01:58:34.600203683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,}" Jan 20 01:58:34.637677 kubelet[3041]: I0120 01:58:34.629494 3041 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac58d15d-4067-466e-a772-59e3b5476a8c-whisker-backend-key-pair\") pod \"ac58d15d-4067-466e-a772-59e3b5476a8c\" (UID: \"ac58d15d-4067-466e-a772-59e3b5476a8c\") " Jan 20 01:58:34.637677 kubelet[3041]: I0120 01:58:34.629658 3041 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac58d15d-4067-466e-a772-59e3b5476a8c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 20 01:58:34.637677 kubelet[3041]: I0120 01:58:34.631848 3041 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac58d15d-4067-466e-a772-59e3b5476a8c-kube-api-access-5965d" (OuterVolumeSpecName: "kube-api-access-5965d") pod "ac58d15d-4067-466e-a772-59e3b5476a8c" (UID: "ac58d15d-4067-466e-a772-59e3b5476a8c"). InnerVolumeSpecName "kube-api-access-5965d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:58:34.642511 systemd[1]: var-lib-kubelet-pods-ac58d15d\x2d4067\x2d466e\x2da772\x2d59e3b5476a8c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5965d.mount: Deactivated successfully. 
Jan 20 01:58:34.737428 kubelet[3041]: I0120 01:58:34.732629 3041 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5965d\" (UniqueName: \"kubernetes.io/projected/ac58d15d-4067-466e-a772-59e3b5476a8c-kube-api-access-5965d\") on node \"localhost\" DevicePath \"\"" Jan 20 01:58:34.842005 systemd[1]: var-lib-kubelet-pods-ac58d15d\x2d4067\x2d466e\x2da772\x2d59e3b5476a8c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 01:58:34.880884 kubelet[3041]: I0120 01:58:34.880415 3041 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac58d15d-4067-466e-a772-59e3b5476a8c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ac58d15d-4067-466e-a772-59e3b5476a8c" (UID: "ac58d15d-4067-466e-a772-59e3b5476a8c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 01:58:35.011439 kubelet[3041]: I0120 01:58:35.011311 3041 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac58d15d-4067-466e-a772-59e3b5476a8c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 20 01:58:35.732187 systemd[1]: Removed slice kubepods-besteffort-podac58d15d_4067_466e_a772_59e3b5476a8c.slice - libcontainer container kubepods-besteffort-podac58d15d_4067_466e_a772_59e3b5476a8c.slice. 
Jan 20 01:58:36.536255 kubelet[3041]: E0120 01:58:36.533404 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:37.606599 containerd[1641]: time="2026-01-20T01:58:37.600908577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,}" Jan 20 01:58:37.649084 kubelet[3041]: I0120 01:58:37.635702 3041 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac58d15d-4067-466e-a772-59e3b5476a8c" path="/var/lib/kubelet/pods/ac58d15d-4067-466e-a772-59e3b5476a8c/volumes" Jan 20 01:58:37.792395 kubelet[3041]: I0120 01:58:37.782322 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ce5e731-d9ff-4094-add4-a475d64c6d24-whisker-backend-key-pair\") pod \"whisker-85cc9877c8-b9ftn\" (UID: \"0ce5e731-d9ff-4094-add4-a475d64c6d24\") " pod="calico-system/whisker-85cc9877c8-b9ftn" Jan 20 01:58:37.792395 kubelet[3041]: I0120 01:58:37.782455 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ce5e731-d9ff-4094-add4-a475d64c6d24-whisker-ca-bundle\") pod \"whisker-85cc9877c8-b9ftn\" (UID: \"0ce5e731-d9ff-4094-add4-a475d64c6d24\") " pod="calico-system/whisker-85cc9877c8-b9ftn" Jan 20 01:58:37.792395 kubelet[3041]: I0120 01:58:37.782495 3041 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97r9\" (UniqueName: \"kubernetes.io/projected/0ce5e731-d9ff-4094-add4-a475d64c6d24-kube-api-access-b97r9\") pod \"whisker-85cc9877c8-b9ftn\" (UID: \"0ce5e731-d9ff-4094-add4-a475d64c6d24\") " pod="calico-system/whisker-85cc9877c8-b9ftn" Jan 20 01:58:37.836625 
containerd[1641]: time="2026-01-20T01:58:37.836552808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:58:38.088900 systemd[1]: Created slice kubepods-besteffort-pod0ce5e731_d9ff_4094_add4_a475d64c6d24.slice - libcontainer container kubepods-besteffort-pod0ce5e731_d9ff_4094_add4_a475d64c6d24.slice. Jan 20 01:58:38.715764 containerd[1641]: time="2026-01-20T01:58:38.714739599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,}" Jan 20 01:58:38.807702 containerd[1641]: time="2026-01-20T01:58:38.807641789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,}" Jan 20 01:58:39.026149 containerd[1641]: time="2026-01-20T01:58:39.026033098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85cc9877c8-b9ftn,Uid:0ce5e731-d9ff-4094-add4-a475d64c6d24,Namespace:calico-system,Attempt:0,}" Jan 20 01:58:39.934904 containerd[1641]: time="2026-01-20T01:58:39.933103983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:58:43.581020 kubelet[3041]: E0120 01:58:43.579280 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:43.632857 containerd[1641]: time="2026-01-20T01:58:43.580831257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,}" Jan 20 01:58:43.872562 systemd-networkd[1543]: cali8068c29f0c2: Link UP Jan 20 
01:58:43.898482 systemd-networkd[1543]: cali8068c29f0c2: Gained carrier Jan 20 01:58:44.596881 containerd[1641]: time="2026-01-20T01:58:44.596829715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:58:44.821457 containerd[1641]: 2026-01-20 01:58:40.602 [INFO][6290] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:58:44.821457 containerd[1641]: 2026-01-20 01:58:40.889 [INFO][6290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0 calico-apiserver-6576c69f97- calico-apiserver 15d966be-bae7-42a3-83b7-ced10b64bcb2 1222 0 2026-01-20 01:54:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6576c69f97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6576c69f97-s8m7m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8068c29f0c2 [] [] }} ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-s8m7m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-" Jan 20 01:58:44.821457 containerd[1641]: 2026-01-20 01:58:40.889 [INFO][6290] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-s8m7m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" Jan 20 01:58:44.821457 containerd[1641]: 2026-01-20 01:58:42.571 [INFO][6392] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" HandleID="k8s-pod-network.7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Workload="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:42.572 [INFO][6392] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" HandleID="k8s-pod-network.7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Workload="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139d30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6576c69f97-s8m7m", "timestamp":"2026-01-20 01:58:42.571857931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:42.572 [INFO][6392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:42.572 [INFO][6392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:42.649 [INFO][6392] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:42.831 [INFO][6392] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" host="localhost" Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:43.000 [INFO][6392] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:43.078 [INFO][6392] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:43.108 [INFO][6392] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:43.136 [INFO][6392] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:44.997453 containerd[1641]: 2026-01-20 01:58:43.136 [INFO][6392] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" host="localhost" Jan 20 01:58:44.999173 containerd[1641]: 2026-01-20 01:58:43.156 [INFO][6392] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a Jan 20 01:58:44.999173 containerd[1641]: 2026-01-20 01:58:43.202 [INFO][6392] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" host="localhost" Jan 20 01:58:44.999173 containerd[1641]: 2026-01-20 01:58:43.267 [INFO][6392] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" host="localhost" Jan 20 01:58:44.999173 containerd[1641]: 2026-01-20 01:58:43.267 [INFO][6392] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" host="localhost" Jan 20 01:58:44.999173 containerd[1641]: 2026-01-20 01:58:43.268 [INFO][6392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:58:44.999173 containerd[1641]: 2026-01-20 01:58:43.268 [INFO][6392] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" HandleID="k8s-pod-network.7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Workload="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" Jan 20 01:58:44.999438 containerd[1641]: 2026-01-20 01:58:43.346 [INFO][6290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-s8m7m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0", GenerateName:"calico-apiserver-6576c69f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"15d966be-bae7-42a3-83b7-ced10b64bcb2", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 54, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6576c69f97", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6576c69f97-s8m7m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8068c29f0c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:44.999616 containerd[1641]: 2026-01-20 01:58:43.346 [INFO][6290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-s8m7m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" Jan 20 01:58:44.999616 containerd[1641]: 2026-01-20 01:58:43.346 [INFO][6290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8068c29f0c2 ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-s8m7m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" Jan 20 01:58:44.999616 containerd[1641]: 2026-01-20 01:58:43.936 [INFO][6290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-s8m7m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" Jan 20 01:58:44.999726 containerd[1641]: 2026-01-20 01:58:44.160 [INFO][6290] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-s8m7m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0", GenerateName:"calico-apiserver-6576c69f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"15d966be-bae7-42a3-83b7-ced10b64bcb2", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 54, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6576c69f97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a", Pod:"calico-apiserver-6576c69f97-s8m7m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8068c29f0c2", MAC:"b2:d6:37:9b:63:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:44.999836 containerd[1641]: 2026-01-20 01:58:44.671 [INFO][6290] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-s8m7m" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--s8m7m-eth0" Jan 20 01:58:45.673448 systemd-networkd[1543]: cali9ed7ed9aa29: Link UP Jan 20 01:58:45.798328 systemd-networkd[1543]: cali9ed7ed9aa29: Gained carrier Jan 20 01:58:45.957456 systemd-networkd[1543]: cali8068c29f0c2: Gained IPv6LL Jan 20 01:58:46.365973 containerd[1641]: 2026-01-20 01:58:38.882 [INFO][6279] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:58:46.365973 containerd[1641]: 2026-01-20 01:58:39.414 [INFO][6279] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9gv2m-eth0 csi-node-driver- calico-system ac1c9092-8cef-4868-9089-0927692efc39 990 0 2026-01-20 01:55:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9gv2m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9ed7ed9aa29 [] [] }} ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Namespace="calico-system" Pod="csi-node-driver-9gv2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gv2m-" Jan 20 01:58:46.365973 containerd[1641]: 2026-01-20 01:58:39.415 [INFO][6279] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Namespace="calico-system" Pod="csi-node-driver-9gv2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gv2m-eth0" Jan 20 01:58:46.365973 containerd[1641]: 2026-01-20 01:58:42.587 [INFO][6350] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" HandleID="k8s-pod-network.79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Workload="localhost-k8s-csi--node--driver--9gv2m-eth0" Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:42.588 [INFO][6350] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" HandleID="k8s-pod-network.79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Workload="localhost-k8s-csi--node--driver--9gv2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9gv2m", "timestamp":"2026-01-20 01:58:42.587700127 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:42.588 [INFO][6350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:43.269 [INFO][6350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:43.269 [INFO][6350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:43.374 [INFO][6350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" host="localhost" Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:43.413 [INFO][6350] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:43.487 [INFO][6350] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:43.504 [INFO][6350] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:43.543 [INFO][6350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:46.366828 containerd[1641]: 2026-01-20 01:58:43.543 [INFO][6350] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" host="localhost" Jan 20 01:58:46.389484 containerd[1641]: 2026-01-20 01:58:43.611 [INFO][6350] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204 Jan 20 01:58:46.389484 containerd[1641]: 2026-01-20 01:58:43.884 [INFO][6350] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" host="localhost" Jan 20 01:58:46.389484 containerd[1641]: 2026-01-20 01:58:44.376 [INFO][6350] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" host="localhost" Jan 20 01:58:46.389484 containerd[1641]: 2026-01-20 01:58:44.376 [INFO][6350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" host="localhost" Jan 20 01:58:46.389484 containerd[1641]: 2026-01-20 01:58:44.376 [INFO][6350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:58:46.389484 containerd[1641]: 2026-01-20 01:58:44.376 [INFO][6350] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" HandleID="k8s-pod-network.79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Workload="localhost-k8s-csi--node--driver--9gv2m-eth0" Jan 20 01:58:46.389716 containerd[1641]: 2026-01-20 01:58:45.648 [INFO][6279] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Namespace="calico-system" Pod="csi-node-driver-9gv2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gv2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9gv2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac1c9092-8cef-4868-9089-0927692efc39", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9gv2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ed7ed9aa29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:46.389860 containerd[1641]: 2026-01-20 01:58:45.649 [INFO][6279] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Namespace="calico-system" Pod="csi-node-driver-9gv2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gv2m-eth0" Jan 20 01:58:46.389860 containerd[1641]: 2026-01-20 01:58:45.649 [INFO][6279] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ed7ed9aa29 ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Namespace="calico-system" Pod="csi-node-driver-9gv2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gv2m-eth0" Jan 20 01:58:46.389860 containerd[1641]: 2026-01-20 01:58:45.792 [INFO][6279] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Namespace="calico-system" Pod="csi-node-driver-9gv2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gv2m-eth0" Jan 20 01:58:46.403600 containerd[1641]: 2026-01-20 01:58:45.818 [INFO][6279] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" 
Namespace="calico-system" Pod="csi-node-driver-9gv2m" WorkloadEndpoint="localhost-k8s-csi--node--driver--9gv2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9gv2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ac1c9092-8cef-4868-9089-0927692efc39", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 55, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204", Pod:"csi-node-driver-9gv2m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9ed7ed9aa29", MAC:"4e:fd:44:82:a2:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:46.403759 containerd[1641]: 2026-01-20 01:58:46.247 [INFO][6279] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" Namespace="calico-system" Pod="csi-node-driver-9gv2m" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--9gv2m-eth0" Jan 20 01:58:47.499246 systemd-networkd[1543]: cali9ed7ed9aa29: Gained IPv6LL Jan 20 01:58:47.605428 systemd-networkd[1543]: cali444cd502dc6: Link UP Jan 20 01:58:47.606635 systemd-networkd[1543]: cali444cd502dc6: Gained carrier Jan 20 01:58:48.107396 containerd[1641]: time="2026-01-20T01:58:48.104019646Z" level=info msg="connecting to shim 79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204" address="unix:///run/containerd/s/145812a295a3b17445765110a32b57493f61665b426c5f25ca6814d16ec38282" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:58:48.137946 containerd[1641]: time="2026-01-20T01:58:48.130684368Z" level=info msg="connecting to shim 7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a" address="unix:///run/containerd/s/3f26d5246b7c623a22e7b2fe6a911be43ec21eea0b55ee5ba9cd3309046b9819" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:58:48.333521 containerd[1641]: 2026-01-20 01:58:36.192 [INFO][6255] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:58:48.333521 containerd[1641]: 2026-01-20 01:58:37.667 [INFO][6255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--xgrtg-eth0 coredns-66bc5c9577- kube-system 888a237a-dea3-4279-b3e9-e88855e903cc 1206 0 2026-01-20 01:53:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-xgrtg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali444cd502dc6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Namespace="kube-system" Pod="coredns-66bc5c9577-xgrtg" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xgrtg-" Jan 20 01:58:48.333521 containerd[1641]: 2026-01-20 01:58:37.673 [INFO][6255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Namespace="kube-system" Pod="coredns-66bc5c9577-xgrtg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" Jan 20 01:58:48.333521 containerd[1641]: 2026-01-20 01:58:42.597 [INFO][6303] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" HandleID="k8s-pod-network.5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Workload="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:42.611 [INFO][6303] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" HandleID="k8s-pod-network.5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Workload="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b3440), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-xgrtg", "timestamp":"2026-01-20 01:58:42.597732972 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:42.611 [INFO][6303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:44.376 [INFO][6303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:44.376 [INFO][6303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:45.374 [INFO][6303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" host="localhost" Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:45.596 [INFO][6303] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:46.215 [INFO][6303] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:46.305 [INFO][6303] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:46.411 [INFO][6303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:48.333960 containerd[1641]: 2026-01-20 01:58:46.411 [INFO][6303] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" host="localhost" Jan 20 01:58:48.340941 containerd[1641]: 2026-01-20 01:58:46.454 [INFO][6303] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70 Jan 20 01:58:48.340941 containerd[1641]: 2026-01-20 01:58:46.778 [INFO][6303] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" host="localhost" Jan 20 01:58:48.340941 containerd[1641]: 2026-01-20 01:58:46.911 [INFO][6303] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" host="localhost" Jan 20 01:58:48.340941 containerd[1641]: 2026-01-20 01:58:46.911 [INFO][6303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" host="localhost" Jan 20 01:58:48.340941 containerd[1641]: 2026-01-20 01:58:46.911 [INFO][6303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:58:48.340941 containerd[1641]: 2026-01-20 01:58:46.911 [INFO][6303] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" HandleID="k8s-pod-network.5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Workload="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" Jan 20 01:58:48.341158 containerd[1641]: 2026-01-20 01:58:47.378 [INFO][6255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Namespace="kube-system" Pod="coredns-66bc5c9577-xgrtg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xgrtg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"888a237a-dea3-4279-b3e9-e88855e903cc", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-xgrtg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali444cd502dc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:48.341158 containerd[1641]: 2026-01-20 01:58:47.378 [INFO][6255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Namespace="kube-system" Pod="coredns-66bc5c9577-xgrtg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" Jan 20 01:58:48.341158 containerd[1641]: 2026-01-20 01:58:47.378 [INFO][6255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali444cd502dc6 ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Namespace="kube-system" Pod="coredns-66bc5c9577-xgrtg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" Jan 20 
01:58:48.341158 containerd[1641]: 2026-01-20 01:58:47.596 [INFO][6255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Namespace="kube-system" Pod="coredns-66bc5c9577-xgrtg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" Jan 20 01:58:48.341158 containerd[1641]: 2026-01-20 01:58:47.885 [INFO][6255] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Namespace="kube-system" Pod="coredns-66bc5c9577-xgrtg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xgrtg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"888a237a-dea3-4279-b3e9-e88855e903cc", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70", Pod:"coredns-66bc5c9577-xgrtg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali444cd502dc6", 
MAC:"9e:68:a9:b6:7b:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:48.341158 containerd[1641]: 2026-01-20 01:58:48.218 [INFO][6255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" Namespace="kube-system" Pod="coredns-66bc5c9577-xgrtg" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xgrtg-eth0" Jan 20 01:58:48.769763 containerd[1641]: time="2026-01-20T01:58:48.747697327Z" level=info msg="container event discarded" container=b3ceaacb1ce0f3d6917ef5382e91c158934a557b1e4b06f7a3aefc61e70a3102 type=CONTAINER_CREATED_EVENT Jan 20 01:58:48.792061 containerd[1641]: time="2026-01-20T01:58:48.783758914Z" level=info msg="container event discarded" container=b3ceaacb1ce0f3d6917ef5382e91c158934a557b1e4b06f7a3aefc61e70a3102 type=CONTAINER_STARTED_EVENT Jan 20 01:58:48.874389 containerd[1641]: time="2026-01-20T01:58:48.874236465Z" level=info msg="connecting to shim 5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70" address="unix:///run/containerd/s/61b37bb296e910db4cd783a6af26ebd39abcc2dfaf56e804961eed1288bd2494" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:58:49.241514 
systemd-networkd[1543]: calib5830ed241f: Link UP Jan 20 01:58:49.393500 systemd-networkd[1543]: calib5830ed241f: Gained carrier Jan 20 01:58:49.413684 systemd[1]: Started cri-containerd-79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204.scope - libcontainer container 79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204. Jan 20 01:58:49.775911 systemd-networkd[1543]: cali444cd502dc6: Gained IPv6LL Jan 20 01:58:49.949449 systemd[1]: Started cri-containerd-7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a.scope - libcontainer container 7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a. Jan 20 01:58:50.202273 containerd[1641]: time="2026-01-20T01:58:50.200976569Z" level=info msg="container event discarded" container=7fac54481a13836563ab33c11c75998e9551ab05c40948617e9070b3a8a78643 type=CONTAINER_CREATED_EVENT Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:40.290 [INFO][6320] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:40.760 [INFO][6320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0 calico-kube-controllers-7cb6ddc686- calico-system ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5 1209 0 2026-01-20 01:55:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cb6ddc686 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cb6ddc686-fcv7l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib5830ed241f [] [] }} ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ddc686-fcv7l" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:40.764 [INFO][6320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ddc686-fcv7l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:42.546 [INFO][6386] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" HandleID="k8s-pod-network.2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Workload="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:42.649 [INFO][6386] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" HandleID="k8s-pod-network.2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Workload="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003da420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cb6ddc686-fcv7l", "timestamp":"2026-01-20 01:58:42.546272455 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:42.649 [INFO][6386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:46.914 [INFO][6386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:46.915 [INFO][6386] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:47.747 [INFO][6386] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" host="localhost" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.073 [INFO][6386] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.243 [INFO][6386] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.323 [INFO][6386] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.411 [INFO][6386] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.411 [INFO][6386] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" host="localhost" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.578 [INFO][6386] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03 Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.771 [INFO][6386] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" host="localhost" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.988 [INFO][6386] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" host="localhost" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.994 [INFO][6386] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" host="localhost" Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.994 [INFO][6386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:58:50.279109 containerd[1641]: 2026-01-20 01:58:48.994 [INFO][6386] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" HandleID="k8s-pod-network.2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Workload="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" Jan 20 01:58:50.323197 containerd[1641]: 2026-01-20 01:58:49.185 [INFO][6320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ddc686-fcv7l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0", GenerateName:"calico-kube-controllers-7cb6ddc686-", Namespace:"calico-system", SelfLink:"", UID:"ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb6ddc686", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cb6ddc686-fcv7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5830ed241f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:50.323197 containerd[1641]: 2026-01-20 01:58:49.185 [INFO][6320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ddc686-fcv7l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" Jan 20 01:58:50.323197 containerd[1641]: 2026-01-20 01:58:49.185 [INFO][6320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5830ed241f ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ddc686-fcv7l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" Jan 20 01:58:50.323197 containerd[1641]: 2026-01-20 01:58:49.436 [INFO][6320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ddc686-fcv7l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" Jan 20 01:58:50.323197 containerd[1641]: 
2026-01-20 01:58:49.438 [INFO][6320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ddc686-fcv7l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0", GenerateName:"calico-kube-controllers-7cb6ddc686-", Namespace:"calico-system", SelfLink:"", UID:"ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cb6ddc686", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03", Pod:"calico-kube-controllers-7cb6ddc686-fcv7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib5830ed241f", MAC:"56:09:43:b5:92:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:50.323197 containerd[1641]: 
2026-01-20 01:58:50.139 [INFO][6320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" Namespace="calico-system" Pod="calico-kube-controllers-7cb6ddc686-fcv7l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cb6ddc686--fcv7l-eth0" Jan 20 01:58:50.577974 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 20 01:58:50.578119 kernel: audit: type=1334 audit(1768874330.553:608): prog-id=181 op=LOAD Jan 20 01:58:50.553000 audit: BPF prog-id=181 op=LOAD Jan 20 01:58:50.574000 audit: BPF prog-id=182 op=LOAD Jan 20 01:58:50.595163 kernel: audit: type=1334 audit(1768874330.574:609): prog-id=182 op=LOAD Jan 20 01:58:50.595304 kernel: audit: type=1300 audit(1768874330.574:609): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.574000 audit[6666]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:50.629123 kernel: audit: type=1327 audit(1768874330.574:609): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:50.574000 audit: BPF prog-id=182 op=UNLOAD Jan 20 01:58:50.645014 kernel: audit: type=1334 audit(1768874330.574:610): prog-id=182 op=UNLOAD Jan 20 01:58:50.716515 kernel: audit: type=1300 audit(1768874330.574:610): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.574000 audit[6666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:50.821861 kernel: audit: type=1327 audit(1768874330.574:610): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:50.574000 audit: BPF prog-id=183 op=LOAD Jan 20 01:58:50.880788 systemd[1]: Started cri-containerd-5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70.scope - libcontainer container 5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70. 
Jan 20 01:58:50.574000 audit[6666]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.925318 kernel: audit: type=1334 audit(1768874330.574:611): prog-id=183 op=LOAD Jan 20 01:58:50.925512 kernel: audit: type=1300 audit(1768874330.574:611): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.925556 kernel: audit: type=1327 audit(1768874330.574:611): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:50.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:50.945300 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:58:50.574000 audit: BPF prog-id=184 op=LOAD Jan 20 01:58:50.574000 audit[6666]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.574000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:50.574000 audit: BPF prog-id=184 op=UNLOAD Jan 20 01:58:50.574000 audit[6666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:50.574000 audit: BPF prog-id=183 op=UNLOAD Jan 20 01:58:50.574000 audit[6666]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:50.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:50.574000 audit: BPF prog-id=185 op=LOAD Jan 20 01:58:50.574000 audit[6666]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=6611 pid=6666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:58:50.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739636337653034373039333864623032303366663439653535613062 Jan 20 01:58:51.208000 audit: BPF prog-id=186 op=LOAD Jan 20 01:58:51.254000 audit: BPF prog-id=187 op=LOAD Jan 20 01:58:51.254000 audit[6652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=6610 pid=6652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762376430666166633335656436633031643836396534333032316336 Jan 20 01:58:51.254000 audit: BPF prog-id=187 op=UNLOAD Jan 20 01:58:51.254000 audit[6652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6610 pid=6652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762376430666166633335656436633031643836396534333032316336 Jan 20 01:58:51.264000 audit: BPF prog-id=188 op=LOAD Jan 20 01:58:51.264000 audit[6652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=6610 pid=6652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762376430666166633335656436633031643836396534333032316336 Jan 20 01:58:51.264000 audit: BPF prog-id=189 op=LOAD Jan 20 01:58:51.264000 audit[6652]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=6610 pid=6652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762376430666166633335656436633031643836396534333032316336 Jan 20 01:58:51.264000 audit: BPF prog-id=189 op=UNLOAD Jan 20 01:58:51.264000 audit[6652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6610 pid=6652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762376430666166633335656436633031643836396534333032316336 Jan 20 01:58:51.264000 audit: BPF prog-id=188 op=UNLOAD Jan 20 01:58:51.264000 audit[6652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6610 pid=6652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762376430666166633335656436633031643836396534333032316336 Jan 20 01:58:51.264000 audit: BPF prog-id=190 op=LOAD Jan 20 01:58:51.264000 audit[6652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=6610 pid=6652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762376430666166633335656436633031643836396534333032316336 Jan 20 01:58:51.367162 systemd-networkd[1543]: calib5830ed241f: Gained IPv6LL Jan 20 01:58:51.604000 audit: BPF prog-id=191 op=LOAD Jan 20 01:58:51.607803 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:58:51.639391 containerd[1641]: time="2026-01-20T01:58:51.638094495Z" level=info msg="connecting to shim 2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03" address="unix:///run/containerd/s/ccdfd93d576be948f5a38f8398a564a66dd25bbca81ed9f6d7125ce0965e9cda" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:58:51.747000 audit: BPF prog-id=192 op=LOAD Jan 20 01:58:51.747000 audit[6705]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000190238 a2=98 a3=0 items=0 ppid=6657 pid=6705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396534376661633835636638363334643263666237643066303365 Jan 20 01:58:51.753000 audit: BPF prog-id=192 op=UNLOAD Jan 20 01:58:51.753000 audit[6705]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6657 pid=6705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396534376661633835636638363334643263666237643066303365 Jan 20 01:58:51.755000 audit: BPF prog-id=193 op=LOAD Jan 20 01:58:51.755000 audit[6705]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000190488 a2=98 a3=0 items=0 ppid=6657 pid=6705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396534376661633835636638363334643263666237643066303365 Jan 20 01:58:51.755000 audit: BPF prog-id=194 op=LOAD Jan 20 01:58:51.755000 audit[6705]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000190218 a2=98 a3=0 items=0 ppid=6657 pid=6705 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396534376661633835636638363334643263666237643066303365 Jan 20 01:58:51.755000 audit: BPF prog-id=194 op=UNLOAD Jan 20 01:58:51.755000 audit[6705]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6657 pid=6705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396534376661633835636638363334643263666237643066303365 Jan 20 01:58:51.755000 audit: BPF prog-id=193 op=UNLOAD Jan 20 01:58:51.755000 audit[6705]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6657 pid=6705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396534376661633835636638363334643263666237643066303365 Jan 20 01:58:51.755000 audit: BPF prog-id=195 op=LOAD Jan 20 01:58:51.755000 audit[6705]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001906e8 a2=98 a3=0 items=0 ppid=6657 pid=6705 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:51.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563396534376661633835636638363334643263666237643066303365 Jan 20 01:58:51.899884 systemd-networkd[1543]: cali0701af6d53a: Link UP Jan 20 01:58:51.902437 systemd-networkd[1543]: cali0701af6d53a: Gained carrier Jan 20 01:58:52.023585 containerd[1641]: time="2026-01-20T01:58:51.981141000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9gv2m,Uid:ac1c9092-8cef-4868-9089-0927692efc39,Namespace:calico-system,Attempt:0,} returns sandbox id \"79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204\"" Jan 20 01:58:52.035276 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:58:52.134430 containerd[1641]: time="2026-01-20T01:58:52.134072192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:41.285 [INFO][6334] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:41.635 [INFO][6334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--85cc9877c8--b9ftn-eth0 whisker-85cc9877c8- calico-system 0ce5e731-d9ff-4094-add4-a475d64c6d24 1526 0 2026-01-20 01:58:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85cc9877c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-85cc9877c8-b9ftn eth0 whisker [] 
[] [kns.calico-system ksa.calico-system.whisker] cali0701af6d53a [] [] }} ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Namespace="calico-system" Pod="whisker-85cc9877c8-b9ftn" WorkloadEndpoint="localhost-k8s-whisker--85cc9877c8--b9ftn-" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:41.635 [INFO][6334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Namespace="calico-system" Pod="whisker-85cc9877c8-b9ftn" WorkloadEndpoint="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:42.678 [INFO][6405] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" HandleID="k8s-pod-network.93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Workload="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:42.678 [INFO][6405] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" HandleID="k8s-pod-network.93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Workload="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000327ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-85cc9877c8-b9ftn", "timestamp":"2026-01-20 01:58:42.678150221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:42.678 [INFO][6405] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:48.996 [INFO][6405] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:49.022 [INFO][6405] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:49.324 [INFO][6405] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" host="localhost" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:49.584 [INFO][6405] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:49.934 [INFO][6405] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:50.120 [INFO][6405] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:50.231 [INFO][6405] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:50.245 [INFO][6405] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" host="localhost" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:50.289 [INFO][6405] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2 Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:50.429 [INFO][6405] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" host="localhost" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:50.663 [INFO][6405] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" host="localhost" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:50.663 [INFO][6405] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" host="localhost" Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:50.663 [INFO][6405] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:58:52.428645 containerd[1641]: 2026-01-20 01:58:50.663 [INFO][6405] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" HandleID="k8s-pod-network.93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Workload="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" Jan 20 01:58:52.511198 containerd[1641]: 2026-01-20 01:58:51.637 [INFO][6334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Namespace="calico-system" Pod="whisker-85cc9877c8-b9ftn" WorkloadEndpoint="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85cc9877c8--b9ftn-eth0", GenerateName:"whisker-85cc9877c8-", Namespace:"calico-system", SelfLink:"", UID:"0ce5e731-d9ff-4094-add4-a475d64c6d24", ResourceVersion:"1526", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85cc9877c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-85cc9877c8-b9ftn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0701af6d53a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:52.511198 containerd[1641]: 2026-01-20 01:58:51.637 [INFO][6334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Namespace="calico-system" Pod="whisker-85cc9877c8-b9ftn" WorkloadEndpoint="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" Jan 20 01:58:52.511198 containerd[1641]: 2026-01-20 01:58:51.637 [INFO][6334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0701af6d53a ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Namespace="calico-system" Pod="whisker-85cc9877c8-b9ftn" WorkloadEndpoint="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" Jan 20 01:58:52.511198 containerd[1641]: 2026-01-20 01:58:51.885 [INFO][6334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Namespace="calico-system" Pod="whisker-85cc9877c8-b9ftn" WorkloadEndpoint="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" Jan 20 01:58:52.511198 containerd[1641]: 2026-01-20 01:58:52.092 [INFO][6334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" 
Namespace="calico-system" Pod="whisker-85cc9877c8-b9ftn" WorkloadEndpoint="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85cc9877c8--b9ftn-eth0", GenerateName:"whisker-85cc9877c8-", Namespace:"calico-system", SelfLink:"", UID:"0ce5e731-d9ff-4094-add4-a475d64c6d24", ResourceVersion:"1526", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 58, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85cc9877c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2", Pod:"whisker-85cc9877c8-b9ftn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0701af6d53a", MAC:"f6:50:e4:ad:93:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:52.511198 containerd[1641]: 2026-01-20 01:58:52.393 [INFO][6334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" Namespace="calico-system" Pod="whisker-85cc9877c8-b9ftn" WorkloadEndpoint="localhost-k8s-whisker--85cc9877c8--b9ftn-eth0" Jan 20 01:58:52.643415 systemd[1]: Started 
cri-containerd-2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03.scope - libcontainer container 2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03. Jan 20 01:58:52.877950 containerd[1641]: time="2026-01-20T01:58:52.876042854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:52.943219 containerd[1641]: time="2026-01-20T01:58:52.929110649Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:58:52.943219 containerd[1641]: time="2026-01-20T01:58:52.929251358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:53.003010 kubelet[3041]: E0120 01:58:52.998125 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:58:53.003010 kubelet[3041]: E0120 01:58:52.998197 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:58:53.003010 kubelet[3041]: E0120 01:58:52.999430 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:53.148503 
containerd[1641]: time="2026-01-20T01:58:53.104913908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:58:53.609883 containerd[1641]: time="2026-01-20T01:58:53.608480867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-s8m7m,Uid:15d966be-bae7-42a3-83b7-ced10b64bcb2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a\"" Jan 20 01:58:53.671955 containerd[1641]: time="2026-01-20T01:58:53.671169832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:53.676000 audit: BPF prog-id=196 op=LOAD Jan 20 01:58:53.681000 audit: BPF prog-id=197 op=LOAD Jan 20 01:58:53.681000 audit[6807]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=6759 pid=6807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:53.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366337393334336332363965623362356262336537643764393833 Jan 20 01:58:53.681000 audit: BPF prog-id=197 op=UNLOAD Jan 20 01:58:53.681000 audit[6807]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6759 pid=6807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:53.681000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366337393334336332363965623362356262336537643764393833 Jan 20 01:58:53.687000 audit: BPF prog-id=198 op=LOAD Jan 20 01:58:53.687000 audit[6807]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=6759 pid=6807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:53.687000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366337393334336332363965623362356262336537643764393833 Jan 20 01:58:53.701000 audit: BPF prog-id=199 op=LOAD Jan 20 01:58:53.701000 audit[6807]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=6759 pid=6807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:53.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366337393334336332363965623362356262336537643764393833 Jan 20 01:58:53.701000 audit: BPF prog-id=199 op=UNLOAD Jan 20 01:58:53.701000 audit[6807]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6759 pid=6807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:58:53.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366337393334336332363965623362356262336537643764393833 Jan 20 01:58:53.701000 audit: BPF prog-id=198 op=UNLOAD Jan 20 01:58:53.701000 audit[6807]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6759 pid=6807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:53.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366337393334336332363965623362356262336537643764393833 Jan 20 01:58:53.740000 audit: BPF prog-id=200 op=LOAD Jan 20 01:58:53.740000 audit[6807]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=6759 pid=6807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:53.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366337393334336332363965623362356262336537643764393833 Jan 20 01:58:53.777519 systemd-networkd[1543]: calie4f50601363: Link UP Jan 20 01:58:53.791591 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:58:53.798865 containerd[1641]: time="2026-01-20T01:58:53.798741460Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:53.799394 containerd[1641]: time="2026-01-20T01:58:53.799102065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:58:53.812084 kubelet[3041]: E0120 01:58:53.812033 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:58:53.812239 kubelet[3041]: E0120 01:58:53.812206 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:58:53.812727 kubelet[3041]: E0120 01:58:53.812697 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:53.813045 containerd[1641]: time="2026-01-20T01:58:53.813015087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:58:53.813198 kubelet[3041]: E0120 01:58:53.813159 3041 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:58:53.833728 systemd-networkd[1543]: cali0701af6d53a: Gained IPv6LL Jan 20 01:58:53.906447 containerd[1641]: time="2026-01-20T01:58:53.904881621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xgrtg,Uid:888a237a-dea3-4279-b3e9-e88855e903cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70\"" Jan 20 01:58:53.909298 systemd-networkd[1543]: calie4f50601363: Gained carrier Jan 20 01:58:53.925539 kubelet[3041]: E0120 01:58:53.922111 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:58:53.930529 containerd[1641]: time="2026-01-20T01:58:53.929943810Z" level=info msg="container event discarded" container=7fac54481a13836563ab33c11c75998e9551ab05c40948617e9070b3a8a78643 type=CONTAINER_STARTED_EVENT Jan 20 01:58:53.938636 containerd[1641]: time="2026-01-20T01:58:53.938490386Z" level=info msg="connecting to shim 93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" address="unix:///run/containerd/s/ce27623b44c4afca11cba3abd1c75b671bb640507fecf3992e12b3084f48dd89" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:58:53.941326 
containerd[1641]: time="2026-01-20T01:58:53.941284312Z" level=info msg="CreateContainer within sandbox \"5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:58:54.064465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542856994.mount: Deactivated successfully. Jan 20 01:58:54.070419 containerd[1641]: time="2026-01-20T01:58:54.070298651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:54.075871 containerd[1641]: time="2026-01-20T01:58:54.075777506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:58:54.076449 containerd[1641]: time="2026-01-20T01:58:54.076420292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:54.090764 kubelet[3041]: E0120 01:58:54.086129 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:58:54.095331 containerd[1641]: time="2026-01-20T01:58:54.095220628Z" level=info msg="Container d6c1d468cca201bd5e370230f29385a8e0716369ce74d8b8d3e6fc8e3ecf922a: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:58:54.148910 kubelet[3041]: E0120 01:58:54.148174 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 
01:58:54.148910 kubelet[3041]: E0120 01:58:54.148324 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:54.148910 kubelet[3041]: E0120 01:58:54.148464 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:40.859 [INFO][6368] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:41.525 [INFO][6368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0 calico-apiserver-776d4dc5d4- calico-apiserver 57304ae8-4142-4837-ab19-941e654eb081 1219 0 2026-01-20 01:54:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:776d4dc5d4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-776d4dc5d4-8w9l5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie4f50601363 [] [] }} ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" 
Namespace="calico-apiserver" Pod="calico-apiserver-776d4dc5d4-8w9l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:41.584 [INFO][6368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Namespace="calico-apiserver" Pod="calico-apiserver-776d4dc5d4-8w9l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:42.785 [INFO][6407] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" HandleID="k8s-pod-network.90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Workload="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:42.785 [INFO][6407] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" HandleID="k8s-pod-network.90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Workload="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00063d5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-776d4dc5d4-8w9l5", "timestamp":"2026-01-20 01:58:42.785107887 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:42.785 [INFO][6407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:50.667 [INFO][6407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:50.668 [INFO][6407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:51.201 [INFO][6407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" host="localhost" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:51.540 [INFO][6407] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:51.831 [INFO][6407] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:51.937 [INFO][6407] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:52.083 [INFO][6407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:52.083 [INFO][6407] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" host="localhost" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:52.372 [INFO][6407] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:52.627 [INFO][6407] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" host="localhost" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:53.187 [INFO][6407] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" host="localhost" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:53.192 [INFO][6407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" host="localhost" Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:53.192 [INFO][6407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:58:54.393973 containerd[1641]: 2026-01-20 01:58:53.192 [INFO][6407] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" HandleID="k8s-pod-network.90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Workload="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" Jan 20 01:58:54.398628 containerd[1641]: 2026-01-20 01:58:53.479 [INFO][6368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Namespace="calico-apiserver" Pod="calico-apiserver-776d4dc5d4-8w9l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0", GenerateName:"calico-apiserver-776d4dc5d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"57304ae8-4142-4837-ab19-941e654eb081", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"776d4dc5d4", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-776d4dc5d4-8w9l5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4f50601363", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:54.398628 containerd[1641]: 2026-01-20 01:58:53.479 [INFO][6368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Namespace="calico-apiserver" Pod="calico-apiserver-776d4dc5d4-8w9l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" Jan 20 01:58:54.398628 containerd[1641]: 2026-01-20 01:58:53.479 [INFO][6368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4f50601363 ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Namespace="calico-apiserver" Pod="calico-apiserver-776d4dc5d4-8w9l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" Jan 20 01:58:54.398628 containerd[1641]: 2026-01-20 01:58:53.871 [INFO][6368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Namespace="calico-apiserver" Pod="calico-apiserver-776d4dc5d4-8w9l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" Jan 20 01:58:54.398628 containerd[1641]: 2026-01-20 01:58:53.939 [INFO][6368] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Namespace="calico-apiserver" Pod="calico-apiserver-776d4dc5d4-8w9l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0", GenerateName:"calico-apiserver-776d4dc5d4-", Namespace:"calico-apiserver", SelfLink:"", UID:"57304ae8-4142-4837-ab19-941e654eb081", ResourceVersion:"1219", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"776d4dc5d4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b", Pod:"calico-apiserver-776d4dc5d4-8w9l5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie4f50601363", MAC:"a2:a1:c1:14:6c:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:54.398628 containerd[1641]: 2026-01-20 01:58:54.107 [INFO][6368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" Namespace="calico-apiserver" Pod="calico-apiserver-776d4dc5d4-8w9l5" WorkloadEndpoint="localhost-k8s-calico--apiserver--776d4dc5d4--8w9l5-eth0" Jan 20 01:58:54.513021 containerd[1641]: time="2026-01-20T01:58:54.512842500Z" level=info msg="CreateContainer within sandbox \"5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6c1d468cca201bd5e370230f29385a8e0716369ce74d8b8d3e6fc8e3ecf922a\"" Jan 20 01:58:54.538617 containerd[1641]: time="2026-01-20T01:58:54.538569133Z" level=info msg="StartContainer for \"d6c1d468cca201bd5e370230f29385a8e0716369ce74d8b8d3e6fc8e3ecf922a\"" Jan 20 01:58:54.686090 kubelet[3041]: E0120 01:58:54.652126 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:58:54.689853 containerd[1641]: time="2026-01-20T01:58:54.686728004Z" level=info msg="connecting to shim d6c1d468cca201bd5e370230f29385a8e0716369ce74d8b8d3e6fc8e3ecf922a" address="unix:///run/containerd/s/61b37bb296e910db4cd783a6af26ebd39abcc2dfaf56e804961eed1288bd2494" protocol=ttrpc version=3 Jan 20 01:58:54.741000 audit: BPF prog-id=201 op=LOAD Jan 20 01:58:54.741000 audit[6894]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2a9c2930 a2=98 a3=1fffffffffffffff items=0 ppid=6477 pid=6894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.741000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:54.832000 audit: BPF prog-id=201 op=UNLOAD Jan 20 01:58:54.832000 audit[6894]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe2a9c2900 a3=0 items=0 ppid=6477 pid=6894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.832000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:54.840000 audit: BPF prog-id=202 op=LOAD Jan 20 01:58:54.840000 audit[6894]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2a9c2810 a2=94 a3=3 items=0 ppid=6477 pid=6894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.840000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:54.841000 audit: BPF prog-id=202 op=UNLOAD Jan 20 01:58:54.841000 audit[6894]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe2a9c2810 a2=94 a3=3 items=0 ppid=6477 pid=6894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.841000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:54.841000 audit: BPF prog-id=203 op=LOAD Jan 20 01:58:54.841000 audit[6894]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe2a9c2850 a2=94 a3=7ffe2a9c2a30 items=0 ppid=6477 pid=6894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.841000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:54.841000 audit: BPF prog-id=203 op=UNLOAD Jan 20 01:58:54.841000 audit[6894]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe2a9c2850 a2=94 a3=7ffe2a9c2a30 items=0 ppid=6477 pid=6894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.841000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 01:58:54.878000 audit: BPF prog-id=204 op=LOAD Jan 20 01:58:54.878000 audit[6896]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcb42d1ca0 a2=98 a3=3 items=0 
ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.878000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:54.878000 audit: BPF prog-id=204 op=UNLOAD Jan 20 01:58:54.878000 audit[6896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcb42d1c70 a3=0 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.878000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:54.881000 audit: BPF prog-id=205 op=LOAD Jan 20 01:58:54.881000 audit[6896]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb42d1a90 a2=94 a3=54428f items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:54.881000 audit: BPF prog-id=205 op=UNLOAD Jan 20 01:58:54.881000 audit[6896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcb42d1a90 a2=94 a3=54428f items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:54.881000 audit: BPF prog-id=206 op=LOAD Jan 20 01:58:54.881000 audit[6896]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb42d1ac0 a2=94 a3=2 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:54.881000 audit: BPF prog-id=206 op=UNLOAD Jan 20 01:58:54.881000 audit[6896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcb42d1ac0 a2=0 a3=2 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:54.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:58:55.216131 kubelet[3041]: E0120 01:58:55.215960 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:58:55.496732 systemd-networkd[1543]: calie4f50601363: Gained IPv6LL Jan 20 01:58:55.759330 systemd-networkd[1543]: calic47060c57b5: Link UP Jan 20 01:58:55.844224 systemd-networkd[1543]: calic47060c57b5: Gained carrier Jan 20 01:58:55.931099 containerd[1641]: time="2026-01-20T01:58:55.931045473Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cb6ddc686-fcv7l,Uid:ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03\"" Jan 20 01:58:56.022678 containerd[1641]: time="2026-01-20T01:58:56.022030697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:58:56.082115 systemd[1]: Started cri-containerd-93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2.scope - libcontainer container 93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2. Jan 20 01:58:56.376737 containerd[1641]: time="2026-01-20T01:58:56.369295548Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:56.442561 kubelet[3041]: E0120 01:58:56.396713 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:58:56.496404 systemd[1]: Started cri-containerd-d6c1d468cca201bd5e370230f29385a8e0716369ce74d8b8d3e6fc8e3ecf922a.scope - libcontainer container d6c1d468cca201bd5e370230f29385a8e0716369ce74d8b8d3e6fc8e3ecf922a. 
Jan 20 01:58:56.608618 kernel: kauditd_printk_skb: 114 callbacks suppressed Jan 20 01:58:56.621080 kernel: audit: type=1325 audit(1768874336.509:652): table=filter:121 family=2 entries=20 op=nft_register_rule pid=6922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:56.509000 audit[6922]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=6922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:56.621508 containerd[1641]: time="2026-01-20T01:58:56.542515523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:56.621508 containerd[1641]: time="2026-01-20T01:58:56.542657165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:58:56.509000 audit[6922]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffed76cc60 a2=0 a3=7fffed76cc4c items=0 ppid=3158 pid=6922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:56.640032 kubelet[3041]: E0120 01:58:56.623502 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:58:56.640319 kubelet[3041]: E0120 01:58:56.640271 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:58:56.640597 kubelet[3041]: E0120 01:58:56.640563 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:56.647566 kubelet[3041]: E0120 01:58:56.640740 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:58:56.694219 containerd[1641]: time="2026-01-20T01:58:56.694026060Z" level=info msg="container event discarded" container=53ae73f36a5db7472eb3eae4332cb63fdcc3ce016b009f67a649e4f4395cdc23 type=CONTAINER_CREATED_EVENT Jan 20 01:58:56.700861 containerd[1641]: time="2026-01-20T01:58:56.699674970Z" level=info msg="container event discarded" container=53ae73f36a5db7472eb3eae4332cb63fdcc3ce016b009f67a649e4f4395cdc23 type=CONTAINER_STARTED_EVENT Jan 20 01:58:56.703060 containerd[1641]: time="2026-01-20T01:58:56.696267593Z" level=info msg="connecting to shim 90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" address="unix:///run/containerd/s/daf66d6d42de7f88a9bede38fd061ab9f8e4ebd1f5e561ded034bb8429fa3f92" namespace=k8s.io protocol=ttrpc version=3 Jan 
20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:41.234 [INFO][6308] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:41.836 [INFO][6308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--pxsrr-eth0 goldmane-7c778bb748- calico-system 738fa74e-ddb6-4c59-8db5-d8c8658e06b6 1224 0 2026-01-20 01:55:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-pxsrr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic47060c57b5 [] [] }} ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Namespace="calico-system" Pod="goldmane-7c778bb748-pxsrr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pxsrr-" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:41.836 [INFO][6308] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Namespace="calico-system" Pod="goldmane-7c778bb748-pxsrr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:42.802 [INFO][6424] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" HandleID="k8s-pod-network.057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Workload="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:42.803 [INFO][6424] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" 
HandleID="k8s-pod-network.057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Workload="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000131f50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-pxsrr", "timestamp":"2026-01-20 01:58:42.80286433 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:42.803 [INFO][6424] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:53.214 [INFO][6424] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:53.219 [INFO][6424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:53.631 [INFO][6424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" host="localhost" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:53.946 [INFO][6424] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:54.246 [INFO][6424] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:54.275 [INFO][6424] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:54.297 [INFO][6424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:54.298 
[INFO][6424] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" host="localhost" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:54.920 [INFO][6424] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:55.027 [INFO][6424] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" host="localhost" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:55.130 [INFO][6424] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" host="localhost" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:55.130 [INFO][6424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" host="localhost" Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:55.139 [INFO][6424] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:58:56.712411 containerd[1641]: 2026-01-20 01:58:55.147 [INFO][6424] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" HandleID="k8s-pod-network.057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Workload="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" Jan 20 01:58:56.719200 containerd[1641]: 2026-01-20 01:58:55.492 [INFO][6308] cni-plugin/k8s.go 418: Populated endpoint ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Namespace="calico-system" Pod="goldmane-7c778bb748-pxsrr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pxsrr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"738fa74e-ddb6-4c59-8db5-d8c8658e06b6", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 55, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-pxsrr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic47060c57b5", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:56.719200 containerd[1641]: 2026-01-20 01:58:55.493 [INFO][6308] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Namespace="calico-system" Pod="goldmane-7c778bb748-pxsrr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" Jan 20 01:58:56.719200 containerd[1641]: 2026-01-20 01:58:55.493 [INFO][6308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic47060c57b5 ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Namespace="calico-system" Pod="goldmane-7c778bb748-pxsrr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" Jan 20 01:58:56.719200 containerd[1641]: 2026-01-20 01:58:55.771 [INFO][6308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Namespace="calico-system" Pod="goldmane-7c778bb748-pxsrr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" Jan 20 01:58:56.719200 containerd[1641]: 2026-01-20 01:58:55.777 [INFO][6308] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Namespace="calico-system" Pod="goldmane-7c778bb748-pxsrr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--pxsrr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"738fa74e-ddb6-4c59-8db5-d8c8658e06b6", ResourceVersion:"1224", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 55, 17, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e", Pod:"goldmane-7c778bb748-pxsrr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic47060c57b5", MAC:"b6:f7:b9:e0:ec:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:56.719200 containerd[1641]: 2026-01-20 01:58:56.244 [INFO][6308] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" Namespace="calico-system" Pod="goldmane-7c778bb748-pxsrr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--pxsrr-eth0" Jan 20 01:58:56.817482 kernel: audit: type=1300 audit(1768874336.509:652): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffed76cc60 a2=0 a3=7fffed76cc4c items=0 ppid=3158 pid=6922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:56.817565 kernel: audit: type=1327 audit(1768874336.509:652): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:56.817611 kernel: audit: type=1325 
audit(1768874336.629:653): table=nat:122 family=2 entries=14 op=nft_register_rule pid=6922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:56.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:56.629000 audit[6922]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=6922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:58:56.907414 kernel: audit: type=1300 audit(1768874336.629:653): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffed76cc60 a2=0 a3=0 items=0 ppid=3158 pid=6922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:56.629000 audit[6922]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffed76cc60 a2=0 a3=0 items=0 ppid=3158 pid=6922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.018773 kernel: audit: type=1327 audit(1768874336.629:653): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:56.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:58:57.278138 kernel: audit: type=1334 audit(1768874337.138:654): prog-id=207 op=LOAD Jan 20 01:58:57.278285 kernel: audit: type=1334 audit(1768874337.141:655): prog-id=208 op=LOAD Jan 20 01:58:57.278319 kernel: audit: type=1300 audit(1768874337.141:655): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6855 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.138000 audit: BPF prog-id=207 op=LOAD Jan 20 01:58:57.141000 audit: BPF prog-id=208 op=LOAD Jan 20 01:58:57.141000 audit[6870]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6855 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.228428 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:58:57.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933636266656639373864666561653536383832313266626164636561 Jan 20 01:58:57.449141 kernel: audit: type=1327 audit(1768874337.141:655): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933636266656639373864666561653536383832313266626164636561 Jan 20 01:58:57.141000 audit: BPF prog-id=208 op=UNLOAD Jan 20 01:58:57.141000 audit[6870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6855 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933636266656639373864666561653536383832313266626164636561 Jan 20 01:58:57.143000 audit: BPF prog-id=209 op=LOAD 
Jan 20 01:58:57.143000 audit[6870]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=6855 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933636266656639373864666561653536383832313266626164636561 Jan 20 01:58:57.143000 audit: BPF prog-id=210 op=LOAD Jan 20 01:58:57.143000 audit[6870]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=6855 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933636266656639373864666561653536383832313266626164636561 Jan 20 01:58:57.143000 audit: BPF prog-id=210 op=UNLOAD Jan 20 01:58:57.143000 audit[6870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6855 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933636266656639373864666561653536383832313266626164636561 Jan 20 
01:58:57.143000 audit: BPF prog-id=209 op=UNLOAD Jan 20 01:58:57.143000 audit[6870]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6855 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933636266656639373864666561653536383832313266626164636561 Jan 20 01:58:57.143000 audit: BPF prog-id=211 op=LOAD Jan 20 01:58:57.143000 audit[6870]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=6855 pid=6870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933636266656639373864666561653536383832313266626164636561 Jan 20 01:58:57.482000 audit: BPF prog-id=212 op=LOAD Jan 20 01:58:57.497000 audit: BPF prog-id=213 op=LOAD Jan 20 01:58:57.497000 audit[6895]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=6657 pid=6895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.497000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436633164343638636361323031626435653337303233306632393338 Jan 20 01:58:57.497000 audit: BPF prog-id=213 op=UNLOAD Jan 20 01:58:57.497000 audit[6895]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6657 pid=6895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436633164343638636361323031626435653337303233306632393338 Jan 20 01:58:57.497000 audit: BPF prog-id=214 op=LOAD Jan 20 01:58:57.497000 audit[6895]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=6657 pid=6895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436633164343638636361323031626435653337303233306632393338 Jan 20 01:58:57.497000 audit: BPF prog-id=215 op=LOAD Jan 20 01:58:57.497000 audit[6895]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=6657 pid=6895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:58:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436633164343638636361323031626435653337303233306632393338 Jan 20 01:58:57.497000 audit: BPF prog-id=215 op=UNLOAD Jan 20 01:58:57.497000 audit[6895]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6657 pid=6895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436633164343638636361323031626435653337303233306632393338 Jan 20 01:58:57.497000 audit: BPF prog-id=214 op=UNLOAD Jan 20 01:58:57.497000 audit[6895]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6657 pid=6895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436633164343638636361323031626435653337303233306632393338 Jan 20 01:58:57.497000 audit: BPF prog-id=216 op=LOAD Jan 20 01:58:57.497000 audit[6895]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=6657 pid=6895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:57.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436633164343638636361323031626435653337303233306632393338 Jan 20 01:58:57.540149 kubelet[3041]: E0120 01:58:57.524831 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:58:57.617046 systemd-networkd[1543]: calic47060c57b5: Gained IPv6LL Jan 20 01:58:57.837607 systemd-networkd[1543]: cali769528b8bd7: Link UP Jan 20 01:58:57.871550 systemd-networkd[1543]: cali769528b8bd7: Gained carrier Jan 20 01:58:58.150580 containerd[1641]: time="2026-01-20T01:58:58.150255754Z" level=error msg="get state for 93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2" error="context deadline exceeded" Jan 20 01:58:58.181069 containerd[1641]: time="2026-01-20T01:58:58.181003962Z" level=warning msg="unknown status" status=0 Jan 20 01:58:58.219790 containerd[1641]: time="2026-01-20T01:58:58.219685531Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:58:58.322322 systemd[1]: Started cri-containerd-90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b.scope - libcontainer container 90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b. 
Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:46.267 [INFO][6461] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:46.789 [INFO][6461] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--ddv57-eth0 coredns-66bc5c9577- kube-system b4e4578c-79c2-452b-9829-4499e381b357 1215 0 2026-01-20 01:53:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-ddv57 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali769528b8bd7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Namespace="kube-system" Pod="coredns-66bc5c9577-ddv57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ddv57-" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:46.812 [INFO][6461] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Namespace="kube-system" Pod="coredns-66bc5c9577-ddv57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:48.731 [INFO][6589] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" HandleID="k8s-pod-network.e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Workload="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:48.796 [INFO][6589] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" HandleID="k8s-pod-network.e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Workload="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000398200), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-ddv57", "timestamp":"2026-01-20 01:58:48.731025522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:48.797 [INFO][6589] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:55.139 [INFO][6589] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:55.139 [INFO][6589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:55.761 [INFO][6589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" host="localhost" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:56.130 [INFO][6589] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:56.484 [INFO][6589] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:56.550 [INFO][6589] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:56.844 [INFO][6589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:56.844 [INFO][6589] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" host="localhost" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:56.934 [INFO][6589] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08 Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:57.340 [INFO][6589] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" host="localhost" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:57.564 [INFO][6589] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" host="localhost" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:57.564 [INFO][6589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" host="localhost" Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:57.564 [INFO][6589] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:58:58.927885 containerd[1641]: 2026-01-20 01:58:57.564 [INFO][6589] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" HandleID="k8s-pod-network.e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Workload="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" Jan 20 01:58:58.928890 containerd[1641]: 2026-01-20 01:58:57.786 [INFO][6461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Namespace="kube-system" Pod="coredns-66bc5c9577-ddv57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ddv57-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4e4578c-79c2-452b-9829-4499e381b357", ResourceVersion:"1215", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-ddv57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali769528b8bd7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:58.928890 containerd[1641]: 2026-01-20 01:58:57.791 [INFO][6461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Namespace="kube-system" Pod="coredns-66bc5c9577-ddv57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" Jan 20 01:58:58.928890 containerd[1641]: 2026-01-20 01:58:57.791 [INFO][6461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali769528b8bd7 ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Namespace="kube-system" Pod="coredns-66bc5c9577-ddv57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" Jan 20 01:58:58.928890 containerd[1641]: 2026-01-20 01:58:57.873 [INFO][6461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Namespace="kube-system" Pod="coredns-66bc5c9577-ddv57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" Jan 20 01:58:58.928890 containerd[1641]: 2026-01-20 01:58:58.183 [INFO][6461] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Namespace="kube-system" Pod="coredns-66bc5c9577-ddv57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ddv57-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b4e4578c-79c2-452b-9829-4499e381b357", ResourceVersion:"1215", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 53, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08", Pod:"coredns-66bc5c9577-ddv57", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali769528b8bd7", MAC:"6e:08:1c:7f:60:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:58:58.928890 containerd[1641]: 2026-01-20 01:58:58.671 [INFO][6461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" Namespace="kube-system" Pod="coredns-66bc5c9577-ddv57" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ddv57-eth0" Jan 20 01:58:59.161532 containerd[1641]: time="2026-01-20T01:58:59.161382423Z" level=info msg="StartContainer for \"d6c1d468cca201bd5e370230f29385a8e0716369ce74d8b8d3e6fc8e3ecf922a\" returns successfully" Jan 20 01:58:59.175293 containerd[1641]: time="2026-01-20T01:58:59.175203341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85cc9877c8-b9ftn,Uid:0ce5e731-d9ff-4094-add4-a475d64c6d24,Namespace:calico-system,Attempt:0,} returns sandbox id \"93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2\"" Jan 20 01:58:59.399127 containerd[1641]: time="2026-01-20T01:58:59.398890282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:58:59.411664 containerd[1641]: time="2026-01-20T01:58:59.411451654Z" level=info msg="connecting to shim 057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e" address="unix:///run/containerd/s/6142f186f9e459b1edc20a218632e0f31e3c197a8382123cf2067111c01bbde6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:58:59.761000 audit: BPF prog-id=217 op=LOAD Jan 20 01:58:59.801102 containerd[1641]: time="2026-01-20T01:58:59.780655039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:58:59.801102 containerd[1641]: 
time="2026-01-20T01:58:59.795958144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:58:59.801102 containerd[1641]: time="2026-01-20T01:58:59.800744996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:58:59.830270 kubelet[3041]: E0120 01:58:59.814971 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:58:59.830270 kubelet[3041]: E0120 01:58:59.815028 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:58:59.830270 kubelet[3041]: E0120 01:58:59.817709 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:58:59.902000 audit: BPF prog-id=218 op=LOAD Jan 20 01:58:59.902000 audit[6960]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=6928 pid=6960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:59.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623237396261633832616433336437313436363433643932626232 Jan 20 01:58:59.902000 audit: BPF prog-id=218 op=UNLOAD Jan 20 01:58:59.902000 audit[6960]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6928 pid=6960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:59.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623237396261633832616433336437313436363433643932626232 Jan 20 01:58:59.923176 systemd-networkd[1543]: cali769528b8bd7: Gained IPv6LL Jan 20 01:58:59.939000 audit: BPF prog-id=219 op=LOAD Jan 20 01:58:59.939000 audit[6960]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=6928 pid=6960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:59.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623237396261633832616433336437313436363433643932626232 Jan 20 01:58:59.951000 audit: BPF prog-id=220 op=LOAD Jan 20 01:58:59.951000 audit[6960]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=6928 pid=6960 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:59.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623237396261633832616433336437313436363433643932626232 Jan 20 01:58:59.951000 audit: BPF prog-id=220 op=UNLOAD Jan 20 01:58:59.951000 audit[6960]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=6928 pid=6960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:59.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623237396261633832616433336437313436363433643932626232 Jan 20 01:58:59.951000 audit: BPF prog-id=219 op=UNLOAD Jan 20 01:58:59.951000 audit[6960]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6928 pid=6960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:59.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623237396261633832616433336437313436363433643932626232 Jan 20 01:58:59.975133 containerd[1641]: time="2026-01-20T01:58:59.975090471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 
01:58:59.951000 audit: BPF prog-id=221 op=LOAD Jan 20 01:58:59.951000 audit[6960]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=6928 pid=6960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:58:59.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930623237396261633832616433336437313436363433643932626232 Jan 20 01:59:00.047550 kubelet[3041]: E0120 01:59:00.047505 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:00.132132 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:59:00.163689 containerd[1641]: time="2026-01-20T01:59:00.137696875Z" level=info msg="connecting to shim e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08" address="unix:///run/containerd/s/3083ae1a54f5f8e90775daa2b61246f856f986ddde2c8c55735f932e4d255877" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:59:00.321795 containerd[1641]: time="2026-01-20T01:59:00.317815798Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:00.387301 containerd[1641]: time="2026-01-20T01:59:00.370407467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:59:00.387301 containerd[1641]: time="2026-01-20T01:59:00.370578994Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:00.387529 kubelet[3041]: E0120 01:59:00.370902 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:00.387529 kubelet[3041]: E0120 01:59:00.370958 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:00.387529 kubelet[3041]: E0120 01:59:00.371045 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:00.387529 kubelet[3041]: E0120 01:59:00.371092 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 01:59:00.936478 systemd[1]: Started cri-containerd-057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e.scope - libcontainer container 057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e. Jan 20 01:59:01.199082 containerd[1641]: time="2026-01-20T01:59:01.147933479Z" level=error msg="get state for 90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b" error="context deadline exceeded" Jan 20 01:59:01.199082 containerd[1641]: time="2026-01-20T01:59:01.148032672Z" level=warning msg="unknown status" status=0 Jan 20 01:59:01.199040 systemd-networkd[1543]: calic19ff182469: Link UP Jan 20 01:59:01.199493 systemd-networkd[1543]: calic19ff182469: Gained carrier Jan 20 01:59:01.547236 kubelet[3041]: E0120 01:59:01.538568 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:01.567623 kubelet[3041]: I0120 01:59:01.554082 3041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xgrtg" podStartSLOduration=326.554055326 podStartE2EDuration="5m26.554055326s" podCreationTimestamp="2026-01-20 01:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:59:00.446295693 +0000 UTC m=+326.450984588" watchObservedRunningTime="2026-01-20 01:59:01.554055326 +0000 UTC m=+327.558744242" Jan 20 01:59:01.659869 kubelet[3041]: E0120 01:59:01.598298 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 01:59:01.748577 systemd[1]: Started cri-containerd-e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08.scope - libcontainer container e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08. Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:46.800 [INFO][6511] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:47.375 [INFO][6511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0 calico-apiserver-6576c69f97- calico-apiserver e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0 1220 0 2026-01-20 01:54:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6576c69f97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6576c69f97-z9lbz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic19ff182469 [] [] }} ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-z9lbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:47.376 [INFO][6511] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-z9lbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:49.133 [INFO][6606] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" HandleID="k8s-pod-network.93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Workload="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:49.133 [INFO][6606] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" HandleID="k8s-pod-network.93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Workload="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00028e670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6576c69f97-z9lbz", "timestamp":"2026-01-20 01:58:49.133153036 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:49.133 [INFO][6606] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:57.574 [INFO][6606] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:57.575 [INFO][6606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:58.087 [INFO][6606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" host="localhost" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:58.916 [INFO][6606] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:59.408 [INFO][6606] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:59.563 [INFO][6606] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:59.708 [INFO][6606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:59.708 [INFO][6606] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" host="localhost" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:58:59.868 [INFO][6606] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0 Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:59:00.162 [INFO][6606] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" host="localhost" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:59:00.461 [INFO][6606] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" host="localhost" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:59:00.461 [INFO][6606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" host="localhost" Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:59:00.461 [INFO][6606] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:59:01.817803 containerd[1641]: 2026-01-20 01:59:00.461 [INFO][6606] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" HandleID="k8s-pod-network.93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Workload="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" Jan 20 01:59:01.821686 containerd[1641]: 2026-01-20 01:59:00.584 [INFO][6511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-z9lbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0", GenerateName:"calico-apiserver-6576c69f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 54, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6576c69f97", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6576c69f97-z9lbz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic19ff182469", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:59:01.821686 containerd[1641]: 2026-01-20 01:59:00.584 [INFO][6511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-z9lbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" Jan 20 01:59:01.821686 containerd[1641]: 2026-01-20 01:59:00.584 [INFO][6511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic19ff182469 ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-z9lbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" Jan 20 01:59:01.821686 containerd[1641]: 2026-01-20 01:59:01.047 [INFO][6511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-z9lbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" Jan 20 01:59:01.821686 containerd[1641]: 2026-01-20 01:59:01.047 [INFO][6511] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-z9lbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0", GenerateName:"calico-apiserver-6576c69f97-", Namespace:"calico-apiserver", SelfLink:"", UID:"e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0", ResourceVersion:"1220", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 54, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6576c69f97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0", Pod:"calico-apiserver-6576c69f97-z9lbz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic19ff182469", MAC:"3a:9c:11:e7:d8:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:59:01.821686 containerd[1641]: 2026-01-20 01:59:01.586 [INFO][6511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" Namespace="calico-apiserver" Pod="calico-apiserver-6576c69f97-z9lbz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6576c69f97--z9lbz-eth0" Jan 20 01:59:02.022000 audit[7075]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=7075 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:02.032869 kernel: kauditd_printk_skb: 62 callbacks suppressed Jan 20 01:59:02.033028 kernel: audit: type=1325 audit(1768874342.022:678): table=filter:123 family=2 entries=20 op=nft_register_rule pid=7075 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:02.118404 kernel: audit: type=1334 audit(1768874342.095:679): prog-id=222 op=LOAD Jan 20 01:59:02.095000 audit: BPF prog-id=222 op=LOAD Jan 20 01:59:02.095000 audit[6896]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb42d1980 a2=94 a3=1 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.190207 kernel: audit: type=1300 audit(1768874342.095:679): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcb42d1980 a2=94 a3=1 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.095000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.211074 kernel: audit: type=1327 audit(1768874342.095:679): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.095000 audit: BPF prog-id=222 op=UNLOAD Jan 20 01:59:02.270471 kernel: audit: type=1334 audit(1768874342.095:680): prog-id=222 op=UNLOAD Jan 20 01:59:02.270611 kernel: audit: type=1300 audit(1768874342.095:680): arch=c000003e syscall=3 
success=yes exit=0 a0=4 a1=7ffcb42d1980 a2=94 a3=1 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.095000 audit[6896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcb42d1980 a2=94 a3=1 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.095000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.303833 kernel: audit: type=1327 audit(1768874342.095:680): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.341031 kernel: audit: type=1334 audit(1768874342.111:681): prog-id=223 op=LOAD Jan 20 01:59:02.111000 audit: BPF prog-id=223 op=LOAD Jan 20 01:59:02.111000 audit[6896]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb42d1970 a2=94 a3=4 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.446436 kernel: audit: type=1300 audit(1768874342.111:681): arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb42d1970 a2=94 a3=4 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.471508 kernel: audit: type=1327 audit(1768874342.111:681): proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.111000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.111000 audit: BPF prog-id=223 op=UNLOAD Jan 20 01:59:02.111000 audit[6896]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=5 a1=7ffcb42d1970 a2=0 a3=4 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.111000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.112000 audit: BPF prog-id=224 op=LOAD Jan 20 01:59:02.112000 audit[6896]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcb42d17d0 a2=94 a3=5 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.112000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.112000 audit: BPF prog-id=224 op=UNLOAD Jan 20 01:59:02.112000 audit[6896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcb42d17d0 a2=0 a3=5 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.112000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.112000 audit: BPF prog-id=225 op=LOAD Jan 20 01:59:02.112000 audit[6896]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb42d19f0 a2=94 a3=6 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.112000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.112000 audit: BPF prog-id=225 op=UNLOAD Jan 20 01:59:02.112000 audit[6896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcb42d19f0 a2=0 a3=6 items=0 ppid=6477 
pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.112000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.112000 audit: BPF prog-id=226 op=LOAD Jan 20 01:59:02.112000 audit[6896]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcb42d11a0 a2=94 a3=88 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.112000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.113000 audit: BPF prog-id=227 op=LOAD Jan 20 01:59:02.113000 audit[6896]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcb42d1020 a2=94 a3=2 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.113000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.113000 audit: BPF prog-id=227 op=UNLOAD Jan 20 01:59:02.113000 audit[6896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcb42d1050 a2=0 a3=7ffcb42d1150 items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.113000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.114000 audit: BPF prog-id=226 op=UNLOAD Jan 20 01:59:02.114000 audit[6896]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=11488d10 a2=0 a3=9436286965429a6b items=0 ppid=6477 pid=6896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.114000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 01:59:02.022000 audit[7075]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcb041bed0 a2=0 a3=7ffcb041bebc items=0 ppid=3158 pid=7075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:02.228000 audit[7075]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=7075 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:02.228000 audit[7075]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcb041bed0 a2=0 a3=0 items=0 ppid=3158 pid=7075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:02.302000 audit: BPF prog-id=228 op=LOAD Jan 20 01:59:02.302000 audit[7099]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc261abac0 a2=98 a3=1999999999999999 items=0 ppid=6477 pid=7099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.302000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:59:02.302000 audit: BPF prog-id=228 op=UNLOAD Jan 20 01:59:02.302000 audit[7099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc261aba90 a3=0 items=0 ppid=6477 pid=7099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:59:02.302000 audit: BPF prog-id=229 op=LOAD Jan 20 01:59:02.302000 audit[7099]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc261ab9a0 a2=94 a3=ffff items=0 ppid=6477 pid=7099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.302000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:59:02.311000 audit: BPF prog-id=229 op=UNLOAD Jan 20 01:59:02.311000 audit[7099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc261ab9a0 a2=94 a3=ffff items=0 ppid=6477 pid=7099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.311000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:59:02.311000 audit: BPF prog-id=230 op=LOAD Jan 20 01:59:02.311000 audit[7099]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc261ab9e0 a2=94 a3=7ffc261abbc0 items=0 ppid=6477 pid=7099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.311000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:59:02.464000 audit: BPF prog-id=231 op=LOAD Jan 20 01:59:02.508000 audit: BPF prog-id=232 op=LOAD Jan 20 01:59:02.508000 audit[7058]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=7033 pid=7058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346261643063306465393831353536656638626432643861663833 Jan 20 01:59:02.508000 audit: BPF prog-id=232 op=UNLOAD Jan 20 01:59:02.508000 audit[7058]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7033 pid=7058 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346261643063306465393831353536656638626432643861663833 Jan 20 01:59:02.508000 audit: BPF prog-id=233 op=LOAD Jan 20 01:59:02.508000 audit[7058]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=7033 pid=7058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346261643063306465393831353536656638626432643861663833 Jan 20 01:59:02.508000 audit: BPF prog-id=234 op=LOAD Jan 20 01:59:02.508000 audit[7058]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=7033 pid=7058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346261643063306465393831353536656638626432643861663833 Jan 20 01:59:02.508000 audit: BPF prog-id=234 op=UNLOAD Jan 20 01:59:02.508000 audit[7058]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 
items=0 ppid=7033 pid=7058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346261643063306465393831353536656638626432643861663833 Jan 20 01:59:02.508000 audit: BPF prog-id=233 op=UNLOAD Jan 20 01:59:02.508000 audit[7058]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7033 pid=7058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346261643063306465393831353536656638626432643861663833 Jan 20 01:59:02.508000 audit: BPF prog-id=235 op=LOAD Jan 20 01:59:02.508000 audit[7058]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=7033 pid=7058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346261643063306465393831353536656638626432643861663833 Jan 20 01:59:02.525203 containerd[1641]: time="2026-01-20T01:59:02.480631444Z" level=info msg="connecting to shim 
93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0" address="unix:///run/containerd/s/c26b14d4c55d5f716ed926372f063fa61a65260bdf6c38034fdda4d398e765ae" namespace=k8s.io protocol=ttrpc version=3 Jan 20 01:59:02.562153 kubelet[3041]: E0120 01:59:02.551270 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:02.561000 audit: BPF prog-id=230 op=UNLOAD Jan 20 01:59:02.561000 audit[7099]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc261ab9e0 a2=94 a3=7ffc261abbc0 items=0 ppid=6477 pid=7099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:02.561000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 01:59:02.610597 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:59:02.919397 systemd-networkd[1543]: calic19ff182469: Gained IPv6LL Jan 20 01:59:02.982773 containerd[1641]: time="2026-01-20T01:59:02.975003322Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 20 01:59:03.641000 audit: BPF prog-id=236 op=LOAD Jan 20 01:59:03.661000 audit: BPF prog-id=237 op=LOAD Jan 20 01:59:03.661000 audit[7035]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000214238 a2=98 a3=0 items=0 ppid=7012 pid=7035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:03.661000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376664343566343465643463623732616330616336363163626338 Jan 20 01:59:03.681000 audit: BPF prog-id=237 op=UNLOAD Jan 20 01:59:03.681000 audit[7035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7012 pid=7035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:03.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376664343566343465643463623732616330616336363163626338 Jan 20 01:59:03.684000 audit: BPF prog-id=238 op=LOAD Jan 20 01:59:03.684000 audit[7035]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000214488 a2=98 a3=0 items=0 ppid=7012 pid=7035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:03.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376664343566343465643463623732616330616336363163626338 Jan 20 01:59:03.684000 audit: BPF prog-id=239 op=LOAD Jan 20 01:59:03.684000 audit[7035]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000214218 a2=98 a3=0 items=0 ppid=7012 pid=7035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:59:03.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376664343566343465643463623732616330616336363163626338 Jan 20 01:59:03.684000 audit: BPF prog-id=239 op=UNLOAD Jan 20 01:59:03.684000 audit[7035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=7012 pid=7035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:03.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376664343566343465643463623732616330616336363163626338 Jan 20 01:59:03.684000 audit: BPF prog-id=238 op=UNLOAD Jan 20 01:59:03.684000 audit[7035]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7012 pid=7035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:03.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376664343566343465643463623732616330616336363163626338 Jan 20 01:59:03.684000 audit: BPF prog-id=240 op=LOAD Jan 20 01:59:03.684000 audit[7035]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002146e8 a2=98 a3=0 items=0 ppid=7012 pid=7035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:03.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035376664343566343465643463623732616330616336363163626338 Jan 20 01:59:03.815064 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:59:04.184583 systemd[1]: Started cri-containerd-93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0.scope - libcontainer container 93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0. Jan 20 01:59:04.328000 audit[7162]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=7162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:04.328000 audit[7162]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeef354680 a2=0 a3=7ffeef35466c items=0 ppid=3158 pid=7162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:04.328000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:04.520657 kubelet[3041]: E0120 01:59:04.434147 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:04.525269 containerd[1641]: time="2026-01-20T01:59:04.416328222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ddv57,Uid:b4e4578c-79c2-452b-9829-4499e381b357,Namespace:kube-system,Attempt:0,} returns sandbox id \"e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08\"" Jan 20 01:59:04.632620 containerd[1641]: 
time="2026-01-20T01:59:04.632570668Z" level=info msg="CreateContainer within sandbox \"e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:59:04.673000 audit[7162]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=7162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:04.673000 audit[7162]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeef354680 a2=0 a3=0 items=0 ppid=3158 pid=7162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:04.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:04.856000 audit: BPF prog-id=241 op=LOAD Jan 20 01:59:05.016398 containerd[1641]: time="2026-01-20T01:59:05.009119776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776d4dc5d4-8w9l5,Uid:57304ae8-4142-4837-ab19-941e654eb081,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b\"" Jan 20 01:59:05.054000 audit: BPF prog-id=242 op=LOAD Jan 20 01:59:05.054000 audit[7122]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001be238 a2=98 a3=0 items=0 ppid=7108 pid=7122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:05.054000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393635633736363635316262366461363135386662316238316433 Jan 20 01:59:05.147000 audit: BPF prog-id=242 
op=UNLOAD Jan 20 01:59:05.147000 audit[7122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=7108 pid=7122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:05.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393635633736363635316262366461363135386662316238316433 Jan 20 01:59:05.147000 audit: BPF prog-id=243 op=LOAD Jan 20 01:59:05.147000 audit[7122]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001be488 a2=98 a3=0 items=0 ppid=7108 pid=7122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:05.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393635633736363635316262366461363135386662316238316433 Jan 20 01:59:05.147000 audit: BPF prog-id=244 op=LOAD Jan 20 01:59:05.147000 audit[7122]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001be218 a2=98 a3=0 items=0 ppid=7108 pid=7122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:05.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393635633736363635316262366461363135386662316238316433 Jan 
20 01:59:05.147000 audit: BPF prog-id=244 op=UNLOAD Jan 20 01:59:05.147000 audit[7122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=7108 pid=7122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:05.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393635633736363635316262366461363135386662316238316433 Jan 20 01:59:05.147000 audit: BPF prog-id=243 op=UNLOAD Jan 20 01:59:05.147000 audit[7122]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=7108 pid=7122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:05.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393635633736363635316262366461363135386662316238316433 Jan 20 01:59:05.147000 audit: BPF prog-id=245 op=LOAD Jan 20 01:59:05.147000 audit[7122]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001be6e8 a2=98 a3=0 items=0 ppid=7108 pid=7122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:05.147000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393635633736363635316262366461363135386662316238316433 Jan 20 01:59:05.211239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075602559.mount: Deactivated successfully. Jan 20 01:59:05.437434 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 20 01:59:05.636878 containerd[1641]: time="2026-01-20T01:59:05.612020559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:05.935543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560191800.mount: Deactivated successfully. Jan 20 01:59:05.986733 containerd[1641]: time="2026-01-20T01:59:05.973593972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:06.115196 containerd[1641]: time="2026-01-20T01:59:06.098935557Z" level=info msg="Container 3cf302fbabda5f7d1880e1ffa2c502c1238285066db4fabd8e43e0c5ad6f9f86: CDI devices from CRI Config.CDIDevices: []" Jan 20 01:59:06.130413 containerd[1641]: time="2026-01-20T01:59:06.129755984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:06.130413 containerd[1641]: time="2026-01-20T01:59:06.130137840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:06.161177 kubelet[3041]: E0120 01:59:06.138588 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:06.161177 kubelet[3041]: E0120 01:59:06.145723 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:06.161177 kubelet[3041]: E0120 01:59:06.146027 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:06.161177 kubelet[3041]: E0120 01:59:06.146081 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:59:06.162849 containerd[1641]: time="2026-01-20T01:59:06.162809129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:59:06.267970 containerd[1641]: time="2026-01-20T01:59:06.237215879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-pxsrr,Uid:738fa74e-ddb6-4c59-8db5-d8c8658e06b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e\"" Jan 20 01:59:06.538682 containerd[1641]: time="2026-01-20T01:59:06.536472021Z" 
level=info msg="CreateContainer within sandbox \"e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3cf302fbabda5f7d1880e1ffa2c502c1238285066db4fabd8e43e0c5ad6f9f86\"" Jan 20 01:59:06.548740 containerd[1641]: time="2026-01-20T01:59:06.548528759Z" level=info msg="StartContainer for \"3cf302fbabda5f7d1880e1ffa2c502c1238285066db4fabd8e43e0c5ad6f9f86\"" Jan 20 01:59:06.555895 systemd-networkd[1543]: vxlan.calico: Link UP Jan 20 01:59:06.555903 systemd-networkd[1543]: vxlan.calico: Gained carrier Jan 20 01:59:06.584071 containerd[1641]: time="2026-01-20T01:59:06.579646186Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:06.621833 containerd[1641]: time="2026-01-20T01:59:06.602459184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:06.662138 containerd[1641]: time="2026-01-20T01:59:06.662011679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:59:06.663251 kubelet[3041]: E0120 01:59:06.663164 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:59:06.666034 kubelet[3041]: E0120 01:59:06.665635 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:59:06.669552 kubelet[3041]: E0120 01:59:06.669516 3041 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:06.671232 containerd[1641]: time="2026-01-20T01:59:06.671197519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:59:06.673212 containerd[1641]: time="2026-01-20T01:59:06.673181176Z" level=info msg="connecting to shim 3cf302fbabda5f7d1880e1ffa2c502c1238285066db4fabd8e43e0c5ad6f9f86" address="unix:///run/containerd/s/3083ae1a54f5f8e90775daa2b61246f856f986ddde2c8c55735f932e4d255877" protocol=ttrpc version=3 Jan 20 01:59:06.682694 containerd[1641]: time="2026-01-20T01:59:06.682527621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6576c69f97-z9lbz,Uid:e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0\"" Jan 20 01:59:06.838244 containerd[1641]: time="2026-01-20T01:59:06.838112692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:06.879790 containerd[1641]: time="2026-01-20T01:59:06.879721502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:06.880206 containerd[1641]: time="2026-01-20T01:59:06.880105091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:59:06.901527 kubelet[3041]: E0120 01:59:06.894156 3041 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:59:06.901527 kubelet[3041]: E0120 01:59:06.894241 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:59:06.901527 kubelet[3041]: E0120 01:59:06.894811 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:06.901527 kubelet[3041]: E0120 01:59:06.894862 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:59:06.901884 containerd[1641]: time="2026-01-20T01:59:06.896039384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:59:07.272958 containerd[1641]: time="2026-01-20T01:59:07.272200840Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:07.425721 containerd[1641]: time="2026-01-20T01:59:07.398278406Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:07.425721 containerd[1641]: time="2026-01-20T01:59:07.398427731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:59:07.464323 kubelet[3041]: E0120 01:59:07.426230 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:59:07.464323 kubelet[3041]: E0120 01:59:07.426302 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:59:07.464323 kubelet[3041]: E0120 01:59:07.426608 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:07.477637 kubelet[3041]: E0120 01:59:07.473789 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:59:07.488837 containerd[1641]: time="2026-01-20T01:59:07.485325529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:07.605800 systemd[1]: Started cri-containerd-3cf302fbabda5f7d1880e1ffa2c502c1238285066db4fabd8e43e0c5ad6f9f86.scope - libcontainer container 3cf302fbabda5f7d1880e1ffa2c502c1238285066db4fabd8e43e0c5ad6f9f86. Jan 20 01:59:07.620231 kubelet[3041]: E0120 01:59:07.607585 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:59:07.620231 kubelet[3041]: E0120 01:59:07.612426 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:59:07.786003 containerd[1641]: time="2026-01-20T01:59:07.785875898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:07.790000 audit: BPF prog-id=246 op=LOAD Jan 20 01:59:07.799889 containerd[1641]: time="2026-01-20T01:59:07.799827945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:07.800147 containerd[1641]: time="2026-01-20T01:59:07.800120356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:07.800683 kubelet[3041]: E0120 01:59:07.800560 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:07.801025 kubelet[3041]: E0120 01:59:07.800896 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:07.801602 kubelet[3041]: E0120 01:59:07.801529 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:07.802128 kubelet[3041]: E0120 01:59:07.801911 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:59:07.805794 kernel: kauditd_printk_skb: 122 callbacks suppressed Jan 20 01:59:07.805933 kernel: audit: type=1334 audit(1768874347.790:724): prog-id=246 op=LOAD Jan 20 01:59:07.814000 audit: BPF prog-id=247 op=LOAD Jan 20 01:59:07.834473 kernel: audit: type=1334 audit(1768874347.814:725): prog-id=247 op=LOAD Jan 20 01:59:07.814000 audit[7209]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016a238 a2=98 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.920991 kernel: audit: type=1300 audit(1768874347.814:725): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016a238 a2=98 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663330326662616264613566376431383830653166666132633530 Jan 20 01:59:08.098952 kernel: audit: type=1327 audit(1768874347.814:725): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663330326662616264613566376431383830653166666132633530 Jan 20 01:59:08.099154 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Jan 20 01:59:08.099202 kernel: audit: type=1334 audit(1768874347.832:726): prog-id=247 op=UNLOAD Jan 20 01:59:08.115568 kernel: audit: type=1300 audit(1768874347.832:726): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.115691 kernel: audit: type=1327 audit(1768874347.832:726): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663330326662616264613566376431383830653166666132633530 Jan 20 01:59:08.115729 kernel: audit: type=1334 audit(1768874347.832:727): prog-id=248 op=LOAD Jan 20 01:59:08.115770 kernel: audit: type=1300 audit(1768874347.832:727): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016a488 a2=98 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.832000 audit: BPF prog-id=247 op=UNLOAD Jan 20 01:59:07.832000 audit[7209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.832000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663330326662616264613566376431383830653166666132633530 Jan 20 01:59:07.832000 audit: BPF prog-id=248 op=LOAD Jan 20 01:59:07.832000 audit[7209]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016a488 a2=98 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663330326662616264613566376431383830653166666132633530 Jan 20 01:59:07.832000 audit: BPF prog-id=249 op=LOAD Jan 20 01:59:07.832000 audit[7209]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00016a218 a2=98 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663330326662616264613566376431383830653166666132633530 Jan 20 01:59:07.832000 audit: BPF prog-id=249 op=UNLOAD Jan 20 01:59:07.832000 audit[7209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:59:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663330326662616264613566376431383830653166666132633530 Jan 20 01:59:07.832000 audit: BPF prog-id=248 op=UNLOAD Jan 20 01:59:07.832000 audit[7209]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663330326662616264613566376431383830653166666132633530 Jan 20 01:59:07.832000 audit: BPF prog-id=250 op=LOAD Jan 20 01:59:07.832000 audit[7209]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016a6e8 a2=98 a3=0 items=0 ppid=7033 pid=7209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663330326662616264613566376431383830653166666132633530 Jan 20 01:59:07.970000 audit: BPF prog-id=251 op=LOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf9a1a6b0 a2=98 a3=0 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:07.970000 audit: BPF prog-id=251 op=UNLOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcf9a1a680 a3=0 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:07.970000 audit: BPF prog-id=252 op=LOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf9a1a4c0 a2=94 a3=54428f items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:07.970000 audit: BPF prog-id=252 op=UNLOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcf9a1a4c0 a2=94 a3=54428f items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:07.970000 audit: BPF prog-id=253 op=LOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcf9a1a4f0 a2=94 a3=2 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:07.970000 audit: BPF prog-id=253 op=UNLOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcf9a1a4f0 a2=0 a3=2 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:07.970000 audit: BPF prog-id=254 op=LOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcf9a1a2a0 a2=94 a3=4 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:07.970000 audit: BPF prog-id=254 op=UNLOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcf9a1a2a0 a2=94 a3=4 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:07.970000 audit: BPF prog-id=255 op=LOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcf9a1a3a0 a2=94 a3=7ffcf9a1a520 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:07.970000 audit: BPF prog-id=255 op=UNLOAD Jan 20 01:59:07.970000 audit[7238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcf9a1a3a0 a2=0 a3=7ffcf9a1a520 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:07.970000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:08.319000 audit: BPF prog-id=256 op=UNLOAD Jan 20 01:59:08.319000 audit[7238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcf9a19ad0 a2=0 a3=2 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.319000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:08.319000 audit: BPF prog-id=257 op=LOAD Jan 20 01:59:08.319000 audit[7238]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcf9a19bd0 a2=94 a3=30 items=0 ppid=6477 pid=7238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.319000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 01:59:08.706902 systemd-networkd[1543]: vxlan.calico: Gained IPv6LL Jan 20 01:59:08.733328 kubelet[3041]: E0120 01:59:08.732681 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:59:08.733328 kubelet[3041]: E0120 01:59:08.732869 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:59:08.736726 kubelet[3041]: E0120 01:59:08.736617 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:59:08.746000 audit: BPF prog-id=258 op=LOAD Jan 20 01:59:08.746000 audit[7250]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff9bcb6e20 a2=98 a3=0 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.746000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:08.783000 audit: BPF prog-id=258 op=UNLOAD Jan 20 
01:59:08.783000 audit[7250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff9bcb6df0 a3=0 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.783000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:08.783000 audit: BPF prog-id=259 op=LOAD Jan 20 01:59:08.783000 audit[7250]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9bcb6c10 a2=94 a3=54428f items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.783000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:08.783000 audit: BPF prog-id=259 op=UNLOAD Jan 20 01:59:08.783000 audit[7250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9bcb6c10 a2=94 a3=54428f items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.783000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:08.783000 audit: BPF prog-id=260 op=LOAD Jan 20 01:59:08.783000 audit[7250]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9bcb6c40 a2=94 a3=2 items=0 ppid=6477 pid=7250 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.783000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:08.783000 audit: BPF prog-id=260 op=UNLOAD Jan 20 01:59:08.783000 audit[7250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9bcb6c40 a2=0 a3=2 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.783000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:08.800000 audit[7249]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=7249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:08.800000 audit[7249]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6922c150 a2=0 a3=7ffd6922c13c items=0 ppid=3158 pid=7249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.800000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:08.836000 audit[7249]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=7249 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:08.836000 audit[7249]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd6922c150 a2=0 a3=0 items=0 
ppid=3158 pid=7249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:08.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:08.917329 containerd[1641]: time="2026-01-20T01:59:08.917170543Z" level=info msg="StartContainer for \"3cf302fbabda5f7d1880e1ffa2c502c1238285066db4fabd8e43e0c5ad6f9f86\" returns successfully" Jan 20 01:59:09.883425 kubelet[3041]: E0120 01:59:09.883316 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:10.035000 audit[7267]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=7267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:10.035000 audit[7267]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff9fefcec0 a2=0 a3=7fff9fefceac items=0 ppid=3158 pid=7267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:10.035000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:10.067000 audit[7267]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=7267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:10.067000 audit[7267]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff9fefcec0 a2=0 a3=0 items=0 ppid=3158 pid=7267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
01:59:10.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:10.192475 kubelet[3041]: I0120 01:59:10.184467 3041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ddv57" podStartSLOduration=335.184445844 podStartE2EDuration="5m35.184445844s" podCreationTimestamp="2026-01-20 01:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:59:10.180590154 +0000 UTC m=+336.185279050" watchObservedRunningTime="2026-01-20 01:59:10.184445844 +0000 UTC m=+336.189134740" Jan 20 01:59:10.743897 containerd[1641]: time="2026-01-20T01:59:10.743387923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:10.902490 kubelet[3041]: E0120 01:59:10.902447 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:10.967274 containerd[1641]: time="2026-01-20T01:59:10.967135190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:11.020665 containerd[1641]: time="2026-01-20T01:59:11.005508758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:11.020665 containerd[1641]: time="2026-01-20T01:59:11.005685264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:11.020948 kubelet[3041]: E0120 01:59:11.014928 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:11.020948 kubelet[3041]: E0120 01:59:11.014973 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:11.020948 kubelet[3041]: E0120 01:59:11.015053 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:11.020948 kubelet[3041]: E0120 01:59:11.015092 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:59:11.400000 audit[7269]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=7269 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:11.400000 audit[7269]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1f6b9080 a2=0 a3=7ffc1f6b906c items=0 ppid=3158 pid=7269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:11.414000 audit[7269]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=7269 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:11.414000 audit[7269]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc1f6b9080 a2=0 a3=7ffc1f6b906c items=0 ppid=3158 pid=7269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:11.733000 audit: BPF prog-id=261 op=LOAD Jan 20 01:59:11.733000 audit[7250]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff9bcb6b00 a2=94 a3=1 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.733000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.734000 audit: BPF prog-id=261 op=UNLOAD Jan 20 01:59:11.734000 audit[7250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff9bcb6b00 a2=94 a3=1 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.734000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.766000 audit: BPF prog-id=262 op=LOAD Jan 20 01:59:11.766000 audit[7250]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9bcb6af0 a2=94 a3=4 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.766000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.767000 audit: BPF prog-id=262 op=UNLOAD Jan 20 01:59:11.767000 audit[7250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff9bcb6af0 a2=0 a3=4 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.767000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.768000 audit: BPF prog-id=263 op=LOAD Jan 20 01:59:11.768000 audit[7250]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff9bcb6950 a2=94 a3=5 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.768000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.768000 audit: BPF prog-id=263 op=UNLOAD Jan 20 01:59:11.768000 audit[7250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff9bcb6950 a2=0 a3=5 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.768000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.768000 audit: BPF prog-id=264 op=LOAD Jan 20 01:59:11.768000 audit[7250]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9bcb6b70 a2=94 a3=6 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.768000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.769000 audit: BPF prog-id=264 op=UNLOAD Jan 20 01:59:11.769000 audit[7250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff9bcb6b70 a2=0 a3=6 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.769000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.769000 audit: BPF prog-id=265 op=LOAD Jan 20 01:59:11.769000 audit[7250]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff9bcb6320 a2=94 a3=88 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.769000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.770000 audit: BPF prog-id=266 op=LOAD Jan 20 01:59:11.770000 audit[7250]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff9bcb61a0 a2=94 a3=2 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.770000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.770000 audit: BPF prog-id=266 op=UNLOAD Jan 20 01:59:11.770000 audit[7250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff9bcb61d0 a2=0 a3=7fff9bcb62d0 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.770000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.771000 audit: BPF prog-id=265 op=UNLOAD Jan 20 01:59:11.771000 audit[7250]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=21f6bd10 a2=0 a3=e14aed5f1f27d4f2 items=0 ppid=6477 pid=7250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.771000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 01:59:11.839000 audit: BPF prog-id=257 op=UNLOAD Jan 20 01:59:11.839000 audit[6477]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000466980 a2=0 a3=0 items=0 ppid=6447 pid=6477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:11.839000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 01:59:11.918108 kubelet[3041]: E0120 01:59:11.918031 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:12.541823 containerd[1641]: time="2026-01-20T01:59:12.541411670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:59:12.592179 kubelet[3041]: E0120 01:59:12.592139 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:12.804524 containerd[1641]: 
time="2026-01-20T01:59:12.801453576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:12.844651 containerd[1641]: time="2026-01-20T01:59:12.844384060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:59:12.844651 containerd[1641]: time="2026-01-20T01:59:12.844558072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:12.875069 kubelet[3041]: E0120 01:59:12.874800 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:59:12.875069 kubelet[3041]: E0120 01:59:12.874868 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:59:12.875069 kubelet[3041]: E0120 01:59:12.874966 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:12.875069 kubelet[3041]: E0120 
01:59:12.875008 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:59:13.117711 kernel: kauditd_printk_skb: 126 callbacks suppressed Jan 20 01:59:13.117878 kernel: audit: type=1325 audit(1768874353.094:769): table=nat:133 family=2 entries=15 op=nft_register_chain pid=7303 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:59:13.094000 audit[7303]: NETFILTER_CFG table=nat:133 family=2 entries=15 op=nft_register_chain pid=7303 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:59:13.094000 audit[7303]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcb2e72b70 a2=0 a3=7ffcb2e72b5c items=0 ppid=6477 pid=7303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:13.221404 kernel: audit: type=1300 audit(1768874353.094:769): arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcb2e72b70 a2=0 a3=7ffcb2e72b5c items=0 ppid=6477 pid=7303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:13.094000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:59:13.301087 kernel: audit: type=1327 audit(1768874353.094:769): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:59:13.335000 audit[7308]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=7308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:13.417287 kernel: audit: type=1325 audit(1768874353.335:770): table=filter:134 family=2 entries=14 op=nft_register_rule pid=7308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:13.514461 kernel: audit: type=1300 audit(1768874353.335:770): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc48ef92c0 a2=0 a3=7ffc48ef92ac items=0 ppid=3158 pid=7308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:13.335000 audit[7308]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc48ef92c0 a2=0 a3=7ffc48ef92ac items=0 ppid=3158 pid=7308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:13.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:13.527671 kernel: audit: type=1327 audit(1768874353.335:770): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:13.678467 kernel: audit: type=1325 audit(1768874353.575:771): table=mangle:135 family=2 entries=16 op=nft_register_chain pid=7300 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:59:13.684187 kernel: audit: type=1300 audit(1768874353.575:771): arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff21be7d50 a2=0 a3=7fff21be7d3c items=0 ppid=6477 pid=7300 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:13.575000 audit[7300]: NETFILTER_CFG table=mangle:135 family=2 entries=16 op=nft_register_chain pid=7300 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:59:13.575000 audit[7300]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff21be7d50 a2=0 a3=7fff21be7d3c items=0 ppid=6477 pid=7300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:13.688424 containerd[1641]: time="2026-01-20T01:59:13.640625648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:59:13.575000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:59:13.783107 kernel: audit: type=1327 audit(1768874353.575:771): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:59:14.031000 audit[7308]: NETFILTER_CFG table=nat:136 family=2 entries=56 op=nft_register_chain pid=7308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:14.081307 containerd[1641]: time="2026-01-20T01:59:14.071823867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:14.081490 kernel: audit: type=1325 audit(1768874354.031:772): table=nat:136 family=2 entries=56 op=nft_register_chain pid=7308 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 01:59:14.093160 containerd[1641]: time="2026-01-20T01:59:14.092981936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:59:14.093160 containerd[1641]: time="2026-01-20T01:59:14.093114462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:14.031000 audit[7308]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc48ef92c0 a2=0 a3=7ffc48ef92ac items=0 ppid=3158 pid=7308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:14.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 01:59:14.100931 kubelet[3041]: E0120 01:59:14.095451 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:59:14.100931 kubelet[3041]: E0120 01:59:14.095522 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:59:14.100931 kubelet[3041]: E0120 01:59:14.095659 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Jan 20 01:59:14.113000 audit[7301]: NETFILTER_CFG table=raw:137 family=2 entries=21 op=nft_register_chain pid=7301 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:59:14.113000 audit[7301]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd137d5b10 a2=0 a3=7ffd137d5afc items=0 ppid=6477 pid=7301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:14.113000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:59:14.165650 containerd[1641]: time="2026-01-20T01:59:14.138388870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:59:14.188000 audit[7306]: NETFILTER_CFG table=filter:138 family=2 entries=350 op=nft_register_chain pid=7306 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 01:59:14.188000 audit[7306]: SYSCALL arch=c000003e syscall=46 success=yes exit=209604 a0=3 a1=7ffe41ad4aa0 a2=0 a3=56247c82e000 items=0 ppid=6477 pid=7306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 01:59:14.188000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 01:59:14.290804 containerd[1641]: time="2026-01-20T01:59:14.289312361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:14.322022 containerd[1641]: time="2026-01-20T01:59:14.318463455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:59:14.322022 containerd[1641]: time="2026-01-20T01:59:14.318634151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:14.322225 kubelet[3041]: E0120 01:59:14.319535 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:14.329733 kubelet[3041]: E0120 01:59:14.326718 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:14.329733 kubelet[3041]: E0120 01:59:14.326871 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:14.329733 kubelet[3041]: E0120 01:59:14.326924 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 01:59:14.808731 containerd[1641]: time="2026-01-20T01:59:14.805478269Z" level=info msg="container event discarded" container=9cecbf8e2b3431c17fb1931fb5c6534e92f0b71854a4b68842f398ffd4fa118b type=CONTAINER_CREATED_EVENT Jan 20 01:59:15.849253 containerd[1641]: time="2026-01-20T01:59:15.848663736Z" level=info msg="container event discarded" container=9cecbf8e2b3431c17fb1931fb5c6534e92f0b71854a4b68842f398ffd4fa118b type=CONTAINER_STARTED_EVENT Jan 20 01:59:19.616323 containerd[1641]: time="2026-01-20T01:59:19.610749378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:59:19.675935 kubelet[3041]: E0120 01:59:19.665910 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:59:19.831672 containerd[1641]: 
time="2026-01-20T01:59:19.829239523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:19.843721 containerd[1641]: time="2026-01-20T01:59:19.843654334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:19.843952 containerd[1641]: time="2026-01-20T01:59:19.843808095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:59:19.884989 kubelet[3041]: E0120 01:59:19.876524 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:59:19.884989 kubelet[3041]: E0120 01:59:19.876637 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:59:19.884989 kubelet[3041]: E0120 01:59:19.878043 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:19.884989 kubelet[3041]: E0120 01:59:19.878094 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:59:20.589511 containerd[1641]: time="2026-01-20T01:59:20.589459729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:21.062584 containerd[1641]: time="2026-01-20T01:59:21.062232445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:21.080824 containerd[1641]: time="2026-01-20T01:59:21.080624177Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:21.080824 containerd[1641]: time="2026-01-20T01:59:21.080753637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:21.083063 kubelet[3041]: E0120 01:59:21.082700 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:21.083063 kubelet[3041]: E0120 01:59:21.082781 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:21.083063 kubelet[3041]: E0120 01:59:21.082864 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
calico-apiserver start failed in pod calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:21.083063 kubelet[3041]: E0120 01:59:21.082905 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:59:22.553986 containerd[1641]: time="2026-01-20T01:59:22.553936191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:22.676713 containerd[1641]: time="2026-01-20T01:59:22.675143959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:22.677464 containerd[1641]: time="2026-01-20T01:59:22.677055067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:22.677464 containerd[1641]: time="2026-01-20T01:59:22.677393943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:22.679808 kubelet[3041]: E0120 01:59:22.677963 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:22.679808 kubelet[3041]: E0120 01:59:22.678027 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:22.679808 kubelet[3041]: E0120 01:59:22.678138 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:22.679808 kubelet[3041]: E0120 01:59:22.678181 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:59:24.556268 kubelet[3041]: E0120 01:59:24.551976 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:59:25.575575 
kubelet[3041]: E0120 01:59:25.575068 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 01:59:27.549158 kubelet[3041]: E0120 01:59:27.547270 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:27.571079 kubelet[3041]: E0120 01:59:27.564845 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:59:31.550138 kubelet[3041]: E0120 01:59:31.550073 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:59:31.559039 containerd[1641]: time="2026-01-20T01:59:31.551911673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:59:31.769719 containerd[1641]: time="2026-01-20T01:59:31.765982219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:31.774629 containerd[1641]: time="2026-01-20T01:59:31.772929710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:59:31.774629 containerd[1641]: time="2026-01-20T01:59:31.773461786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:31.775314 kubelet[3041]: E0120 01:59:31.775273 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:59:31.780730 kubelet[3041]: E0120 01:59:31.780676 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:59:31.780994 kubelet[3041]: E0120 01:59:31.780928 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi 
start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:31.804257 containerd[1641]: time="2026-01-20T01:59:31.804046107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:59:32.023038 containerd[1641]: time="2026-01-20T01:59:32.022920746Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:32.031085 containerd[1641]: time="2026-01-20T01:59:32.030621384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:59:32.031085 containerd[1641]: time="2026-01-20T01:59:32.030779707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:32.034450 kubelet[3041]: E0120 01:59:32.033054 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:59:32.034450 kubelet[3041]: E0120 01:59:32.033132 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:59:32.034450 kubelet[3041]: E0120 01:59:32.033221 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:32.034450 kubelet[3041]: E0120 01:59:32.033271 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:59:32.548617 kubelet[3041]: E0120 01:59:32.545782 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:59:35.206889 kubelet[3041]: E0120 01:59:35.202877 3041 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:35.570254 containerd[1641]: time="2026-01-20T01:59:35.570201158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:35.865914 containerd[1641]: time="2026-01-20T01:59:35.863564236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:35.901857 containerd[1641]: time="2026-01-20T01:59:35.899963814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:35.901857 containerd[1641]: time="2026-01-20T01:59:35.900104505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:35.902222 kubelet[3041]: E0120 01:59:35.900536 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:35.902222 kubelet[3041]: E0120 01:59:35.900766 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:35.902222 kubelet[3041]: E0120 01:59:35.901005 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:35.902222 kubelet[3041]: E0120 01:59:35.901110 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:59:37.557402 kubelet[3041]: E0120 01:59:37.556021 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:59:38.546788 kubelet[3041]: E0120 01:59:38.545088 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:39.564635 containerd[1641]: time="2026-01-20T01:59:39.560941790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:59:39.739622 containerd[1641]: time="2026-01-20T01:59:39.737725676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:39.780606 containerd[1641]: time="2026-01-20T01:59:39.769011453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:59:39.780606 containerd[1641]: time="2026-01-20T01:59:39.769140693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:39.780841 kubelet[3041]: E0120 01:59:39.776233 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:59:39.780841 kubelet[3041]: E0120 01:59:39.776288 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:59:39.780841 kubelet[3041]: E0120 01:59:39.776563 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:39.800316 containerd[1641]: time="2026-01-20T01:59:39.796689264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:59:40.010886 containerd[1641]: time="2026-01-20T01:59:40.004686275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:40.077089 containerd[1641]: time="2026-01-20T01:59:40.074879339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:59:40.077089 containerd[1641]: time="2026-01-20T01:59:40.079466962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:40.110779 kubelet[3041]: E0120 01:59:40.110212 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:40.110779 kubelet[3041]: E0120 01:59:40.110428 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:59:40.110779 kubelet[3041]: E0120 01:59:40.110739 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:40.111229 kubelet[3041]: E0120 01:59:40.110868 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed 
to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 01:59:41.599587 containerd[1641]: time="2026-01-20T01:59:41.597548191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:59:41.802884 containerd[1641]: time="2026-01-20T01:59:41.802811254Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:41.836417 containerd[1641]: time="2026-01-20T01:59:41.836220243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:59:41.838874 containerd[1641]: time="2026-01-20T01:59:41.838655947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:41.842400 kubelet[3041]: E0120 01:59:41.841854 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:59:41.842400 kubelet[3041]: E0120 01:59:41.841972 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:59:41.842400 kubelet[3041]: E0120 01:59:41.842076 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:41.842400 kubelet[3041]: E0120 01:59:41.842122 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:59:42.534970 kubelet[3041]: E0120 01:59:42.534854 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:42.581418 containerd[1641]: time="2026-01-20T01:59:42.581298568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:42.726387 containerd[1641]: time="2026-01-20T01:59:42.722843898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:42.778827 containerd[1641]: time="2026-01-20T01:59:42.778717350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:42.780589 
containerd[1641]: time="2026-01-20T01:59:42.778902944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:42.783642 kubelet[3041]: E0120 01:59:42.783543 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:42.783751 kubelet[3041]: E0120 01:59:42.783641 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:42.783806 kubelet[3041]: E0120 01:59:42.783751 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:42.783845 kubelet[3041]: E0120 01:59:42.783808 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:59:43.804946 update_engine[1626]: I20260120 01:59:43.804670 1626 prefs.cc:52] certificate-report-to-send-update not present in 
/var/lib/update_engine/prefs Jan 20 01:59:43.804946 update_engine[1626]: I20260120 01:59:43.804852 1626 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 20 01:59:43.867535 update_engine[1626]: I20260120 01:59:43.840817 1626 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 20 01:59:43.867535 update_engine[1626]: I20260120 01:59:43.864137 1626 omaha_request_params.cc:62] Current group set to beta Jan 20 01:59:43.867535 update_engine[1626]: I20260120 01:59:43.864522 1626 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 20 01:59:43.867535 update_engine[1626]: I20260120 01:59:43.864538 1626 update_attempter.cc:643] Scheduling an action processor start. Jan 20 01:59:43.867535 update_engine[1626]: I20260120 01:59:43.864564 1626 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 01:59:43.867535 update_engine[1626]: I20260120 01:59:43.864748 1626 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 20 01:59:43.867535 update_engine[1626]: I20260120 01:59:43.864879 1626 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 01:59:43.867535 update_engine[1626]: I20260120 01:59:43.867095 1626 omaha_request_action.cc:272] Request: Jan 20 01:59:43.867535 update_engine[1626]: Jan 20 01:59:43.867535 update_engine[1626]: Jan 20 01:59:43.867535 update_engine[1626]: Jan 20 01:59:43.867535 update_engine[1626]: Jan 20 01:59:43.867535 update_engine[1626]: Jan 20 01:59:43.867535 update_engine[1626]: Jan 20 01:59:43.867535 update_engine[1626]: Jan 20 01:59:43.867535 update_engine[1626]: Jan 20 01:59:43.867535 update_engine[1626]: I20260120 01:59:43.867112 1626 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:59:43.903566 update_engine[1626]: I20260120 01:59:43.903489 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:59:43.918741 update_engine[1626]: I20260120 01:59:43.918471 1626 
libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:59:43.993859 update_engine[1626]: E20260120 01:59:43.961100 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 01:59:43.993859 update_engine[1626]: I20260120 01:59:43.961288 1626 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 20 01:59:44.092863 locksmithd[1683]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 20 01:59:44.567913 kubelet[3041]: E0120 01:59:44.562852 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 01:59:44.593672 kubelet[3041]: E0120 01:59:44.591931 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 01:59:47.563407 containerd[1641]: time="2026-01-20T01:59:47.563181397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:59:47.721088 containerd[1641]: time="2026-01-20T01:59:47.720574935Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Jan 20 01:59:47.755190 containerd[1641]: time="2026-01-20T01:59:47.748860829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:59:47.755190 containerd[1641]: time="2026-01-20T01:59:47.748986582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:47.759396 kubelet[3041]: E0120 01:59:47.758969 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:59:47.759396 kubelet[3041]: E0120 01:59:47.759039 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:59:47.759396 kubelet[3041]: E0120 01:59:47.759131 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:47.759396 kubelet[3041]: E0120 01:59:47.759178 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 01:59:49.564204 kubelet[3041]: E0120 01:59:49.563829 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 01:59:51.560276 containerd[1641]: time="2026-01-20T01:59:51.553137380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:59:51.861042 containerd[1641]: time="2026-01-20T01:59:51.859455749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 01:59:51.878298 containerd[1641]: time="2026-01-20T01:59:51.878186310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:59:51.878560 containerd[1641]: time="2026-01-20T01:59:51.878436374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 01:59:51.883319 kubelet[3041]: E0120 01:59:51.880626 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:51.883319 kubelet[3041]: E0120 01:59:51.880691 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:59:51.883319 kubelet[3041]: E0120 01:59:51.880788 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:59:51.883319 kubelet[3041]: E0120 01:59:51.880839 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 01:59:52.535025 kubelet[3041]: E0120 01:59:52.532245 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" 
podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 01:59:54.615965 update_engine[1626]: I20260120 01:59:54.606936 1626 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 01:59:54.615965 update_engine[1626]: I20260120 01:59:54.607065 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 01:59:54.615965 update_engine[1626]: I20260120 01:59:54.607690 1626 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 01:59:54.636885 update_engine[1626]: E20260120 01:59:54.636775 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 01:59:54.637015 update_engine[1626]: I20260120 01:59:54.636943 1626 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 20 01:59:55.607242 kubelet[3041]: E0120 01:59:55.603725 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 01:59:55.607242 kubelet[3041]: E0120 01:59:55.607161 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 01:59:56.560827 kubelet[3041]: E0120 01:59:56.560433 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:00:00.592937 kubelet[3041]: E0120 02:00:00.587686 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:00:00.607607 kubelet[3041]: E0120 02:00:00.606759 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:00:02.613605 kubelet[3041]: E0120 02:00:02.606833 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:00:03.565263 kubelet[3041]: E0120 02:00:03.542134 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:04.581680 kubelet[3041]: E0120 02:00:04.581615 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:00:04.652714 update_engine[1626]: I20260120 02:00:04.604378 1626 libcurl_http_fetcher.cc:47] Starting/Resuming 
transfer Jan 20 02:00:04.652714 update_engine[1626]: I20260120 02:00:04.604545 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:00:04.652714 update_engine[1626]: I20260120 02:00:04.605067 1626 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:00:04.689806 update_engine[1626]: E20260120 02:00:04.662900 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:00:04.689806 update_engine[1626]: I20260120 02:00:04.663040 1626 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 20 02:00:04.924290 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:40956.service - OpenSSH per-connection server daemon (10.0.0.1:40956). Jan 20 02:00:04.975954 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 20 02:00:04.990571 kernel: audit: type=1130 audit(1768874404.923:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.48:22-10.0.0.1:40956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:04.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.48:22-10.0.0.1:40956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:06.219000 audit[7410]: USER_ACCT pid=7410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:06.245208 sshd-session[7410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:06.236000 audit[7410]: CRED_ACQ pid=7410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:06.327150 sshd[7410]: Accepted publickey for core from 10.0.0.1 port 40956 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:00:06.342662 kernel: audit: type=1101 audit(1768874406.219:776): pid=7410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:06.342795 kernel: audit: type=1103 audit(1768874406.236:777): pid=7410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:06.342858 kernel: audit: type=1006 audit(1768874406.236:778): pid=7410 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 20 02:00:06.351600 kernel: audit: type=1300 audit(1768874406.236:778): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc067d4ea0 a2=3 a3=0 items=0 ppid=1 pid=7410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:06.236000 audit[7410]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc067d4ea0 a2=3 a3=0 items=0 ppid=1 pid=7410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:06.372438 kernel: audit: type=1327 audit(1768874406.236:778): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:06.236000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:06.424529 systemd-logind[1624]: New session 10 of user core. Jan 20 02:00:06.442647 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 02:00:06.515241 kernel: audit: type=1105 audit(1768874406.475:779): pid=7410 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:06.475000 audit[7410]: USER_START pid=7410 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:06.515000 audit[7416]: CRED_ACQ pid=7416 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:06.588709 kernel: audit: type=1103 audit(1768874406.515:780): pid=7416 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:06.626314 kubelet[3041]: E0120 02:00:06.617298 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:00:07.728523 sshd[7416]: Connection closed by 10.0.0.1 port 40956 Jan 20 02:00:07.733684 sshd-session[7410]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:07.790000 audit[7410]: USER_END pid=7410 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:07.822009 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:40956.service: Deactivated successfully. Jan 20 02:00:07.840837 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 02:00:07.875152 systemd-logind[1624]: Session 10 logged out. Waiting for processes to exit. Jan 20 02:00:07.897021 systemd-logind[1624]: Removed session 10. 
Jan 20 02:00:07.936677 kernel: audit: type=1106 audit(1768874407.790:781): pid=7410 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:07.936817 kernel: audit: type=1104 audit(1768874407.790:782): pid=7410 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:07.790000 audit[7410]: CRED_DISP pid=7410 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:07.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.48:22-10.0.0.1:40956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:10.534968 kubelet[3041]: E0120 02:00:10.533858 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:00:10.534968 kubelet[3041]: E0120 02:00:10.533997 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:00:12.881440 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:40958.service - OpenSSH per-connection server daemon (10.0.0.1:40958). Jan 20 02:00:12.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.48:22-10.0.0.1:40958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:12.916806 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:12.920248 kernel: audit: type=1130 audit(1768874412.880:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.48:22-10.0.0.1:40958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:13.235020 sshd[7434]: Accepted publickey for core from 10.0.0.1 port 40958 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:00:13.233000 audit[7434]: USER_ACCT pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:13.287066 sshd-session[7434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:13.319544 kernel: audit: type=1101 audit(1768874413.233:785): pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:13.238000 audit[7434]: CRED_ACQ pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:13.366565 kernel: audit: type=1103 audit(1768874413.238:786): pid=7434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:13.383663 kernel: audit: type=1006 audit(1768874413.238:787): pid=7434 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 02:00:13.238000 audit[7434]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce7ac6800 a2=3 a3=0 items=0 ppid=1 pid=7434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:13.390295 systemd-logind[1624]: New session 11 of user core. Jan 20 02:00:13.238000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:13.429890 kernel: audit: type=1300 audit(1768874413.238:787): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce7ac6800 a2=3 a3=0 items=0 ppid=1 pid=7434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:13.430030 kernel: audit: type=1327 audit(1768874413.238:787): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:13.459053 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 02:00:13.579744 kernel: audit: type=1105 audit(1768874413.511:788): pid=7434 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:13.511000 audit[7434]: USER_START pid=7434 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:13.595882 kubelet[3041]: E0120 02:00:13.595017 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:00:13.530000 audit[7437]: CRED_ACQ pid=7437 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:13.635164 kernel: audit: type=1103 audit(1768874413.530:789): pid=7437 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:14.427899 sshd[7437]: Connection closed by 10.0.0.1 port 40958 Jan 20 02:00:14.439728 sshd-session[7434]: 
pam_unix(sshd:session): session closed for user core Jan 20 02:00:14.448000 audit[7434]: USER_END pid=7434 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:14.521475 kernel: audit: type=1106 audit(1768874414.448:790): pid=7434 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:14.519000 audit[7434]: CRED_DISP pid=7434 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:14.589665 kubelet[3041]: E0120 02:00:14.584185 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:00:14.545199 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:40958.service: Deactivated successfully. Jan 20 02:00:14.590884 systemd[1]: session-11.scope: Deactivated successfully. 
Jan 20 02:00:14.596316 update_engine[1626]: I20260120 02:00:14.594578 1626 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:00:14.600675 update_engine[1626]: I20260120 02:00:14.598781 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:00:14.602005 update_engine[1626]: I20260120 02:00:14.600975 1626 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 20 02:00:14.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.48:22-10.0.0.1:40958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:14.615426 kernel: audit: type=1104 audit(1768874414.519:791): pid=7434 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:14.615832 systemd-logind[1624]: Session 11 logged out. Waiting for processes to exit. Jan 20 02:00:14.637960 update_engine[1626]: E20260120 02:00:14.627950 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:00:14.637960 update_engine[1626]: I20260120 02:00:14.628108 1626 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:00:14.637960 update_engine[1626]: I20260120 02:00:14.628129 1626 omaha_request_action.cc:617] Omaha request response: Jan 20 02:00:14.637960 update_engine[1626]: E20260120 02:00:14.628255 1626 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 20 02:00:14.649728 update_engine[1626]: I20260120 02:00:14.649259 1626 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 20 02:00:14.649728 update_engine[1626]: I20260120 02:00:14.649314 1626 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:00:14.649728 update_engine[1626]: I20260120 02:00:14.649324 1626 update_attempter.cc:306] Processing Done. Jan 20 02:00:14.670632 update_engine[1626]: E20260120 02:00:14.650707 1626 update_attempter.cc:619] Update failed. Jan 20 02:00:14.670632 update_engine[1626]: I20260120 02:00:14.650731 1626 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 20 02:00:14.670632 update_engine[1626]: I20260120 02:00:14.650741 1626 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 20 02:00:14.670632 update_engine[1626]: I20260120 02:00:14.650751 1626 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 20 02:00:14.670632 update_engine[1626]: I20260120 02:00:14.650839 1626 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 20 02:00:14.670632 update_engine[1626]: I20260120 02:00:14.650880 1626 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 20 02:00:14.670632 update_engine[1626]: I20260120 02:00:14.650890 1626 omaha_request_action.cc:272] Request: Jan 20 02:00:14.670632 update_engine[1626]: Jan 20 02:00:14.670632 update_engine[1626]: Jan 20 02:00:14.670632 update_engine[1626]: Jan 20 02:00:14.670632 update_engine[1626]: Jan 20 02:00:14.670632 update_engine[1626]: Jan 20 02:00:14.670632 update_engine[1626]: Jan 20 02:00:14.670632 update_engine[1626]: I20260120 02:00:14.650899 1626 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 20 02:00:14.670632 update_engine[1626]: I20260120 02:00:14.650935 1626 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 20 02:00:14.670632 update_engine[1626]: I20260120 02:00:14.667557 1626 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 20 02:00:14.654688 systemd-logind[1624]: Removed session 11. Jan 20 02:00:14.671710 locksmithd[1683]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 20 02:00:14.720777 update_engine[1626]: E20260120 02:00:14.695968 1626 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 20 02:00:14.720777 update_engine[1626]: I20260120 02:00:14.701028 1626 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 20 02:00:14.720777 update_engine[1626]: I20260120 02:00:14.709811 1626 omaha_request_action.cc:617] Omaha request response: Jan 20 02:00:14.720777 update_engine[1626]: I20260120 02:00:14.709864 1626 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:00:14.720777 update_engine[1626]: I20260120 02:00:14.709875 1626 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 20 02:00:14.720777 update_engine[1626]: I20260120 02:00:14.709883 1626 update_attempter.cc:306] Processing Done. Jan 20 02:00:14.720777 update_engine[1626]: I20260120 02:00:14.709896 1626 update_attempter.cc:310] Error event sent. 
Jan 20 02:00:14.720777 update_engine[1626]: I20260120 02:00:14.709916 1626 update_check_scheduler.cc:74] Next update check in 46m56s Jan 20 02:00:14.721470 locksmithd[1683]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 20 02:00:15.575237 kubelet[3041]: E0120 02:00:15.547522 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:00:16.528500 kubelet[3041]: E0120 02:00:16.527031 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:16.531876 kubelet[3041]: E0120 02:00:16.529545 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:00:19.545855 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:54146.service - OpenSSH per-connection server daemon (10.0.0.1:54146). 
Jan 20 02:00:19.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.48:22-10.0.0.1:54146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:19.575420 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:19.575523 kernel: audit: type=1130 audit(1768874419.543:793): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.48:22-10.0.0.1:54146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:20.166000 audit[7451]: USER_ACCT pid=7451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:20.173857 sshd[7451]: Accepted publickey for core from 10.0.0.1 port 54146 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:00:20.177722 sshd-session[7451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:20.166000 audit[7451]: CRED_ACQ pid=7451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:20.270070 systemd-logind[1624]: New session 12 of user core. 
Jan 20 02:00:20.295079 kernel: audit: type=1101 audit(1768874420.166:794): pid=7451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:20.295225 kernel: audit: type=1103 audit(1768874420.166:795): pid=7451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:20.295257 kernel: audit: type=1006 audit(1768874420.175:796): pid=7451 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 02:00:20.317921 kernel: audit: type=1300 audit(1768874420.175:796): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe5795e30 a2=3 a3=0 items=0 ppid=1 pid=7451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:20.175000 audit[7451]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe5795e30 a2=3 a3=0 items=0 ppid=1 pid=7451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:20.375541 kernel: audit: type=1327 audit(1768874420.175:796): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:20.175000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:20.397115 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 20 02:00:20.425000 audit[7451]: USER_START pid=7451 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:20.489876 kernel: audit: type=1105 audit(1768874420.425:797): pid=7451 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:20.473000 audit[7454]: CRED_ACQ pid=7454 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:20.541242 kernel: audit: type=1103 audit(1768874420.473:798): pid=7454 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:20.596656 containerd[1641]: time="2026-01-20T02:00:20.596154236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:00:21.227471 containerd[1641]: time="2026-01-20T02:00:21.223863277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:00:21.268743 containerd[1641]: time="2026-01-20T02:00:21.268679695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:00:21.269117 containerd[1641]: 
time="2026-01-20T02:00:21.268953304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:21.280263 kubelet[3041]: E0120 02:00:21.277903 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:00:21.280263 kubelet[3041]: E0120 02:00:21.277963 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:00:21.280263 kubelet[3041]: E0120 02:00:21.278062 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:21.288813 containerd[1641]: time="2026-01-20T02:00:21.284756077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:00:21.442592 sshd[7454]: Connection closed by 10.0.0.1 port 54146 Jan 20 02:00:21.452329 sshd-session[7451]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:21.476000 audit[7451]: USER_END pid=7451 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 
02:00:21.500130 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:54146.service: Deactivated successfully. Jan 20 02:00:21.519661 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 02:00:21.612034 kernel: audit: type=1106 audit(1768874421.476:799): pid=7451 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:21.612220 kernel: audit: type=1104 audit(1768874421.476:800): pid=7451 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:21.476000 audit[7451]: CRED_DISP pid=7451 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:21.577527 systemd-logind[1624]: Session 12 logged out. Waiting for processes to exit. Jan 20 02:00:21.583118 systemd-logind[1624]: Removed session 12. Jan 20 02:00:21.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.48:22-10.0.0.1:54146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:22.009942 containerd[1641]: time="2026-01-20T02:00:22.009681684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:00:22.015543 containerd[1641]: time="2026-01-20T02:00:22.014653938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:00:22.015543 containerd[1641]: time="2026-01-20T02:00:22.014813383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:22.015729 kubelet[3041]: E0120 02:00:22.015031 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:00:22.015729 kubelet[3041]: E0120 02:00:22.015107 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:00:22.015729 kubelet[3041]: E0120 02:00:22.015215 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:22.015729 
kubelet[3041]: E0120 02:00:22.015280 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:00:22.569200 containerd[1641]: time="2026-01-20T02:00:22.568674574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:00:23.176601 containerd[1641]: time="2026-01-20T02:00:23.165327263Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:00:23.184907 containerd[1641]: time="2026-01-20T02:00:23.178479624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:00:23.184907 containerd[1641]: time="2026-01-20T02:00:23.180522713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:23.185099 kubelet[3041]: E0120 02:00:23.181501 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:00:23.185099 kubelet[3041]: E0120 02:00:23.181555 3041 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:00:23.185099 kubelet[3041]: E0120 02:00:23.181641 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:23.199224 containerd[1641]: time="2026-01-20T02:00:23.198109639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:00:23.916758 containerd[1641]: time="2026-01-20T02:00:23.911775173Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:00:23.928157 containerd[1641]: time="2026-01-20T02:00:23.927908807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:00:23.928157 containerd[1641]: time="2026-01-20T02:00:23.928050019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:23.928928 kubelet[3041]: E0120 02:00:23.928883 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:00:23.929077 
kubelet[3041]: E0120 02:00:23.929053 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:00:23.929232 kubelet[3041]: E0120 02:00:23.929210 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:23.937875 kubelet[3041]: E0120 02:00:23.929318 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:00:24.600501 kubelet[3041]: E0120 02:00:24.590816 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:24.611499 containerd[1641]: time="2026-01-20T02:00:24.610967336Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:00:25.348129 containerd[1641]: time="2026-01-20T02:00:25.345775430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:00:25.386538 containerd[1641]: time="2026-01-20T02:00:25.384876611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:00:25.386538 containerd[1641]: time="2026-01-20T02:00:25.385036267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:25.386811 kubelet[3041]: E0120 02:00:25.386486 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:25.386811 kubelet[3041]: E0120 02:00:25.386541 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:25.386811 kubelet[3041]: E0120 02:00:25.386632 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:25.386811 kubelet[3041]: E0120 02:00:25.386677 
3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:00:26.503234 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:44460.service - OpenSSH per-connection server daemon (10.0.0.1:44460). Jan 20 02:00:26.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.48:22-10.0.0.1:44460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:26.513571 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:26.513705 kernel: audit: type=1130 audit(1768874426.502:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.48:22-10.0.0.1:44460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:26.567446 containerd[1641]: time="2026-01-20T02:00:26.567327766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:00:26.979134 sshd[7469]: Accepted publickey for core from 10.0.0.1 port 44460 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:00:26.976000 audit[7469]: USER_ACCT pid=7469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.014309 sshd-session[7469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:27.045244 kernel: audit: type=1101 audit(1768874426.976:803): pid=7469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.002000 audit[7469]: CRED_ACQ pid=7469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.137433 kernel: audit: type=1103 audit(1768874427.002:804): pid=7469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.137578 kernel: audit: type=1006 audit(1768874427.002:805): pid=7469 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 02:00:27.141942 containerd[1641]: time="2026-01-20T02:00:27.141418530Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Jan 20 02:00:27.145552 containerd[1641]: time="2026-01-20T02:00:27.145483131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:00:27.145678 containerd[1641]: time="2026-01-20T02:00:27.145593968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:27.147965 kubelet[3041]: E0120 02:00:27.147915 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:27.160497 kubelet[3041]: E0120 02:00:27.147978 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:27.160497 kubelet[3041]: E0120 02:00:27.148211 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:27.160497 kubelet[3041]: E0120 02:00:27.148262 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:00:27.160689 containerd[1641]: time="2026-01-20T02:00:27.159933853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:00:27.002000 audit[7469]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebc71faf0 a2=3 a3=0 items=0 ppid=1 pid=7469 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:27.002000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:27.218602 systemd-logind[1624]: New session 13 of user core. Jan 20 02:00:27.235555 kernel: audit: type=1300 audit(1768874427.002:805): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebc71faf0 a2=3 a3=0 items=0 ppid=1 pid=7469 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:27.235699 kernel: audit: type=1327 audit(1768874427.002:805): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:27.269685 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 02:00:27.292000 audit[7469]: USER_START pid=7469 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.374105 kernel: audit: type=1105 audit(1768874427.292:806): pid=7469 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.312000 audit[7474]: CRED_ACQ pid=7474 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.424040 kernel: audit: type=1103 audit(1768874427.312:807): pid=7474 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:27.590062 kubelet[3041]: E0120 02:00:27.580122 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:00:27.681955 containerd[1641]: time="2026-01-20T02:00:27.681659403Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 20 02:00:27.794933 containerd[1641]: time="2026-01-20T02:00:27.794860203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:00:27.798180 containerd[1641]: time="2026-01-20T02:00:27.795183745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:27.798565 kubelet[3041]: E0120 02:00:27.798522 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:00:27.798892 kubelet[3041]: E0120 02:00:27.798742 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:00:27.799293 kubelet[3041]: E0120 02:00:27.799263 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:27.809114 kubelet[3041]: E0120 02:00:27.808973 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:00:28.413146 sshd[7474]: Connection closed by 10.0.0.1 port 44460 Jan 20 02:00:28.420683 sshd-session[7469]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:28.431056 systemd-logind[1624]: Session 13 logged out. Waiting for processes to exit. Jan 20 02:00:28.421000 audit[7469]: USER_END pid=7469 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:28.442781 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:44460.service: Deactivated successfully. Jan 20 02:00:28.451865 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 02:00:28.459724 systemd-logind[1624]: Removed session 13. 
Jan 20 02:00:28.478240 kernel: audit: type=1106 audit(1768874428.421:808): pid=7469 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:28.422000 audit[7469]: CRED_DISP pid=7469 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:28.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.48:22-10.0.0.1:44460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:28.505924 kernel: audit: type=1104 audit(1768874428.422:809): pid=7469 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:29.540779 kubelet[3041]: E0120 02:00:29.540572 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:00:30.529979 kubelet[3041]: E0120 02:00:30.529676 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:33.530238 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:44462.service - OpenSSH per-connection server daemon (10.0.0.1:44462). Jan 20 02:00:33.575094 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:33.575258 kernel: audit: type=1130 audit(1768874433.534:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.48:22-10.0.0.1:44462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:33.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.48:22-10.0.0.1:44462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:34.137631 sshd[7491]: Accepted publickey for core from 10.0.0.1 port 44462 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:00:34.130000 audit[7491]: USER_ACCT pid=7491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.151783 sshd-session[7491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:34.200479 systemd-logind[1624]: New session 14 of user core. 
Jan 20 02:00:34.214090 kernel: audit: type=1101 audit(1768874434.130:812): pid=7491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.214205 kernel: audit: type=1103 audit(1768874434.149:813): pid=7491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.149000 audit[7491]: CRED_ACQ pid=7491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.286297 kernel: audit: type=1006 audit(1768874434.149:814): pid=7491 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 20 02:00:34.394582 kernel: audit: type=1300 audit(1768874434.149:814): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0bbab470 a2=3 a3=0 items=0 ppid=1 pid=7491 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:34.149000 audit[7491]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0bbab470 a2=3 a3=0 items=0 ppid=1 pid=7491 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:34.149000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:34.427863 kernel: audit: type=1327 
audit(1768874434.149:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:34.456242 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 02:00:34.507000 audit[7491]: USER_START pid=7491 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.573703 kubelet[3041]: E0120 02:00:34.563836 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:00:34.576245 kernel: audit: type=1105 audit(1768874434.507:815): pid=7491 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.525000 audit[7522]: CRED_ACQ pid=7522 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:34.652879 kernel: audit: type=1103 audit(1768874434.525:816): pid=7522 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:35.200461 containerd[1641]: time="2026-01-20T02:00:35.196942537Z" level=info msg="container event discarded" container=f02559d9fd2ef6c853ee8cf380a33c7c071fc14375171b36b57cba8f8caaf070 type=CONTAINER_CREATED_EVENT Jan 20 02:00:35.200461 containerd[1641]: time="2026-01-20T02:00:35.198515854Z" level=info msg="container event discarded" container=f02559d9fd2ef6c853ee8cf380a33c7c071fc14375171b36b57cba8f8caaf070 type=CONTAINER_STARTED_EVENT Jan 20 02:00:35.205171 sshd[7522]: Connection closed by 10.0.0.1 port 44462 Jan 20 02:00:35.206245 sshd-session[7491]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:35.214000 audit[7491]: USER_END pid=7491 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:35.232986 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:44462.service: Deactivated successfully. 
Jan 20 02:00:35.270521 kernel: audit: type=1106 audit(1768874435.214:817): pid=7491 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:35.270661 kernel: audit: type=1104 audit(1768874435.215:818): pid=7491 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:35.215000 audit[7491]: CRED_DISP pid=7491 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:35.266291 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 02:00:35.291748 systemd-logind[1624]: Session 14 logged out. Waiting for processes to exit. Jan 20 02:00:35.303196 systemd-logind[1624]: Removed session 14. Jan 20 02:00:35.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.48:22-10.0.0.1:44462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:36.041624 containerd[1641]: time="2026-01-20T02:00:36.041290485Z" level=info msg="container event discarded" container=311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490 type=CONTAINER_CREATED_EVENT Jan 20 02:00:36.042551 containerd[1641]: time="2026-01-20T02:00:36.041734320Z" level=info msg="container event discarded" container=311b47360d6a73768c8da502ef29a458c665ed5734cb95f42127046fd3fdf490 type=CONTAINER_STARTED_EVENT Jan 20 02:00:37.533201 kubelet[3041]: E0120 02:00:37.532615 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:37.541682 kubelet[3041]: E0120 02:00:37.541590 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:00:40.299554 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:40.307252 kernel: audit: type=1130 audit(1768874440.280:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.48:22-10.0.0.1:54762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 02:00:40.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.48:22-10.0.0.1:54762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:40.282153 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:54762.service - OpenSSH per-connection server daemon (10.0.0.1:54762). Jan 20 02:00:40.567121 kubelet[3041]: E0120 02:00:40.539923 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:00:40.567121 kubelet[3041]: E0120 02:00:40.540590 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:00:40.791000 audit[7562]: USER_ACCT pid=7562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.800584 sshd[7562]: Accepted publickey for 
core from 10.0.0.1 port 54762 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:00:40.827891 sshd-session[7562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:40.822000 audit[7562]: CRED_ACQ pid=7562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.922854 kernel: audit: type=1101 audit(1768874440.791:821): pid=7562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.922984 kernel: audit: type=1103 audit(1768874440.822:822): pid=7562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:40.923034 kernel: audit: type=1006 audit(1768874440.822:823): pid=7562 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 20 02:00:40.971893 kernel: audit: type=1300 audit(1768874440.822:823): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe70bd55e0 a2=3 a3=0 items=0 ppid=1 pid=7562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:40.822000 audit[7562]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe70bd55e0 a2=3 a3=0 items=0 ppid=1 pid=7562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:40.948325 systemd-logind[1624]: New session 15 of user core. Jan 20 02:00:40.822000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:41.023920 kernel: audit: type=1327 audit(1768874440.822:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:41.068278 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 02:00:41.094000 audit[7562]: USER_START pid=7562 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:41.162999 kernel: audit: type=1105 audit(1768874441.094:824): pid=7562 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:41.178000 audit[7565]: CRED_ACQ pid=7565 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:41.235771 kernel: audit: type=1103 audit(1768874441.178:825): pid=7565 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:41.588715 kubelet[3041]: E0120 02:00:41.588124 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:00:42.158543 sshd[7565]: Connection closed by 10.0.0.1 port 54762 Jan 20 02:00:42.167902 sshd-session[7562]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:42.170000 audit[7562]: USER_END pid=7562 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:42.195006 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:54762.service: Deactivated successfully. Jan 20 02:00:42.222191 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 02:00:42.237783 kernel: audit: type=1106 audit(1768874442.170:826): pid=7562 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:42.235086 systemd-logind[1624]: Session 15 logged out. Waiting for processes to exit. Jan 20 02:00:42.170000 audit[7562]: CRED_DISP pid=7562 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:42.276979 systemd-logind[1624]: Removed session 15. 
Jan 20 02:00:42.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.48:22-10.0.0.1:54762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:42.300454 kernel: audit: type=1104 audit(1768874442.170:827): pid=7562 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:42.538964 containerd[1641]: time="2026-01-20T02:00:42.538912352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:00:42.689227 containerd[1641]: time="2026-01-20T02:00:42.681161934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:00:42.699530 containerd[1641]: time="2026-01-20T02:00:42.697540704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:00:42.699530 containerd[1641]: time="2026-01-20T02:00:42.697661749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:42.699766 kubelet[3041]: E0120 02:00:42.697867 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:42.699766 kubelet[3041]: E0120 02:00:42.697953 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:00:42.703178 kubelet[3041]: E0120 02:00:42.701711 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:42.703178 kubelet[3041]: E0120 02:00:42.701758 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:00:42.710543 containerd[1641]: time="2026-01-20T02:00:42.707737716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:00:42.883725 containerd[1641]: time="2026-01-20T02:00:42.874666914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:00:42.918653 containerd[1641]: time="2026-01-20T02:00:42.912267597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:00:42.918653 containerd[1641]: time="2026-01-20T02:00:42.912505440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:00:42.918867 kubelet[3041]: E0120 
02:00:42.913914 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:00:42.918867 kubelet[3041]: E0120 02:00:42.913974 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:00:42.918867 kubelet[3041]: E0120 02:00:42.914072 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:00:42.918867 kubelet[3041]: E0120 02:00:42.914152 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:00:45.577708 kubelet[3041]: E0120 02:00:45.576185 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:45.599710 kubelet[3041]: E0120 02:00:45.599668 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:00:47.224940 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:37778.service - OpenSSH per-connection server daemon (10.0.0.1:37778). Jan 20 02:00:47.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.48:22-10.0.0.1:37778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:47.243910 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:47.244022 kernel: audit: type=1130 audit(1768874447.223:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.48:22-10.0.0.1:37778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:47.617000 audit[7583]: USER_ACCT pid=7583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.633775 sshd-session[7583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:47.652987 sshd[7583]: Accepted publickey for core from 10.0.0.1 port 37778 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:00:47.671584 kernel: audit: type=1101 audit(1768874447.617:830): pid=7583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.671718 containerd[1641]: time="2026-01-20T02:00:47.671498634Z" level=info msg="container event discarded" container=5e45d3ed4a94e86231a5c265cc3e445da3d61d87295d232ca23c870a05772771 type=CONTAINER_CREATED_EVENT Jan 20 
02:00:47.631000 audit[7583]: CRED_ACQ pid=7583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.675991 systemd-logind[1624]: New session 16 of user core. Jan 20 02:00:47.747447 kernel: audit: type=1103 audit(1768874447.631:831): pid=7583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.855133 kernel: audit: type=1006 audit(1768874447.631:832): pid=7583 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 02:00:47.855251 kernel: audit: type=1300 audit(1768874447.631:832): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd38cd3c60 a2=3 a3=0 items=0 ppid=1 pid=7583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:47.631000 audit[7583]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd38cd3c60 a2=3 a3=0 items=0 ppid=1 pid=7583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:47.802399 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 02:00:47.886459 kernel: audit: type=1327 audit(1768874447.631:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:47.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:47.947718 kernel: audit: type=1105 audit(1768874447.861:833): pid=7583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.861000 audit[7583]: USER_START pid=7583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:47.946000 audit[7586]: CRED_ACQ pid=7586 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:48.005495 kernel: audit: type=1103 audit(1768874447.946:834): pid=7586 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:48.667760 containerd[1641]: time="2026-01-20T02:00:48.664039400Z" level=info msg="container event discarded" container=5e45d3ed4a94e86231a5c265cc3e445da3d61d87295d232ca23c870a05772771 type=CONTAINER_STARTED_EVENT Jan 20 02:00:49.271557 sshd[7586]: Connection closed by 10.0.0.1 port 37778 Jan 20 02:00:49.282739 sshd-session[7583]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:49.299000 audit[7583]: USER_END pid=7583 uid=0 
auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:49.321851 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:37778.service: Deactivated successfully. Jan 20 02:00:49.333587 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 02:00:49.404450 kernel: audit: type=1106 audit(1768874449.299:835): pid=7583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:49.404611 kernel: audit: type=1104 audit(1768874449.299:836): pid=7583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:49.299000 audit[7583]: CRED_DISP pid=7583 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:49.355180 systemd-logind[1624]: Session 16 logged out. Waiting for processes to exit. Jan 20 02:00:49.394647 systemd-logind[1624]: Removed session 16. Jan 20 02:00:49.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.48:22-10.0.0.1:37778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:00:49.565648 kubelet[3041]: E0120 02:00:49.562169 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:00:51.562181 kubelet[3041]: E0120 02:00:51.552302 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:00:51.692499 containerd[1641]: time="2026-01-20T02:00:51.689547121Z" level=info msg="container event discarded" container=5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5 type=CONTAINER_CREATED_EVENT Jan 20 02:00:52.556323 kubelet[3041]: E0120 02:00:52.554689 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:00:53.141584 containerd[1641]: time="2026-01-20T02:00:53.134215558Z" level=info msg="container event discarded" container=5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5 type=CONTAINER_STARTED_EVENT Jan 20 02:00:54.193768 containerd[1641]: time="2026-01-20T02:00:54.193691943Z" level=info msg="container event discarded" container=5e3f129b9096e4ffd1009e9b8ef8d62a120e9af81122616c79725c0997e901b5 type=CONTAINER_STOPPED_EVENT Jan 20 02:00:54.324954 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:00:54.325097 kernel: audit: type=1130 audit(1768874454.314:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.48:22-10.0.0.1:37792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:54.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.48:22-10.0.0.1:37792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:54.316155 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:37792.service - OpenSSH per-connection server daemon (10.0.0.1:37792). 
Jan 20 02:00:54.566272 kubelet[3041]: E0120 02:00:54.566210 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:00:55.011000 audit[7603]: USER_ACCT pid=7603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:55.068725 sshd[7603]: Accepted publickey for core from 10.0.0.1 port 37792 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:00:55.037988 sshd-session[7603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:00:55.091917 kernel: audit: type=1101 audit(1768874455.011:839): pid=7603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:55.092078 kernel: audit: type=1103 audit(1768874455.026:840): pid=7603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:55.026000 audit[7603]: CRED_ACQ pid=7603 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:55.132788 systemd-logind[1624]: New session 17 of user core. Jan 20 02:00:55.200149 kernel: audit: type=1006 audit(1768874455.026:841): pid=7603 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 02:00:55.202825 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 02:00:55.026000 audit[7603]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde1341100 a2=3 a3=0 items=0 ppid=1 pid=7603 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:55.291142 kernel: audit: type=1300 audit(1768874455.026:841): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde1341100 a2=3 a3=0 items=0 ppid=1 pid=7603 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:00:55.291291 kernel: audit: type=1327 audit(1768874455.026:841): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:55.026000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:00:55.306085 kernel: audit: type=1105 audit(1768874455.273:842): pid=7603 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:55.273000 audit[7603]: USER_START pid=7603 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:55.300000 audit[7606]: CRED_ACQ pid=7606 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:55.437193 kernel: audit: type=1103 audit(1768874455.300:843): pid=7606 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:55.567528 kubelet[3041]: E0120 02:00:55.552863 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:00:55.567528 kubelet[3041]: E0120 02:00:55.564421 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:00:55.567528 kubelet[3041]: E0120 02:00:55.564326 
3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:00:56.607029 sshd[7606]: Connection closed by 10.0.0.1 port 37792 Jan 20 02:00:56.612425 sshd-session[7603]: pam_unix(sshd:session): session closed for user core Jan 20 02:00:56.624000 audit[7603]: USER_END pid=7603 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:56.708922 kernel: audit: type=1106 audit(1768874456.624:844): pid=7603 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:56.712647 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:37792.service: Deactivated successfully. Jan 20 02:00:56.739888 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 20 02:00:56.686000 audit[7603]: CRED_DISP pid=7603 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:56.813635 kernel: audit: type=1104 audit(1768874456.686:845): pid=7603 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:00:56.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.48:22-10.0.0.1:37792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:00:56.835933 systemd-logind[1624]: Session 17 logged out. Waiting for processes to exit. Jan 20 02:00:56.844415 systemd-logind[1624]: Removed session 17. Jan 20 02:00:59.531023 kubelet[3041]: E0120 02:00:59.530027 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:01.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.48:22-10.0.0.1:59076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:01.782859 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:59076.service - OpenSSH per-connection server daemon (10.0.0.1:59076). Jan 20 02:01:01.831958 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:01.832188 kernel: audit: type=1130 audit(1768874461.781:847): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.48:22-10.0.0.1:59076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:02.160000 audit[7623]: USER_ACCT pid=7623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:02.179153 sshd[7623]: Accepted publickey for core from 10.0.0.1 port 59076 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:01:02.200404 sshd-session[7623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:02.197000 audit[7623]: CRED_ACQ pid=7623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:02.234120 systemd-logind[1624]: New session 18 of user core. Jan 20 02:01:02.279429 kernel: audit: type=1101 audit(1768874462.160:848): pid=7623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:02.279556 kernel: audit: type=1103 audit(1768874462.197:849): pid=7623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:02.314561 kernel: audit: type=1006 audit(1768874462.197:850): pid=7623 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 20 02:01:02.314720 kernel: audit: type=1300 audit(1768874462.197:850): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc164d3e60 a2=3 a3=0 items=0 ppid=1 pid=7623 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:02.197000 audit[7623]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc164d3e60 a2=3 a3=0 items=0 ppid=1 pid=7623 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:02.398064 kernel: audit: type=1327 audit(1768874462.197:850): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:02.197000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:02.422117 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 02:01:02.471000 audit[7623]: USER_START pid=7623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:02.529813 kernel: audit: type=1105 audit(1768874462.471:851): pid=7623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:02.583647 kernel: audit: type=1103 audit(1768874462.497:852): pid=7626 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:02.497000 audit[7626]: CRED_ACQ pid=7626 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:03.317608 sshd[7626]: Connection closed by 10.0.0.1 port 59076 Jan 20 02:01:03.316826 sshd-session[7623]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:03.374909 kernel: audit: type=1106 audit(1768874463.333:853): pid=7623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:03.333000 audit[7623]: USER_END pid=7623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:03.391643 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:59076.service: Deactivated successfully. Jan 20 02:01:03.333000 audit[7623]: CRED_DISP pid=7623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:03.417716 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 02:01:03.436292 systemd-logind[1624]: Session 18 logged out. Waiting for processes to exit. Jan 20 02:01:03.443214 systemd-logind[1624]: Removed session 18. 
Jan 20 02:01:03.447675 kernel: audit: type=1104 audit(1768874463.333:854): pid=7623 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:03.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.48:22-10.0.0.1:59076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:03.546456 kubelet[3041]: E0120 02:01:03.546254 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:01:04.557209 kubelet[3041]: E0120 02:01:04.544460 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:01:06.539558 kubelet[3041]: E0120 02:01:06.539317 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:01:06.563610 kubelet[3041]: E0120 02:01:06.560327 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:01:07.560009 kubelet[3041]: E0120 02:01:07.558711 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:01:08.542923 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:08.543154 kernel: audit: type=1130 audit(1768874468.446:856): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.48:22-10.0.0.1:38322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:08.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.48:22-10.0.0.1:38322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:08.466951 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:38322.service - OpenSSH per-connection server daemon (10.0.0.1:38322). 
Jan 20 02:01:08.587235 kubelet[3041]: E0120 02:01:08.587135 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:01:08.667901 kubelet[3041]: E0120 02:01:08.667168 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:01:09.239000 audit[7668]: USER_ACCT pid=7668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:09.266233 sshd[7668]: Accepted publickey for core from 10.0.0.1 port 38322 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:01:09.290010 sshd-session[7668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:09.301765 kernel: audit: type=1101 audit(1768874469.239:857): pid=7668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:09.301898 kernel: audit: type=1103 audit(1768874469.282:858): pid=7668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:09.282000 audit[7668]: CRED_ACQ pid=7668 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:09.332192 systemd-logind[1624]: New session 19 of user core. Jan 20 02:01:09.366471 kernel: audit: type=1006 audit(1768874469.288:859): pid=7668 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 20 02:01:09.386122 kernel: audit: type=1300 audit(1768874469.288:859): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe33b03ee0 a2=3 a3=0 items=0 ppid=1 pid=7668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:09.288000 audit[7668]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe33b03ee0 a2=3 a3=0 items=0 ppid=1 pid=7668 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:09.412607 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 02:01:09.288000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:09.445171 kernel: audit: type=1327 audit(1768874469.288:859): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:09.445308 kernel: audit: type=1105 audit(1768874469.432:860): pid=7668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:09.432000 audit[7668]: USER_START pid=7668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:09.445000 audit[7671]: CRED_ACQ pid=7671 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:09.542262 kubelet[3041]: E0120 02:01:09.529829 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:09.620481 kernel: audit: type=1103 audit(1768874469.445:861): pid=7671 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:10.585643 sshd[7671]: Connection closed by 10.0.0.1 port 38322 Jan 20 02:01:10.585744 sshd-session[7668]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:10.602000 audit[7668]: USER_END pid=7668 
uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:10.627106 kernel: audit: type=1106 audit(1768874470.602:862): pid=7668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:10.628269 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:38322.service: Deactivated successfully. Jan 20 02:01:10.654837 kernel: audit: type=1104 audit(1768874470.602:863): pid=7668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:10.602000 audit[7668]: CRED_DISP pid=7668 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:10.647066 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 02:01:10.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.48:22-10.0.0.1:38322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:10.671967 systemd-logind[1624]: Session 19 logged out. Waiting for processes to exit. Jan 20 02:01:10.700825 systemd-logind[1624]: Removed session 19. 
Jan 20 02:01:15.763607 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:15.763826 kernel: audit: type=1130 audit(1768874475.680:865): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.48:22-10.0.0.1:47058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:15.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.48:22-10.0.0.1:47058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:15.683114 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:47058.service - OpenSSH per-connection server daemon (10.0.0.1:47058). Jan 20 02:01:16.285000 audit[7686]: USER_ACCT pid=7686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:16.329671 sshd-session[7686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:16.380171 sshd[7686]: Accepted publickey for core from 10.0.0.1 port 47058 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:01:16.391839 kernel: audit: type=1101 audit(1768874476.285:866): pid=7686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:16.391972 kernel: audit: type=1103 audit(1768874476.327:867): pid=7686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' 
Jan 20 02:01:16.327000 audit[7686]: CRED_ACQ pid=7686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:16.434187 systemd-logind[1624]: New session 20 of user core. Jan 20 02:01:16.505768 kernel: audit: type=1006 audit(1768874476.327:868): pid=7686 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 20 02:01:16.327000 audit[7686]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe050c6b40 a2=3 a3=0 items=0 ppid=1 pid=7686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:16.518920 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 02:01:16.327000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:16.586988 kernel: audit: type=1300 audit(1768874476.327:868): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe050c6b40 a2=3 a3=0 items=0 ppid=1 pid=7686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:16.587834 kernel: audit: type=1327 audit(1768874476.327:868): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:16.587895 kernel: audit: type=1105 audit(1768874476.556:869): pid=7686 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:16.556000 audit[7686]: USER_START pid=7686 
uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:16.674899 kernel: audit: type=1103 audit(1768874476.580:870): pid=7689 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:16.580000 audit[7689]: CRED_ACQ pid=7689 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:17.384796 sshd[7689]: Connection closed by 10.0.0.1 port 47058 Jan 20 02:01:17.382704 sshd-session[7686]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:17.402901 systemd-logind[1624]: Session 20 logged out. Waiting for processes to exit. Jan 20 02:01:17.406009 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:47058.service: Deactivated successfully. Jan 20 02:01:17.396000 audit[7686]: USER_END pid=7686 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:17.444933 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 20 02:01:17.396000 audit[7686]: CRED_DISP pid=7686 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:17.460079 kernel: audit: type=1106 audit(1768874477.396:871): pid=7686 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:17.460253 kernel: audit: type=1104 audit(1768874477.396:872): pid=7686 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:17.469938 systemd-logind[1624]: Removed session 20. Jan 20 02:01:17.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.48:22-10.0.0.1:47058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:18.537112 kubelet[3041]: E0120 02:01:18.533849 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:18.539614 kubelet[3041]: E0120 02:01:18.538837 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:01:19.230932 containerd[1641]: time="2026-01-20T02:01:19.230780206Z" level=info msg="container event discarded" container=5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904 type=CONTAINER_CREATED_EVENT Jan 20 02:01:19.539761 kubelet[3041]: E0120 02:01:19.534091 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:01:19.569912 kubelet[3041]: E0120 02:01:19.554745 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:01:19.569912 kubelet[3041]: E0120 02:01:19.554988 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:01:20.180898 containerd[1641]: time="2026-01-20T02:01:20.172782398Z" level=info msg="container event discarded" container=5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904 type=CONTAINER_STARTED_EVENT Jan 20 02:01:20.532144 kubelet[3041]: E0120 02:01:20.528801 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:01:21.627819 kubelet[3041]: E0120 02:01:21.626228 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:01:22.492966 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:47072.service - OpenSSH per-connection server daemon (10.0.0.1:47072). Jan 20 02:01:22.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.48:22-10.0.0.1:47072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:22.511937 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:22.512071 kernel: audit: type=1130 audit(1768874482.488:874): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.48:22-10.0.0.1:47072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:22.584573 kubelet[3041]: E0120 02:01:22.583192 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:01:23.289870 sshd[7704]: Accepted publickey for core from 10.0.0.1 port 47072 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:01:23.295834 sshd-session[7704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:23.284000 audit[7704]: USER_ACCT pid=7704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:23.355069 kernel: audit: type=1101 audit(1768874483.284:875): pid=7704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:23.355185 kernel: audit: type=1103 audit(1768874483.293:876): pid=7704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:23.293000 audit[7704]: CRED_ACQ pid=7704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:23.362872 systemd-logind[1624]: New session 21 of user core. Jan 20 02:01:23.443102 kernel: audit: type=1006 audit(1768874483.293:877): pid=7704 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 02:01:23.447504 kernel: audit: type=1300 audit(1768874483.293:877): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7802c3f0 a2=3 a3=0 items=0 ppid=1 pid=7704 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:23.293000 audit[7704]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7802c3f0 a2=3 a3=0 items=0 ppid=1 pid=7704 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:23.293000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:23.515413 kernel: audit: type=1327 audit(1768874483.293:877): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:23.535950 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 02:01:23.560000 audit[7704]: USER_START pid=7704 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:23.609899 kernel: audit: type=1105 audit(1768874483.560:878): pid=7704 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:23.610052 kernel: audit: type=1103 audit(1768874483.589:879): pid=7707 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:23.589000 audit[7707]: CRED_ACQ pid=7707 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:24.431917 sshd[7707]: Connection closed by 10.0.0.1 port 47072 Jan 20 02:01:24.436149 sshd-session[7704]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:24.444000 audit[7704]: USER_END pid=7704 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:24.526424 systemd-logind[1624]: Session 21 logged out. Waiting for processes to exit. 
Jan 20 02:01:24.527410 kernel: audit: type=1106 audit(1768874484.444:880): pid=7704 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:24.453000 audit[7704]: CRED_DISP pid=7704 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:24.554078 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:47072.service: Deactivated successfully. Jan 20 02:01:24.579974 kernel: audit: type=1104 audit(1768874484.453:881): pid=7704 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:24.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.48:22-10.0.0.1:47072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:24.602839 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 02:01:24.634564 systemd-logind[1624]: Removed session 21. Jan 20 02:01:26.922736 containerd[1641]: time="2026-01-20T02:01:26.922650854Z" level=info msg="container event discarded" container=5187a182b1bafc49db34a3a746f36cb8437ae0f4335fb0ca4ba251c97882c904 type=CONTAINER_STOPPED_EVENT Jan 20 02:01:29.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.48:22-10.0.0.1:44118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:29.546304 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:44118.service - OpenSSH per-connection server daemon (10.0.0.1:44118). Jan 20 02:01:29.571033 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:29.571169 kernel: audit: type=1130 audit(1768874489.544:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.48:22-10.0.0.1:44118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:30.184798 sshd[7724]: Accepted publickey for core from 10.0.0.1 port 44118 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:01:30.180000 audit[7724]: USER_ACCT pid=7724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:30.259299 kernel: audit: type=1101 audit(1768874490.180:884): pid=7724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:30.273000 audit[7724]: CRED_ACQ pid=7724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:30.282965 sshd-session[7724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:30.368471 systemd-logind[1624]: New session 22 of user core. 
Jan 20 02:01:30.375852 kernel: audit: type=1103 audit(1768874490.273:885): pid=7724 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:30.375949 kernel: audit: type=1006 audit(1768874490.273:886): pid=7724 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 02:01:30.387945 kernel: audit: type=1300 audit(1768874490.273:886): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc531077e0 a2=3 a3=0 items=0 ppid=1 pid=7724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:30.273000 audit[7724]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc531077e0 a2=3 a3=0 items=0 ppid=1 pid=7724 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:30.273000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:30.469563 kernel: audit: type=1327 audit(1768874490.273:886): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:30.502831 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 02:01:30.543483 kubelet[3041]: E0120 02:01:30.539535 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:01:30.584000 audit[7724]: USER_START pid=7724 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:30.612000 audit[7727]: CRED_ACQ pid=7727 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:30.733056 kernel: audit: type=1105 audit(1768874490.584:887): pid=7724 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:30.733270 kernel: audit: type=1103 audit(1768874490.612:888): pid=7727 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:31.543471 kubelet[3041]: E0120 02:01:31.541523 3041 pod_workers.go:1324] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:01:31.648582 sshd[7727]: Connection closed by 10.0.0.1 port 44118 Jan 20 02:01:31.669957 sshd-session[7724]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:31.695000 audit[7724]: USER_END pid=7724 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:31.833807 kernel: audit: type=1106 audit(1768874491.695:889): pid=7724 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:31.833971 kernel: audit: type=1104 audit(1768874491.801:890): pid=7724 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jan 20 02:01:31.801000 audit[7724]: CRED_DISP pid=7724 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:31.840917 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:44118.service: Deactivated successfully. Jan 20 02:01:31.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.48:22-10.0.0.1:44118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:31.925187 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 02:01:31.970186 systemd-logind[1624]: Session 22 logged out. Waiting for processes to exit. Jan 20 02:01:31.981975 systemd-logind[1624]: Removed session 22. Jan 20 02:01:32.540409 kubelet[3041]: E0120 02:01:32.537718 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:01:33.573527 kubelet[3041]: E0120 02:01:33.572974 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:01:34.578117 kubelet[3041]: E0120 02:01:34.578058 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:01:36.533321 kubelet[3041]: E0120 02:01:36.529626 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:01:36.567267 kubelet[3041]: E0120 02:01:36.565240 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:01:36.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.48:22-10.0.0.1:58350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:36.752189 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:58350.service - OpenSSH per-connection server daemon (10.0.0.1:58350). Jan 20 02:01:36.792247 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:36.792472 kernel: audit: type=1130 audit(1768874496.743:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.48:22-10.0.0.1:58350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:37.320850 sshd[7770]: Accepted publickey for core from 10.0.0.1 port 58350 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:01:37.319000 audit[7770]: USER_ACCT pid=7770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.393635 kernel: audit: type=1101 audit(1768874497.319:893): pid=7770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.354054 sshd-session[7770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:37.333000 audit[7770]: CRED_ACQ pid=7770 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.441824 kernel: audit: type=1103 audit(1768874497.333:894): pid=7770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.443035 systemd-logind[1624]: New session 23 of user core. Jan 20 02:01:37.480211 kernel: audit: type=1006 audit(1768874497.334:895): pid=7770 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 02:01:37.334000 audit[7770]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd45a40da0 a2=3 a3=0 items=0 ppid=1 pid=7770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:37.549283 kernel: audit: type=1300 audit(1768874497.334:895): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd45a40da0 a2=3 a3=0 items=0 ppid=1 pid=7770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:37.548081 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 02:01:37.549598 kubelet[3041]: E0120 02:01:37.545103 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:37.334000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:37.560417 kernel: audit: type=1327 audit(1768874497.334:895): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:37.570000 audit[7770]: USER_START pid=7770 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.590000 audit[7773]: CRED_ACQ pid=7773 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.643703 kernel: audit: type=1105 audit(1768874497.570:896): pid=7770 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:37.643850 kernel: audit: type=1103 audit(1768874497.590:897): pid=7773 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:38.859819 sshd[7773]: Connection closed by 10.0.0.1 port 58350 Jan 20 02:01:38.856000 audit[7770]: USER_END pid=7770 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:38.854815 sshd-session[7770]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:38.917001 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:58350.service: Deactivated successfully. Jan 20 02:01:38.921272 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 02:01:38.857000 audit[7770]: CRED_DISP pid=7770 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:38.958783 systemd-logind[1624]: Session 23 logged out. Waiting for processes to exit. Jan 20 02:01:38.973058 systemd-logind[1624]: Removed session 23. Jan 20 02:01:39.012918 kernel: audit: type=1106 audit(1768874498.856:898): pid=7770 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:39.013079 kernel: audit: type=1104 audit(1768874498.857:899): pid=7770 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:38.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.48:22-10.0.0.1:58350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:42.533986 kubelet[3041]: E0120 02:01:42.532811 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:42.549522 containerd[1641]: time="2026-01-20T02:01:42.536164391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:01:42.692541 containerd[1641]: time="2026-01-20T02:01:42.685070713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:42.692541 containerd[1641]: time="2026-01-20T02:01:42.688920342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:01:42.692541 containerd[1641]: time="2026-01-20T02:01:42.689050836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:42.692541 containerd[1641]: time="2026-01-20T02:01:42.690951240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:01:42.692878 kubelet[3041]: E0120 02:01:42.689475 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:01:42.692878 kubelet[3041]: E0120 02:01:42.689542 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:01:42.692878 kubelet[3041]: E0120 
02:01:42.689639 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:42.853557 containerd[1641]: time="2026-01-20T02:01:42.848195272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:42.854568 containerd[1641]: time="2026-01-20T02:01:42.854038971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:01:42.854568 containerd[1641]: time="2026-01-20T02:01:42.854142373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:42.859794 kubelet[3041]: E0120 02:01:42.854916 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:01:42.859794 kubelet[3041]: E0120 02:01:42.854977 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:01:42.859794 kubelet[3041]: E0120 02:01:42.855072 3041 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container whisker-backend start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:42.860032 kubelet[3041]: E0120 02:01:42.855130 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:01:43.552838 kubelet[3041]: E0120 02:01:43.550849 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:01:43.949993 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:43.950201 kernel: audit: type=1130 audit(1768874503.939:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@23-10.0.0.48:22-10.0.0.1:58354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:43.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.48:22-10.0.0.1:58354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:43.942631 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:58354.service - OpenSSH per-connection server daemon (10.0.0.1:58354). Jan 20 02:01:44.436000 audit[7792]: USER_ACCT pid=7792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:44.464596 sshd-session[7792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:44.480134 sshd[7792]: Accepted publickey for core from 10.0.0.1 port 58354 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:01:44.454000 audit[7792]: CRED_ACQ pid=7792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:44.547451 systemd-logind[1624]: New session 24 of user core. 
Jan 20 02:01:44.594877 kernel: audit: type=1101 audit(1768874504.436:902): pid=7792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:44.595026 kernel: audit: type=1103 audit(1768874504.454:903): pid=7792 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:44.595077 kernel: audit: type=1006 audit(1768874504.454:904): pid=7792 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 02:01:44.595472 kubelet[3041]: E0120 02:01:44.595389 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:01:44.599173 kubelet[3041]: E0120 02:01:44.597019 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" 
podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:01:44.454000 audit[7792]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffead290620 a2=3 a3=0 items=0 ppid=1 pid=7792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:44.715106 kernel: audit: type=1300 audit(1768874504.454:904): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffead290620 a2=3 a3=0 items=0 ppid=1 pid=7792 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:44.454000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:44.723502 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 02:01:44.766984 kernel: audit: type=1327 audit(1768874504.454:904): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:44.772000 audit[7792]: USER_START pid=7792 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:44.850560 kernel: audit: type=1105 audit(1768874504.772:905): pid=7792 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:44.820000 audit[7795]: CRED_ACQ pid=7795 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:44.967042 kernel: audit: type=1103 audit(1768874504.820:906): pid=7795 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:45.752441 sshd[7795]: Connection closed by 10.0.0.1 port 58354 Jan 20 02:01:45.751641 sshd-session[7792]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:45.764000 audit[7792]: USER_END pid=7792 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:45.772593 systemd-logind[1624]: Session 24 logged out. Waiting for processes to exit. Jan 20 02:01:45.775141 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:58354.service: Deactivated successfully. Jan 20 02:01:45.782181 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 02:01:45.791808 systemd-logind[1624]: Removed session 24. 
Jan 20 02:01:45.804813 kernel: audit: type=1106 audit(1768874505.764:907): pid=7792 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:45.765000 audit[7792]: CRED_DISP pid=7792 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:45.839205 kernel: audit: type=1104 audit(1768874505.765:908): pid=7792 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:45.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.48:22-10.0.0.1:58354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:46.550542 containerd[1641]: time="2026-01-20T02:01:46.543413114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:01:46.686810 containerd[1641]: time="2026-01-20T02:01:46.681541955Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:46.696181 containerd[1641]: time="2026-01-20T02:01:46.695977392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:01:46.696181 containerd[1641]: time="2026-01-20T02:01:46.696104287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:46.696596 kubelet[3041]: E0120 02:01:46.696551 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:01:46.697548 kubelet[3041]: E0120 02:01:46.697087 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:01:46.697548 kubelet[3041]: E0120 02:01:46.697184 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:46.707192 containerd[1641]: 
time="2026-01-20T02:01:46.706820337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:01:46.834550 containerd[1641]: time="2026-01-20T02:01:46.833437305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:46.854270 containerd[1641]: time="2026-01-20T02:01:46.852937306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:01:46.854270 containerd[1641]: time="2026-01-20T02:01:46.853086453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:46.854531 kubelet[3041]: E0120 02:01:46.853282 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:01:46.854531 kubelet[3041]: E0120 02:01:46.853424 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:01:46.854531 kubelet[3041]: E0120 02:01:46.853948 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:46.854531 kubelet[3041]: E0120 02:01:46.854057 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:01:47.583034 kubelet[3041]: E0120 02:01:47.582977 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:01:52.362080 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:38416.service - OpenSSH per-connection server daemon (10.0.0.1:38416). Jan 20 02:01:52.394085 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:01:52.394276 kernel: audit: type=1130 audit(1768874512.357:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.48:22-10.0.0.1:38416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:01:52.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.48:22-10.0.0.1:38416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:55.487417 kubelet[3041]: E0120 02:01:55.486006 3041 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.451s" Jan 20 02:01:55.507404 kubelet[3041]: E0120 02:01:55.490041 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:01:55.507583 containerd[1641]: time="2026-01-20T02:01:55.493424994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:01:55.517886 kubelet[3041]: E0120 02:01:55.513251 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:01:55.584403 containerd[1641]: time="2026-01-20T02:01:55.581660799Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:55.593061 containerd[1641]: 
time="2026-01-20T02:01:55.589462345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:01:55.593061 containerd[1641]: time="2026-01-20T02:01:55.589600803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:55.672827 kubelet[3041]: E0120 02:01:55.671565 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:01:55.672827 kubelet[3041]: E0120 02:01:55.671665 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:01:55.672827 kubelet[3041]: E0120 02:01:55.672645 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:55.693297 kubelet[3041]: E0120 02:01:55.687692 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:01:55.750899 containerd[1641]: time="2026-01-20T02:01:55.750665199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:01:55.814000 audit[7810]: USER_ACCT pid=7810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:55.870606 kernel: audit: type=1101 audit(1768874515.814:911): pid=7810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:55.870996 sshd[7810]: Accepted publickey for core from 10.0.0.1 port 38416 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:01:55.904000 audit[7810]: CRED_ACQ pid=7810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:55.997045 kernel: audit: type=1103 audit(1768874515.904:912): pid=7810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:55.990000 audit[7810]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff364a1e10 a2=3 a3=0 items=0 ppid=1 pid=7810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:56.057274 kernel: audit: type=1006 audit(1768874515.990:913): pid=7810 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 20 02:01:56.057461 kernel: audit: type=1300 audit(1768874515.990:913): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff364a1e10 a2=3 a3=0 items=0 ppid=1 pid=7810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:56.057506 kernel: audit: type=1327 audit(1768874515.990:913): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:55.990000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:56.034806 sshd-session[7810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:56.230405 systemd-logind[1624]: New session 25 of user core. Jan 20 02:01:56.286128 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 20 02:01:56.381568 kernel: audit: type=1105 audit(1768874516.341:914): pid=7810 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.341000 audit[7810]: USER_START pid=7810 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.446689 kernel: audit: type=1103 audit(1768874516.386:915): pid=7816 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.386000 audit[7816]: CRED_ACQ pid=7816 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.462896 containerd[1641]: time="2026-01-20T02:01:56.461219465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:01:56.467676 containerd[1641]: time="2026-01-20T02:01:56.464958301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:01:56.467676 containerd[1641]: time="2026-01-20T02:01:56.465080508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 
02:01:56.473188 kubelet[3041]: E0120 02:01:56.472473 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:01:56.473188 kubelet[3041]: E0120 02:01:56.472531 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:01:56.473188 kubelet[3041]: E0120 02:01:56.472719 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:56.489175 kubelet[3041]: E0120 02:01:56.480853 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:01:56.513128 containerd[1641]: time="2026-01-20T02:01:56.513066065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:01:56.651009 containerd[1641]: time="2026-01-20T02:01:56.647916202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 
02:01:56.656873 containerd[1641]: time="2026-01-20T02:01:56.656632535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:01:56.656873 containerd[1641]: time="2026-01-20T02:01:56.656793605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:01:56.657116 kubelet[3041]: E0120 02:01:56.656994 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:01:56.657116 kubelet[3041]: E0120 02:01:56.657096 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:01:56.657298 kubelet[3041]: E0120 02:01:56.657223 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:01:56.657454 kubelet[3041]: E0120 02:01:56.657302 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:01:56.783314 sshd[7816]: Connection closed by 10.0.0.1 port 38416 Jan 20 02:01:56.786263 sshd-session[7810]: pam_unix(sshd:session): session closed for user core Jan 20 02:01:56.789000 audit[7810]: USER_END pid=7810 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.815863 kernel: audit: type=1106 audit(1768874516.789:916): pid=7810 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.816008 kernel: audit: type=1104 audit(1768874516.789:917): pid=7810 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.789000 audit[7810]: CRED_DISP pid=7810 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:56.831628 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:38416.service: Deactivated successfully. 
Jan 20 02:01:56.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.48:22-10.0.0.1:38416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:56.839218 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 02:01:56.844695 systemd-logind[1624]: Session 25 logged out. Waiting for processes to exit. Jan 20 02:01:56.853992 systemd-logind[1624]: Removed session 25. Jan 20 02:01:56.858266 systemd[1]: Started sshd@25-10.0.0.48:22-10.0.0.1:37940.service - OpenSSH per-connection server daemon (10.0.0.1:37940). Jan 20 02:01:56.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.48:22-10.0.0.1:37940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:01:57.182000 audit[7837]: USER_ACCT pid=7837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:57.192557 sshd[7837]: Accepted publickey for core from 10.0.0.1 port 37940 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:01:57.190000 audit[7837]: CRED_ACQ pid=7837 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:57.190000 audit[7837]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff28030500 a2=3 a3=0 items=0 ppid=1 pid=7837 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:01:57.190000 
audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:01:57.199637 sshd-session[7837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:01:57.261166 systemd-logind[1624]: New session 26 of user core. Jan 20 02:01:57.441917 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 02:01:58.171917 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 02:01:58.176131 kernel: audit: type=1105 audit(1768874518.144:923): pid=7837 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:58.144000 audit[7837]: USER_START pid=7837 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:58.707000 audit[7840]: CRED_ACQ pid=7840 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:01:59.020947 kernel: audit: type=1103 audit(1768874518.707:924): pid=7840 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:01.301021 kubelet[3041]: E0120 02:02:01.299971 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:02:08.748858 systemd[1]: cri-containerd-6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6.scope: Deactivated successfully. Jan 20 02:02:08.776870 systemd[1]: cri-containerd-6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6.scope: Consumed 12.728s CPU time, 80.2M memory peak, 25.5M read from disk. Jan 20 02:02:08.825752 containerd[1641]: time="2026-01-20T02:02:08.825689785Z" level=info msg="received container exit event container_id:\"6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6\" id:\"6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6\" pid:5509 exit_status:1 exited_at:{seconds:1768874528 nanos:814461496}" Jan 20 02:02:08.921295 kernel: audit: type=1334 audit(1768874528.843:925): prog-id=173 op=UNLOAD Jan 20 02:02:08.921482 kernel: audit: type=1334 audit(1768874528.843:926): prog-id=177 op=UNLOAD Jan 20 02:02:08.843000 audit: BPF prog-id=173 op=UNLOAD Jan 20 02:02:08.843000 audit: BPF prog-id=177 op=UNLOAD Jan 20 02:02:09.078821 kubelet[3041]: E0120 02:02:09.078684 3041 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.998s" Jan 20 02:02:09.083087 kubelet[3041]: E0120 02:02:09.083046 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:02:09.084697 containerd[1641]: time="2026-01-20T02:02:09.084662955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:02:09.086105 kubelet[3041]: E0120 02:02:09.085971 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:09.103187 kubelet[3041]: E0120 02:02:09.102929 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:02:09.109761 kubelet[3041]: E0120 02:02:09.109568 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" 
for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:02:09.310062 systemd[1]: cri-containerd-641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499.scope: Deactivated successfully. Jan 20 02:02:09.340577 containerd[1641]: time="2026-01-20T02:02:09.330709167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:02:09.343405 containerd[1641]: time="2026-01-20T02:02:09.342489645Z" level=info msg="received container exit event container_id:\"641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499\" id:\"641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499\" pid:2857 exit_status:1 exited_at:{seconds:1768874529 nanos:334181578}" Jan 20 02:02:09.404000 audit: BPF prog-id=267 op=LOAD Jan 20 02:02:09.420542 kubelet[3041]: E0120 02:02:09.394010 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:02:09.420542 kubelet[3041]: E0120 02:02:09.394078 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:02:09.420542 kubelet[3041]: E0120 02:02:09.394611 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start 
failed in pod goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:02:09.420542 kubelet[3041]: E0120 02:02:09.394663 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:02:09.404576 systemd[1]: cri-containerd-641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499.scope: Consumed 16.516s CPU time, 34.9M memory peak, 10.3M read from disk. Jan 20 02:02:09.420847 containerd[1641]: time="2026-01-20T02:02:09.379740389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:02:09.420847 containerd[1641]: time="2026-01-20T02:02:09.379877295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:02:09.424530 kernel: audit: type=1334 audit(1768874529.404:927): prog-id=267 op=LOAD Jan 20 02:02:09.424594 kernel: audit: type=1334 audit(1768874529.404:928): prog-id=83 op=UNLOAD Jan 20 02:02:09.404000 audit: BPF prog-id=83 op=UNLOAD Jan 20 02:02:09.431000 audit: BPF prog-id=98 op=UNLOAD Jan 20 02:02:09.445062 kernel: audit: type=1334 audit(1768874529.431:929): prog-id=98 op=UNLOAD Jan 20 02:02:09.445766 kernel: audit: type=1334 audit(1768874529.431:930): 
prog-id=102 op=UNLOAD Jan 20 02:02:09.431000 audit: BPF prog-id=102 op=UNLOAD Jan 20 02:02:09.497622 kubelet[3041]: E0120 02:02:09.497566 3041 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice/cri-containerd-641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499.scope\": RecentStats: unable to find data in memory cache]" Jan 20 02:02:09.542423 sshd[7840]: Connection closed by 10.0.0.1 port 37940 Jan 20 02:02:09.537860 sshd-session[7837]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:09.565000 audit[7837]: USER_END pid=7837 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:09.618687 kernel: audit: type=1106 audit(1768874529.565:931): pid=7837 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:09.565000 audit[7837]: CRED_DISP pid=7837 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:09.684510 kernel: audit: type=1104 audit(1768874529.565:932): pid=7837 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:09.713537 systemd[1]: 
sshd@25-10.0.0.48:22-10.0.0.1:37940.service: Deactivated successfully. Jan 20 02:02:09.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.48:22-10.0.0.1:37940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:09.777223 kernel: audit: type=1131 audit(1768874529.710:933): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.48:22-10.0.0.1:37940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:09.742284 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 02:02:09.753322 systemd[1]: session-26.scope: Consumed 2.049s CPU time, 26.6M memory peak. Jan 20 02:02:09.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.48:22-10.0.0.1:53710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:09.841064 systemd-logind[1624]: Session 26 logged out. Waiting for processes to exit. Jan 20 02:02:09.844627 systemd[1]: Started sshd@26-10.0.0.48:22-10.0.0.1:53710.service - OpenSSH per-connection server daemon (10.0.0.1:53710). Jan 20 02:02:09.889289 systemd-logind[1624]: Removed session 26. Jan 20 02:02:09.908424 kernel: audit: type=1130 audit(1768874529.843:934): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.48:22-10.0.0.1:53710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:10.009888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6-rootfs.mount: Deactivated successfully. 
Jan 20 02:02:10.587569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499-rootfs.mount: Deactivated successfully. Jan 20 02:02:11.144000 audit[7899]: USER_ACCT pid=7899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:11.176550 sshd[7899]: Accepted publickey for core from 10.0.0.1 port 53710 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:02:11.174000 audit[7899]: CRED_ACQ pid=7899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:11.175000 audit[7899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd94089430 a2=3 a3=0 items=0 ppid=1 pid=7899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:11.175000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:11.176912 sshd-session[7899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:11.239464 kubelet[3041]: I0120 02:02:11.234683 3041 scope.go:117] "RemoveContainer" containerID="24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0" Jan 20 02:02:11.239464 kubelet[3041]: I0120 02:02:11.235278 3041 scope.go:117] "RemoveContainer" containerID="6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6" Jan 20 02:02:11.239464 kubelet[3041]: E0120 02:02:11.235431 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:11.247671 systemd-logind[1624]: New session 27 of user core. Jan 20 02:02:11.278281 kubelet[3041]: E0120 02:02:11.277503 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(5bbfee13ce9e07281eca876a0b8067f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="5bbfee13ce9e07281eca876a0b8067f2" Jan 20 02:02:11.282091 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 02:02:11.304170 containerd[1641]: time="2026-01-20T02:02:11.293742162Z" level=info msg="RemoveContainer for \"24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0\"" Jan 20 02:02:11.328000 audit[7899]: USER_START pid=7899 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:11.428000 audit[7919]: CRED_ACQ pid=7919 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:11.459290 kubelet[3041]: I0120 02:02:11.449876 3041 scope.go:117] "RemoveContainer" containerID="641b4acdaaa732a6d17daf03a1bb20793f76e4b65f442546c9e29c9ba50c4499" Jan 20 02:02:11.460590 kubelet[3041]: E0120 02:02:11.460302 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:11.620062 kubelet[3041]: E0120 02:02:11.614422 3041 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:02:11.702416 kubelet[3041]: E0120 02:02:11.697471 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:02:11.738229 containerd[1641]: time="2026-01-20T02:02:11.734588497Z" level=info msg="CreateContainer within sandbox \"48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 02:02:11.748754 containerd[1641]: time="2026-01-20T02:02:11.747018387Z" level=info msg="RemoveContainer for \"24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0\" returns successfully" Jan 20 02:02:12.213449 containerd[1641]: time="2026-01-20T02:02:12.201494482Z" level=info msg="Container 17d25f84fa8b8b1d365144dfe262830c2c8b86d16ec77154e2c88854cb958886: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:02:12.548209 containerd[1641]: time="2026-01-20T02:02:12.548157090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:02:12.672847 containerd[1641]: 
time="2026-01-20T02:02:12.672782508Z" level=info msg="CreateContainer within sandbox \"48b2b77fb82837cde656a886385ab9634d0ce32f3a64ab89eb9fad568e2afc3f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"17d25f84fa8b8b1d365144dfe262830c2c8b86d16ec77154e2c88854cb958886\"" Jan 20 02:02:12.674169 containerd[1641]: time="2026-01-20T02:02:12.674086201Z" level=info msg="StartContainer for \"17d25f84fa8b8b1d365144dfe262830c2c8b86d16ec77154e2c88854cb958886\"" Jan 20 02:02:12.676306 containerd[1641]: time="2026-01-20T02:02:12.676263155Z" level=info msg="connecting to shim 17d25f84fa8b8b1d365144dfe262830c2c8b86d16ec77154e2c88854cb958886" address="unix:///run/containerd/s/8657815239ac6d140d4580d8c2fab613500819d04b2f01f542d434354516f387" protocol=ttrpc version=3 Jan 20 02:02:12.879570 containerd[1641]: time="2026-01-20T02:02:12.874481804Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:02:12.893124 containerd[1641]: time="2026-01-20T02:02:12.887806789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:02:12.893124 containerd[1641]: time="2026-01-20T02:02:12.889622036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:02:12.893438 kubelet[3041]: E0120 02:02:12.890476 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:02:12.893438 kubelet[3041]: E0120 02:02:12.890618 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:02:12.893438 kubelet[3041]: E0120 02:02:12.890708 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:02:12.893438 kubelet[3041]: E0120 02:02:12.890749 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:02:13.055396 sshd[7919]: Connection closed by 10.0.0.1 port 53710 Jan 20 02:02:13.062537 sshd-session[7899]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:13.085000 audit[7899]: USER_END pid=7899 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:13.098000 audit[7899]: CRED_DISP pid=7899 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:13.172555 
systemd[1]: sshd@26-10.0.0.48:22-10.0.0.1:53710.service: Deactivated successfully. Jan 20 02:02:13.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.48:22-10.0.0.1:53710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:13.213687 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 02:02:13.271696 systemd-logind[1624]: Session 27 logged out. Waiting for processes to exit. Jan 20 02:02:13.384043 systemd-logind[1624]: Removed session 27. Jan 20 02:02:13.460193 systemd[1]: Started cri-containerd-17d25f84fa8b8b1d365144dfe262830c2c8b86d16ec77154e2c88854cb958886.scope - libcontainer container 17d25f84fa8b8b1d365144dfe262830c2c8b86d16ec77154e2c88854cb958886. Jan 20 02:02:13.608000 audit: BPF prog-id=268 op=LOAD Jan 20 02:02:13.622000 audit: BPF prog-id=269 op=LOAD Jan 20 02:02:13.622000 audit[7930]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000268238 a2=98 a3=0 items=0 ppid=2705 pid=7930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:13.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643235663834666138623862316433363531343464666532363238 Jan 20 02:02:13.625000 audit: BPF prog-id=269 op=UNLOAD Jan 20 02:02:13.625000 audit[7930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2705 pid=7930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:13.625000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643235663834666138623862316433363531343464666532363238 Jan 20 02:02:13.628000 audit: BPF prog-id=270 op=LOAD Jan 20 02:02:13.628000 audit[7930]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000268488 a2=98 a3=0 items=0 ppid=2705 pid=7930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:13.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643235663834666138623862316433363531343464666532363238 Jan 20 02:02:13.631000 audit: BPF prog-id=271 op=LOAD Jan 20 02:02:13.631000 audit[7930]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000268218 a2=98 a3=0 items=0 ppid=2705 pid=7930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:13.631000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643235663834666138623862316433363531343464666532363238 Jan 20 02:02:13.632000 audit: BPF prog-id=271 op=UNLOAD Jan 20 02:02:13.632000 audit[7930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2705 pid=7930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 02:02:13.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643235663834666138623862316433363531343464666532363238 Jan 20 02:02:13.632000 audit: BPF prog-id=270 op=UNLOAD Jan 20 02:02:13.632000 audit[7930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2705 pid=7930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:13.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643235663834666138623862316433363531343464666532363238 Jan 20 02:02:13.632000 audit: BPF prog-id=272 op=LOAD Jan 20 02:02:13.632000 audit[7930]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002686e8 a2=98 a3=0 items=0 ppid=2705 pid=7930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:13.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643235663834666138623862316433363531343464666532363238 Jan 20 02:02:14.097264 containerd[1641]: time="2026-01-20T02:02:14.095628188Z" level=info msg="StartContainer for \"17d25f84fa8b8b1d365144dfe262830c2c8b86d16ec77154e2c88854cb958886\" returns successfully" Jan 20 02:02:14.865795 kubelet[3041]: E0120 02:02:14.863312 3041 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:15.560999 kubelet[3041]: E0120 02:02:15.557015 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:15.893579 kubelet[3041]: E0120 02:02:15.885425 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:16.917902 kubelet[3041]: E0120 02:02:16.912961 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:18.081963 systemd[1]: Started sshd@27-10.0.0.48:22-10.0.0.1:44238.service - OpenSSH per-connection server daemon (10.0.0.1:44238). Jan 20 02:02:18.090693 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 02:02:18.090772 kernel: audit: type=1130 audit(1768874538.080:951): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.48:22-10.0.0.1:44238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:18.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.48:22-10.0.0.1:44238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:18.131328 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... 
Jan 20 02:02:18.660185 kubelet[3041]: E0120 02:02:18.635731 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:18.917771 systemd-tmpfiles[7972]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 02:02:18.917811 systemd-tmpfiles[7972]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 02:02:18.924973 systemd-tmpfiles[7972]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 02:02:19.024673 systemd-tmpfiles[7972]: ACLs are not supported, ignoring. Jan 20 02:02:19.035684 systemd-tmpfiles[7972]: ACLs are not supported, ignoring. Jan 20 02:02:19.100000 audit[7971]: USER_ACCT pid=7971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:19.143437 kernel: audit: type=1101 audit(1768874539.100:952): pid=7971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:19.157495 sshd[7971]: Accepted publickey for core from 10.0.0.1 port 44238 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:02:19.160000 audit[7971]: CRED_ACQ pid=7971 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:19.204847 kernel: audit: type=1103 audit(1768874539.160:953): pid=7971 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:19.227287 sshd-session[7971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:19.286538 kernel: audit: type=1006 audit(1768874539.225:954): pid=7971 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 20 02:02:19.286721 kernel: audit: type=1300 audit(1768874539.225:954): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6dd3caa0 a2=3 a3=0 items=0 ppid=1 pid=7971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:19.225000 audit[7971]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff6dd3caa0 a2=3 a3=0 items=0 ppid=1 pid=7971 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:19.316845 systemd-logind[1624]: New session 28 of user core. Jan 20 02:02:19.331788 kernel: audit: type=1327 audit(1768874539.225:954): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:19.225000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:19.353972 systemd-tmpfiles[7972]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 02:02:19.354902 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 20 02:02:19.362219 systemd-tmpfiles[7972]: Skipping /boot Jan 20 02:02:19.397000 audit[7971]: USER_START pid=7971 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:19.458479 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jan 20 02:02:19.460567 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Jan 20 02:02:19.616166 kernel: audit: type=1105 audit(1768874539.397:955): pid=7971 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:19.623645 kernel: audit: type=1103 audit(1768874539.443:956): pid=7977 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:19.443000 audit[7977]: CRED_ACQ pid=7977 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:19.731975 kernel: audit: type=1130 audit(1768874539.459:957): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:19.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:19.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:19.878075 kernel: audit: type=1131 audit(1768874539.459:958): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:20.645476 kubelet[3041]: I0120 02:02:20.636448 3041 scope.go:117] "RemoveContainer" containerID="6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6" Jan 20 02:02:20.645476 kubelet[3041]: E0120 02:02:20.642699 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:20.776434 containerd[1641]: time="2026-01-20T02:02:20.773810047Z" level=info msg="CreateContainer within sandbox \"0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Jan 20 02:02:21.177447 containerd[1641]: time="2026-01-20T02:02:21.167492064Z" level=info msg="Container 016515f4e4d411ac607520008855839ecdb701a34e4062f6f31e31c3e2d4303a: CDI devices from CRI Config.CDIDevices: []" Jan 20 02:02:21.352672 containerd[1641]: time="2026-01-20T02:02:21.349871060Z" level=info msg="CreateContainer within sandbox \"0ca5b42d2d739a8dc0fbfca8cc0c752654e222f15185893171f58575f121c1ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id 
\"016515f4e4d411ac607520008855839ecdb701a34e4062f6f31e31c3e2d4303a\"" Jan 20 02:02:21.382671 containerd[1641]: time="2026-01-20T02:02:21.382408294Z" level=info msg="StartContainer for \"016515f4e4d411ac607520008855839ecdb701a34e4062f6f31e31c3e2d4303a\"" Jan 20 02:02:21.420222 containerd[1641]: time="2026-01-20T02:02:21.409903936Z" level=info msg="connecting to shim 016515f4e4d411ac607520008855839ecdb701a34e4062f6f31e31c3e2d4303a" address="unix:///run/containerd/s/0a3f5563c8ff7307acacc838c5a88e501fbbb018b2bb68d1db9fb5ba52ad8acb" protocol=ttrpc version=3 Jan 20 02:02:21.574136 kubelet[3041]: E0120 02:02:21.565937 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:02:22.164817 systemd[1]: Started cri-containerd-016515f4e4d411ac607520008855839ecdb701a34e4062f6f31e31c3e2d4303a.scope - libcontainer container 016515f4e4d411ac607520008855839ecdb701a34e4062f6f31e31c3e2d4303a. 
Jan 20 02:02:22.249325 sshd[7977]: Connection closed by 10.0.0.1 port 44238 Jan 20 02:02:22.258976 sshd-session[7971]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:22.270000 audit[7971]: USER_END pid=7971 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:22.270000 audit[7971]: CRED_DISP pid=7971 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:22.276987 systemd[1]: sshd@27-10.0.0.48:22-10.0.0.1:44238.service: Deactivated successfully. Jan 20 02:02:22.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.48:22-10.0.0.1:44238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:22.315439 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 02:02:22.334436 systemd-logind[1624]: Session 28 logged out. Waiting for processes to exit. Jan 20 02:02:22.352844 systemd-logind[1624]: Removed session 28. 
Jan 20 02:02:22.385000 audit: BPF prog-id=273 op=LOAD Jan 20 02:02:22.387000 audit: BPF prog-id=274 op=LOAD Jan 20 02:02:22.387000 audit[7987]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2725 pid=7987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:22.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363531356634653464343131616336303735323030303838353538 Jan 20 02:02:22.387000 audit: BPF prog-id=274 op=UNLOAD Jan 20 02:02:22.387000 audit[7987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=7987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:22.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363531356634653464343131616336303735323030303838353538 Jan 20 02:02:22.387000 audit: BPF prog-id=275 op=LOAD Jan 20 02:02:22.387000 audit[7987]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2725 pid=7987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:22.387000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363531356634653464343131616336303735323030303838353538 Jan 20 02:02:22.387000 audit: BPF prog-id=276 op=LOAD Jan 20 02:02:22.387000 audit[7987]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2725 pid=7987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:22.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363531356634653464343131616336303735323030303838353538 Jan 20 02:02:22.387000 audit: BPF prog-id=276 op=UNLOAD Jan 20 02:02:22.387000 audit[7987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=7987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:22.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363531356634653464343131616336303735323030303838353538 Jan 20 02:02:22.387000 audit: BPF prog-id=275 op=UNLOAD Jan 20 02:02:22.387000 audit[7987]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2725 pid=7987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
02:02:22.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363531356634653464343131616336303735323030303838353538 Jan 20 02:02:22.387000 audit: BPF prog-id=277 op=LOAD Jan 20 02:02:22.387000 audit[7987]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2725 pid=7987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:22.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363531356634653464343131616336303735323030303838353538 Jan 20 02:02:22.537426 kubelet[3041]: E0120 02:02:22.537281 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:02:23.105618 containerd[1641]: time="2026-01-20T02:02:23.104720106Z" level=info msg="StartContainer for \"016515f4e4d411ac607520008855839ecdb701a34e4062f6f31e31c3e2d4303a\" returns successfully" Jan 20 02:02:23.433943 kubelet[3041]: E0120 02:02:23.432893 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 20 02:02:23.565924 kubelet[3041]: E0120 02:02:23.565871 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:02:23.569645 kubelet[3041]: E0120 02:02:23.569060 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:02:23.577156 kubelet[3041]: E0120 02:02:23.576877 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:02:24.575722 kubelet[3041]: E0120 02:02:24.575618 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:02:27.319919 systemd[1]: Started sshd@28-10.0.0.48:22-10.0.0.1:60504.service - OpenSSH per-connection server daemon (10.0.0.1:60504). Jan 20 02:02:27.385609 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 20 02:02:27.389288 kernel: audit: type=1130 audit(1768874547.319:970): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.48:22-10.0.0.1:60504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:27.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.48:22-10.0.0.1:60504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:27.926410 kernel: audit: type=1101 audit(1768874547.891:971): pid=8028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:27.891000 audit[8028]: USER_ACCT pid=8028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:27.910800 sshd-session[8028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:27.929591 sshd[8028]: Accepted publickey for core from 10.0.0.1 port 60504 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:02:27.903000 audit[8028]: CRED_ACQ pid=8028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:28.026468 kernel: audit: type=1103 audit(1768874547.903:972): pid=8028 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:28.026599 kernel: audit: type=1006 audit(1768874547.903:973): pid=8028 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 20 02:02:28.023113 systemd-logind[1624]: New session 29 of user core. 
Jan 20 02:02:27.903000 audit[8028]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6217e990 a2=3 a3=0 items=0 ppid=1 pid=8028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:28.118176 kernel: audit: type=1300 audit(1768874547.903:973): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6217e990 a2=3 a3=0 items=0 ppid=1 pid=8028 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:27.903000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:28.147601 kernel: audit: type=1327 audit(1768874547.903:973): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:28.151660 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 20 02:02:28.188000 audit[8028]: USER_START pid=8028 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:28.210000 audit[8033]: CRED_ACQ pid=8033 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:28.271126 kernel: audit: type=1105 audit(1768874548.188:974): pid=8028 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:28.271292 kernel: audit: type=1103 audit(1768874548.210:975): pid=8033 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:28.592269 kubelet[3041]: E0120 02:02:28.586291 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:02:28.780174 kubelet[3041]: E0120 02:02:28.778476 3041 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:29.217601 sshd[8033]: Connection closed by 10.0.0.1 port 60504 Jan 20 02:02:29.225478 sshd-session[8028]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:29.229000 audit[8028]: USER_END pid=8028 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:29.276689 systemd[1]: sshd@28-10.0.0.48:22-10.0.0.1:60504.service: Deactivated successfully. Jan 20 02:02:29.300503 systemd[1]: session-29.scope: Deactivated successfully. Jan 20 02:02:29.308829 kernel: audit: type=1106 audit(1768874549.229:976): pid=8028 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:29.317127 systemd-logind[1624]: Session 29 logged out. Waiting for processes to exit. Jan 20 02:02:29.325493 systemd-logind[1624]: Removed session 29. Jan 20 02:02:29.235000 audit[8028]: CRED_DISP pid=8028 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:29.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.48:22-10.0.0.1:60504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:29.371602 kernel: audit: type=1104 audit(1768874549.235:977): pid=8028 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:29.538388 kubelet[3041]: E0120 02:02:29.535534 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:29.697457 kubelet[3041]: E0120 02:02:29.697389 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:29.875152 kubelet[3041]: E0120 02:02:29.863930 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:33.549013 kubelet[3041]: E0120 02:02:33.542832 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:02:34.383480 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:02:34.383661 kernel: audit: type=1130 audit(1768874554.353:979): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.48:22-10.0.0.1:60510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:34.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.48:22-10.0.0.1:60510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:34.361628 systemd[1]: Started sshd@29-10.0.0.48:22-10.0.0.1:60510.service - OpenSSH per-connection server daemon (10.0.0.1:60510). Jan 20 02:02:34.581195 kubelet[3041]: E0120 02:02:34.578893 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:02:34.833000 audit[8074]: USER_ACCT pid=8074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:34.868449 sshd[8074]: Accepted publickey for core from 10.0.0.1 port 60510 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:02:34.906000 audit[8074]: CRED_ACQ pid=8074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:34.915453 sshd-session[8074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:34.973331 kernel: audit: type=1101 audit(1768874554.833:980): pid=8074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:34.973513 kernel: audit: type=1103 audit(1768874554.906:981): pid=8074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:35.023441 kernel: audit: type=1006 audit(1768874554.906:982): pid=8074 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 20 02:02:35.024258 kernel: audit: type=1300 audit(1768874554.906:982): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdca208f60 a2=3 a3=0 items=0 ppid=1 pid=8074 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:34.906000 audit[8074]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdca208f60 a2=3 a3=0 items=0 ppid=1 pid=8074 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:35.024685 systemd-logind[1624]: New session 30 of user core. 
Jan 20 02:02:35.087446 kernel: audit: type=1327 audit(1768874554.906:982): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:34.906000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:35.113032 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 20 02:02:35.125000 audit[8074]: USER_START pid=8074 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:35.276447 kernel: audit: type=1105 audit(1768874555.125:983): pid=8074 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:35.276573 kernel: audit: type=1103 audit(1768874555.144:984): pid=8077 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:35.144000 audit[8077]: CRED_ACQ pid=8077 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:35.571397 kubelet[3041]: E0120 02:02:35.570452 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:02:35.635923 kubelet[3041]: E0120 02:02:35.633406 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:02:36.094328 sshd[8077]: Connection closed by 10.0.0.1 port 60510 Jan 20 02:02:36.101646 sshd-session[8074]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:36.124000 audit[8074]: USER_END pid=8074 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:36.124000 audit[8074]: CRED_DISP pid=8074 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:36.176562 systemd[1]: 
sshd@29-10.0.0.48:22-10.0.0.1:60510.service: Deactivated successfully. Jan 20 02:02:36.218062 kernel: audit: type=1106 audit(1768874556.124:985): pid=8074 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:36.218156 kernel: audit: type=1104 audit(1768874556.124:986): pid=8074 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:36.188548 systemd-logind[1624]: Session 30 logged out. Waiting for processes to exit. Jan 20 02:02:36.213754 systemd[1]: session-30.scope: Deactivated successfully. Jan 20 02:02:36.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.48:22-10.0.0.1:60510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:36.280202 systemd-logind[1624]: Removed session 30. 
Jan 20 02:02:36.549152 kubelet[3041]: E0120 02:02:36.547566 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:02:39.532141 kubelet[3041]: E0120 02:02:39.531871 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:02:39.737729 kubelet[3041]: E0120 02:02:39.737689 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:41.172656 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:02:41.172805 kernel: audit: type=1130 audit(1768874561.161:988): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.48:22-10.0.0.1:56440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:41.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.48:22-10.0.0.1:56440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:41.159598 systemd[1]: Started sshd@30-10.0.0.48:22-10.0.0.1:56440.service - OpenSSH per-connection server daemon (10.0.0.1:56440). Jan 20 02:02:41.824559 sshd[8092]: Accepted publickey for core from 10.0.0.1 port 56440 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:02:41.821000 audit[8092]: USER_ACCT pid=8092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:41.881980 sshd-session[8092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:41.910110 kernel: audit: type=1101 audit(1768874561.821:989): pid=8092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:41.969629 kernel: audit: type=1103 audit(1768874561.873:990): pid=8092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:41.873000 audit[8092]: CRED_ACQ pid=8092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:42.021497 kernel: audit: type=1006 
audit(1768874561.873:991): pid=8092 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 20 02:02:42.021666 kernel: audit: type=1300 audit(1768874561.873:991): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4430c690 a2=3 a3=0 items=0 ppid=1 pid=8092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:41.873000 audit[8092]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4430c690 a2=3 a3=0 items=0 ppid=1 pid=8092 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:42.040910 systemd-logind[1624]: New session 31 of user core. Jan 20 02:02:42.069887 kernel: audit: type=1327 audit(1768874561.873:991): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:41.873000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:42.098289 systemd[1]: Started session-31.scope - Session 31 of User core. 
Jan 20 02:02:42.135000 audit[8092]: USER_START pid=8092 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:42.208253 kernel: audit: type=1105 audit(1768874562.135:992): pid=8092 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:42.185000 audit[8095]: CRED_ACQ pid=8095 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:42.283546 kernel: audit: type=1103 audit(1768874562.185:993): pid=8095 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:42.543583 kubelet[3041]: E0120 02:02:42.542564 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:02:43.531870 sshd[8095]: Connection closed by 10.0.0.1 port 56440 Jan 20 02:02:43.531598 
sshd-session[8092]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:43.559000 audit[8092]: USER_END pid=8092 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:43.607602 systemd[1]: sshd@30-10.0.0.48:22-10.0.0.1:56440.service: Deactivated successfully. Jan 20 02:02:43.560000 audit[8092]: CRED_DISP pid=8092 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:43.642945 systemd[1]: session-31.scope: Deactivated successfully. Jan 20 02:02:43.667886 systemd-logind[1624]: Session 31 logged out. Waiting for processes to exit. Jan 20 02:02:43.687463 systemd-logind[1624]: Removed session 31. Jan 20 02:02:43.718052 kernel: audit: type=1106 audit(1768874563.559:994): pid=8092 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:43.718277 kernel: audit: type=1104 audit(1768874563.560:995): pid=8092 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:43.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.48:22-10.0.0.1:56440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:45.833590 containerd[1641]: time="2026-01-20T02:02:45.833495240Z" level=info msg="container event discarded" container=24505abec5c3e3f3a0ca3277b3fdb59427b61b1fcceaf765003a9aacced32ec0 type=CONTAINER_STOPPED_EVENT Jan 20 02:02:46.527973 kubelet[3041]: E0120 02:02:46.527524 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:46.615891 kubelet[3041]: E0120 02:02:46.587892 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:02:47.542394 kubelet[3041]: E0120 02:02:47.536552 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:02:48.581042 kubelet[3041]: E0120 02:02:48.573881 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:02:48.608393 kubelet[3041]: E0120 02:02:48.601659 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:02:48.667314 systemd[1]: Started sshd@31-10.0.0.48:22-10.0.0.1:50724.service - OpenSSH per-connection server daemon (10.0.0.1:50724). 
Jan 20 02:02:48.773679 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:02:48.773814 kernel: audit: type=1130 audit(1768874568.668:997): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.48:22-10.0.0.1:50724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:48.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.48:22-10.0.0.1:50724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:49.046769 containerd[1641]: time="2026-01-20T02:02:49.046536024Z" level=info msg="container event discarded" container=6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6 type=CONTAINER_CREATED_EVENT Jan 20 02:02:49.295000 audit[8108]: USER_ACCT pid=8108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:49.372063 kernel: audit: type=1101 audit(1768874569.295:998): pid=8108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:49.372656 sshd[8108]: Accepted publickey for core from 10.0.0.1 port 50724 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:02:49.400000 audit[8108]: CRED_ACQ pid=8108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:49.409693 sshd-session[8108]: pam_unix(sshd:session): 
session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:49.474610 kernel: audit: type=1103 audit(1768874569.400:999): pid=8108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:49.501958 systemd-logind[1624]: New session 32 of user core. Jan 20 02:02:49.512759 kernel: audit: type=1006 audit(1768874569.400:1000): pid=8108 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 20 02:02:49.400000 audit[8108]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe49a62990 a2=3 a3=0 items=0 ppid=1 pid=8108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:49.596421 kernel: audit: type=1300 audit(1768874569.400:1000): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe49a62990 a2=3 a3=0 items=0 ppid=1 pid=8108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:49.601371 kubelet[3041]: E0120 02:02:49.598146 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:02:49.400000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:49.608108 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 20 02:02:49.638305 kernel: audit: type=1327 audit(1768874569.400:1000): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:49.690000 audit[8108]: USER_START pid=8108 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:49.779074 kernel: audit: type=1105 audit(1768874569.690:1001): pid=8108 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:49.722000 audit[8111]: CRED_ACQ pid=8111 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:49.835308 kernel: audit: type=1103 audit(1768874569.722:1002): pid=8111 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:50.572734 kubelet[3041]: E0120 02:02:50.572664 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" 
podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:02:51.092246 sshd[8111]: Connection closed by 10.0.0.1 port 50724 Jan 20 02:02:51.089851 sshd-session[8108]: pam_unix(sshd:session): session closed for user core Jan 20 02:02:51.118000 audit[8108]: USER_END pid=8108 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:51.142151 systemd-logind[1624]: Session 32 logged out. Waiting for processes to exit. Jan 20 02:02:51.147067 systemd[1]: sshd@31-10.0.0.48:22-10.0.0.1:50724.service: Deactivated successfully. Jan 20 02:02:51.187902 systemd[1]: session-32.scope: Deactivated successfully. Jan 20 02:02:51.230329 kernel: audit: type=1106 audit(1768874571.118:1003): pid=8108 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:51.235658 systemd-logind[1624]: Removed session 32. Jan 20 02:02:51.118000 audit[8108]: CRED_DISP pid=8108 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:51.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.48:22-10.0.0.1:50724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:02:51.314313 kernel: audit: type=1104 audit(1768874571.118:1004): pid=8108 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:53.348157 containerd[1641]: time="2026-01-20T02:02:53.343056339Z" level=info msg="container event discarded" container=6d6691992917edb81cf1c7b66ee8427d9cdb86983b3bb4b536632a55c82106e6 type=CONTAINER_STARTED_EVENT Jan 20 02:02:53.638870 kubelet[3041]: E0120 02:02:53.631178 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:02:53.638870 kubelet[3041]: E0120 02:02:53.631850 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:02:56.146691 systemd[1]: Started sshd@32-10.0.0.48:22-10.0.0.1:37744.service - OpenSSH per-connection server daemon (10.0.0.1:37744). 
Jan 20 02:02:56.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.48:22-10.0.0.1:37744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:56.160760 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:02:56.160881 kernel: audit: type=1130 audit(1768874576.144:1006): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.48:22-10.0.0.1:37744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:56.589000 audit[8131]: USER_ACCT pid=8131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.597426 sshd[8131]: Accepted publickey for core from 10.0.0.1 port 37744 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:02:56.615084 sshd-session[8131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:02:56.601000 audit[8131]: CRED_ACQ pid=8131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.665669 systemd-logind[1624]: New session 33 of user core. 
Jan 20 02:02:56.688929 kernel: audit: type=1101 audit(1768874576.589:1007): pid=8131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.689195 kernel: audit: type=1103 audit(1768874576.601:1008): pid=8131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.695289 kernel: audit: type=1006 audit(1768874576.601:1009): pid=8131 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jan 20 02:02:56.601000 audit[8131]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdac358d00 a2=3 a3=0 items=0 ppid=1 pid=8131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:56.719051 kernel: audit: type=1300 audit(1768874576.601:1009): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdac358d00 a2=3 a3=0 items=0 ppid=1 pid=8131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:02:56.750760 kernel: audit: type=1327 audit(1768874576.601:1009): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:56.601000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:02:56.752703 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 20 02:02:56.761000 audit[8131]: USER_START pid=8131 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.867105 kernel: audit: type=1105 audit(1768874576.761:1010): pid=8131 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.878452 kernel: audit: type=1103 audit(1768874576.784:1011): pid=8134 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:56.784000 audit[8134]: CRED_ACQ pid=8134 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:57.578010 kubelet[3041]: E0120 02:02:57.544832 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:02:57.661035 sshd[8134]: Connection closed by 10.0.0.1 port 37744 Jan 20 02:02:57.653756 sshd-session[8131]: 
pam_unix(sshd:session): session closed for user core Jan 20 02:02:57.699000 audit[8131]: USER_END pid=8131 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:57.780531 kernel: audit: type=1106 audit(1768874577.699:1012): pid=8131 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:57.699000 audit[8131]: CRED_DISP pid=8131 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:57.782876 systemd[1]: sshd@32-10.0.0.48:22-10.0.0.1:37744.service: Deactivated successfully. Jan 20 02:02:57.794602 systemd[1]: session-33.scope: Deactivated successfully. Jan 20 02:02:57.844235 kernel: audit: type=1104 audit(1768874577.699:1013): pid=8131 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:02:57.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.48:22-10.0.0.1:37744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:02:57.829147 systemd-logind[1624]: Session 33 logged out. Waiting for processes to exit. Jan 20 02:02:57.835130 systemd-logind[1624]: Removed session 33. 
Jan 20 02:02:59.532452 kubelet[3041]: E0120 02:02:59.529139 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:03:00.530320 kubelet[3041]: E0120 02:03:00.530151 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:03:02.682631 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:02.682808 kernel: audit: type=1130 audit(1768874582.675:1015): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.48:22-10.0.0.1:37752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:02.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.0.0.48:22-10.0.0.1:37752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:02.676804 systemd[1]: Started sshd@33-10.0.0.48:22-10.0.0.1:37752.service - OpenSSH per-connection server daemon (10.0.0.1:37752). Jan 20 02:03:02.830000 audit[8150]: USER_ACCT pid=8150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:02.851216 sshd-session[8150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:02.871545 kernel: audit: type=1101 audit(1768874582.830:1016): pid=8150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:02.871594 sshd[8150]: Accepted publickey for core from 10.0.0.1 port 37752 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:03:02.843000 audit[8150]: CRED_ACQ pid=8150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:02.881668 systemd-logind[1624]: New session 34 of user core. 
Jan 20 02:03:02.918751 kernel: audit: type=1103 audit(1768874582.843:1017): pid=8150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:02.919046 kernel: audit: type=1006 audit(1768874582.843:1018): pid=8150 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jan 20 02:03:02.843000 audit[8150]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc0c847a0 a2=3 a3=0 items=0 ppid=1 pid=8150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:02.943385 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 20 02:03:02.984164 kernel: audit: type=1300 audit(1768874582.843:1018): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc0c847a0 a2=3 a3=0 items=0 ppid=1 pid=8150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:02.984540 kernel: audit: type=1327 audit(1768874582.843:1018): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:02.843000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:02.975000 audit[8150]: USER_START pid=8150 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:03.065610 kernel: audit: type=1105 audit(1768874582.975:1019): pid=8150 uid=0 auid=500 ses=34 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:03.065778 kernel: audit: type=1103 audit(1768874583.031:1020): pid=8153 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:03.031000 audit[8153]: CRED_ACQ pid=8153 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:03.577842 kubelet[3041]: E0120 02:03:03.575155 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:03:03.602524 kubelet[3041]: E0120 02:03:03.589600 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:03:03.948201 sshd[8153]: Connection closed by 10.0.0.1 port 37752 Jan 20 02:03:03.951598 sshd-session[8150]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:03.953000 audit[8150]: USER_END pid=8150 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.001151 kernel: audit: type=1106 audit(1768874583.953:1021): pid=8150 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.053764 kernel: audit: type=1104 audit(1768874583.953:1022): pid=8150 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:03.953000 audit[8150]: CRED_DISP pid=8150 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:04.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@33-10.0.0.48:22-10.0.0.1:37752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:04.066640 systemd[1]: sshd@33-10.0.0.48:22-10.0.0.1:37752.service: Deactivated successfully. Jan 20 02:03:04.090057 systemd[1]: session-34.scope: Deactivated successfully. Jan 20 02:03:04.104674 systemd-logind[1624]: Session 34 logged out. Waiting for processes to exit. Jan 20 02:03:04.113052 systemd-logind[1624]: Removed session 34. Jan 20 02:03:06.542995 kubelet[3041]: E0120 02:03:06.542419 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:03:08.536725 kubelet[3041]: E0120 02:03:08.536652 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:03:09.080992 systemd[1]: Started sshd@34-10.0.0.48:22-10.0.0.1:38810.service - OpenSSH per-connection server daemon (10.0.0.1:38810). 
Jan 20 02:03:09.141202 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:09.141528 kernel: audit: type=1130 audit(1768874589.074:1024): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.48:22-10.0.0.1:38810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:09.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.48:22-10.0.0.1:38810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:09.568041 kubelet[3041]: E0120 02:03:09.567992 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:03:09.620219 sshd[8192]: Accepted publickey for core from 10.0.0.1 port 38810 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:03:09.617000 audit[8192]: USER_ACCT pid=8192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.645884 sshd-session[8192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:09.636000 audit[8192]: CRED_ACQ pid=8192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.685777 kernel: audit: type=1101 audit(1768874589.617:1025): pid=8192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.685927 kernel: audit: type=1103 audit(1768874589.636:1026): pid=8192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.685986 kernel: audit: type=1006 audit(1768874589.640:1027): pid=8192 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jan 20 02:03:09.708987 kernel: audit: type=1300 audit(1768874589.640:1027): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd98c000d0 a2=3 a3=0 items=0 ppid=1 pid=8192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:09.640000 audit[8192]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd98c000d0 a2=3 a3=0 items=0 ppid=1 pid=8192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:09.708491 systemd-logind[1624]: New session 35 of user core. Jan 20 02:03:09.766453 kernel: audit: type=1327 audit(1768874589.640:1027): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:09.640000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:09.786638 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 20 02:03:09.813000 audit[8192]: USER_START pid=8192 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.813000 audit[8195]: CRED_ACQ pid=8195 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.920601 kernel: audit: type=1105 audit(1768874589.813:1028): pid=8192 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:09.920712 kernel: audit: type=1103 audit(1768874589.813:1029): pid=8195 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:10.465537 sshd[8195]: Connection closed by 10.0.0.1 port 38810 Jan 20 02:03:10.466305 sshd-session[8192]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:10.476000 audit[8192]: USER_END pid=8192 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:10.496523 systemd[1]: sshd@34-10.0.0.48:22-10.0.0.1:38810.service: Deactivated successfully. 
Jan 20 02:03:10.506842 systemd[1]: session-35.scope: Deactivated successfully. Jan 20 02:03:10.516461 systemd-logind[1624]: Session 35 logged out. Waiting for processes to exit. Jan 20 02:03:10.534944 kernel: audit: type=1106 audit(1768874590.476:1030): pid=8192 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:10.535070 kernel: audit: type=1104 audit(1768874590.476:1031): pid=8192 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:10.476000 audit[8192]: CRED_DISP pid=8192 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:10.535542 systemd-logind[1624]: Removed session 35. Jan 20 02:03:10.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-10.0.0.48:22-10.0.0.1:38810 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:12.533855 kubelet[3041]: E0120 02:03:12.527707 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:12.533855 kubelet[3041]: E0120 02:03:12.530239 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:03:13.586523 kubelet[3041]: E0120 02:03:13.582297 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:03:14.528789 kubelet[3041]: E0120 02:03:14.528514 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:03:15.525443 systemd[1]: Started sshd@35-10.0.0.48:22-10.0.0.1:37720.service - OpenSSH per-connection server daemon (10.0.0.1:37720). Jan 20 02:03:15.572725 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:15.572873 kernel: audit: type=1130 audit(1768874595.521:1033): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.48:22-10.0.0.1:37720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:15.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.48:22-10.0.0.1:37720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:15.572998 kubelet[3041]: E0120 02:03:15.552244 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:03:15.808000 audit[8208]: USER_ACCT pid=8208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:15.817712 sshd[8208]: Accepted publickey for core from 10.0.0.1 port 37720 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:03:15.832682 sshd-session[8208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:15.821000 audit[8208]: CRED_ACQ pid=8208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:15.872136 systemd-logind[1624]: New session 36 of user core. 
Jan 20 02:03:15.916812 kernel: audit: type=1101 audit(1768874595.808:1034): pid=8208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:15.916934 kernel: audit: type=1103 audit(1768874595.821:1035): pid=8208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:15.963820 kernel: audit: type=1006 audit(1768874595.821:1036): pid=8208 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jan 20 02:03:15.963967 kernel: audit: type=1300 audit(1768874595.821:1036): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc2c1f1c0 a2=3 a3=0 items=0 ppid=1 pid=8208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:15.821000 audit[8208]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc2c1f1c0 a2=3 a3=0 items=0 ppid=1 pid=8208 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:16.046630 kernel: audit: type=1327 audit(1768874595.821:1036): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:15.821000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:16.050296 systemd[1]: Started session-36.scope - Session 36 of User core. 
Jan 20 02:03:16.111000 audit[8208]: USER_START pid=8208 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.205580 kernel: audit: type=1105 audit(1768874596.111:1037): pid=8208 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.205740 kernel: audit: type=1103 audit(1768874596.131:1038): pid=8211 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.131000 audit[8211]: CRED_ACQ pid=8211 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.919653 sshd[8211]: Connection closed by 10.0.0.1 port 37720 Jan 20 02:03:16.927938 sshd-session[8208]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:16.933000 audit[8208]: USER_END pid=8208 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.954929 systemd[1]: sshd@35-10.0.0.48:22-10.0.0.1:37720.service: Deactivated successfully. 
Jan 20 02:03:16.987549 kernel: audit: type=1106 audit(1768874596.933:1039): pid=8208 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.933000 audit[8208]: CRED_DISP pid=8208 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:16.998067 systemd[1]: session-36.scope: Deactivated successfully. Jan 20 02:03:17.001286 systemd-logind[1624]: Session 36 logged out. Waiting for processes to exit. Jan 20 02:03:16.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-10.0.0.48:22-10.0.0.1:37720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:17.029922 kernel: audit: type=1104 audit(1768874596.933:1040): pid=8208 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:17.036093 systemd-logind[1624]: Removed session 36. 
Jan 20 02:03:20.535801 kubelet[3041]: E0120 02:03:20.531109 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:03:20.548038 kubelet[3041]: E0120 02:03:20.547948 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:03:21.593655 kubelet[3041]: E0120 02:03:21.589831 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:03:22.037026 systemd[1]: Started sshd@36-10.0.0.48:22-10.0.0.1:37722.service - OpenSSH per-connection server daemon (10.0.0.1:37722). 
Jan 20 02:03:22.145757 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:22.145861 kernel: audit: type=1130 audit(1768874602.027:1042): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.48:22-10.0.0.1:37722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:22.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.48:22-10.0.0.1:37722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:22.511000 audit[8227]: USER_ACCT pid=8227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:22.522590 sshd[8227]: Accepted publickey for core from 10.0.0.1 port 37722 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:03:22.526131 sshd-session[8227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:22.580652 kernel: audit: type=1101 audit(1768874602.511:1043): pid=8227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:22.519000 audit[8227]: CRED_ACQ pid=8227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:22.634269 systemd-logind[1624]: New session 37 of user core. 
Jan 20 02:03:22.675548 kernel: audit: type=1103 audit(1768874602.519:1044): pid=8227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:22.675715 kernel: audit: type=1006 audit(1768874602.519:1045): pid=8227 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jan 20 02:03:22.519000 audit[8227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef7d807c0 a2=3 a3=0 items=0 ppid=1 pid=8227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:22.711812 kernel: audit: type=1300 audit(1768874602.519:1045): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef7d807c0 a2=3 a3=0 items=0 ppid=1 pid=8227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:22.519000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:22.729032 systemd[1]: Started session-37.scope - Session 37 of User core. 
Jan 20 02:03:22.783861 kernel: audit: type=1327 audit(1768874602.519:1045): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:22.782000 audit[8227]: USER_START pid=8227 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:22.797000 audit[8230]: CRED_ACQ pid=8230 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:22.924200 kernel: audit: type=1105 audit(1768874602.782:1046): pid=8227 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:22.924438 kernel: audit: type=1103 audit(1768874602.797:1047): pid=8230 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:24.126650 containerd[1641]: time="2026-01-20T02:03:24.122753079Z" level=info msg="container event discarded" container=7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda type=CONTAINER_CREATED_EVENT Jan 20 02:03:24.175141 sshd[8230]: Connection closed by 10.0.0.1 port 37722 Jan 20 02:03:24.183426 sshd-session[8227]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:24.196000 audit[8227]: USER_END pid=8227 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:24.222212 systemd[1]: sshd@36-10.0.0.48:22-10.0.0.1:37722.service: Deactivated successfully. Jan 20 02:03:24.233264 systemd[1]: session-37.scope: Deactivated successfully. Jan 20 02:03:24.243831 kernel: audit: type=1106 audit(1768874604.196:1048): pid=8227 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:24.241531 systemd-logind[1624]: Session 37 logged out. Waiting for processes to exit. Jan 20 02:03:24.196000 audit[8227]: CRED_DISP pid=8227 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:24.272963 systemd-logind[1624]: Removed session 37. Jan 20 02:03:24.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-10.0.0.48:22-10.0.0.1:37722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:24.302769 kernel: audit: type=1104 audit(1768874604.196:1049): pid=8227 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:25.606623 kubelet[3041]: E0120 02:03:25.606311 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:03:26.530830 kubelet[3041]: E0120 02:03:26.527126 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:26.546386 kubelet[3041]: E0120 02:03:26.542647 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:03:26.565793 kubelet[3041]: E0120 02:03:26.565332 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:03:27.543409 kubelet[3041]: E0120 02:03:27.540797 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:03:29.098561 containerd[1641]: time="2026-01-20T02:03:29.098300070Z" level=info msg="container event discarded" container=7e2d6bcb51af0312fd8bb65b42bd133d5b300b7d9c522177cac877d3c85c7dda type=CONTAINER_STARTED_EVENT Jan 20 02:03:29.256610 systemd[1]: Started sshd@37-10.0.0.48:22-10.0.0.1:45786.service - OpenSSH per-connection server daemon 
(10.0.0.1:45786). Jan 20 02:03:29.331131 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:29.331318 kernel: audit: type=1130 audit(1768874609.252:1051): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.48:22-10.0.0.1:45786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:29.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.48:22-10.0.0.1:45786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:29.858521 kernel: audit: type=1101 audit(1768874609.790:1052): pid=8247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.790000 audit[8247]: USER_ACCT pid=8247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.810286 sshd-session[8247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:29.863420 sshd[8247]: Accepted publickey for core from 10.0.0.1 port 45786 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:03:29.808000 audit[8247]: CRED_ACQ pid=8247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.869617 systemd-logind[1624]: New session 38 of user core. 
Jan 20 02:03:29.928724 kernel: audit: type=1103 audit(1768874609.808:1053): pid=8247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:29.928918 kernel: audit: type=1006 audit(1768874609.808:1054): pid=8247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jan 20 02:03:29.808000 audit[8247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdba95a40 a2=3 a3=0 items=0 ppid=1 pid=8247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:30.013602 kernel: audit: type=1300 audit(1768874609.808:1054): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdba95a40 a2=3 a3=0 items=0 ppid=1 pid=8247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:30.020208 kernel: audit: type=1327 audit(1768874609.808:1054): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:29.808000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:30.030921 systemd[1]: Started session-38.scope - Session 38 of User core. 
Jan 20 02:03:30.072000 audit[8247]: USER_START pid=8247 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:30.090000 audit[8250]: CRED_ACQ pid=8250 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:30.146714 kernel: audit: type=1105 audit(1768874610.072:1055): pid=8247 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:30.146885 kernel: audit: type=1103 audit(1768874610.090:1056): pid=8250 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:31.079031 sshd[8250]: Connection closed by 10.0.0.1 port 45786 Jan 20 02:03:31.069898 sshd-session[8247]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:31.087000 audit[8247]: USER_END pid=8247 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:31.116805 systemd-logind[1624]: Session 38 logged out. Waiting for processes to exit. 
Jan 20 02:03:31.124159 systemd[1]: sshd@37-10.0.0.48:22-10.0.0.1:45786.service: Deactivated successfully. Jan 20 02:03:31.151235 systemd[1]: session-38.scope: Deactivated successfully. Jan 20 02:03:31.192539 kernel: audit: type=1106 audit(1768874611.087:1057): pid=8247 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:31.200861 kernel: audit: type=1104 audit(1768874611.090:1058): pid=8247 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:31.090000 audit[8247]: CRED_DISP pid=8247 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:31.186189 systemd-logind[1624]: Removed session 38. Jan 20 02:03:31.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-10.0.0.48:22-10.0.0.1:45786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:31.542324 kubelet[3041]: E0120 02:03:31.541800 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:03:32.550455 kubelet[3041]: E0120 02:03:32.546443 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:33.574974 kubelet[3041]: E0120 02:03:33.574058 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:03:35.548791 kubelet[3041]: E0120 02:03:35.545929 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:36.155197 systemd[1]: Started sshd@38-10.0.0.48:22-10.0.0.1:57716.service - OpenSSH per-connection server daemon (10.0.0.1:57716). 
Jan 20 02:03:36.195186 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:36.195253 kernel: audit: type=1130 audit(1768874616.154:1060): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.48:22-10.0.0.1:57716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:36.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.48:22-10.0.0.1:57716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:36.535000 audit[8289]: USER_ACCT pid=8289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:36.577850 sshd-session[8289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:36.582812 kubelet[3041]: E0120 02:03:36.549937 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:03:36.592315 sshd[8289]: Accepted publickey for core from 10.0.0.1 port 57716 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:03:36.612420 kernel: audit: type=1101 audit(1768874616.535:1061): pid=8289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:36.557000 audit[8289]: CRED_ACQ pid=8289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:36.697860 kernel: audit: type=1103 audit(1768874616.557:1062): pid=8289 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:36.734220 systemd-logind[1624]: New session 39 of user core. Jan 20 02:03:36.742035 kernel: audit: type=1006 audit(1768874616.557:1063): pid=8289 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1 Jan 20 02:03:36.557000 audit[8289]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb53a3e20 a2=3 a3=0 items=0 ppid=1 pid=8289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:36.825530 kernel: audit: type=1300 audit(1768874616.557:1063): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb53a3e20 a2=3 a3=0 items=0 ppid=1 pid=8289 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:36.557000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:36.868862 kernel: audit: type=1327 audit(1768874616.557:1063): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:36.864604 systemd[1]: Started session-39.scope - Session 39 of User core. 
Jan 20 02:03:36.916000 audit[8289]: USER_START pid=8289 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.016000 audit[8292]: CRED_ACQ pid=8292 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.094240 kernel: audit: type=1105 audit(1768874616.916:1064): pid=8289 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.094447 kernel: audit: type=1103 audit(1768874617.016:1065): pid=8292 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.611761 kubelet[3041]: E0120 02:03:37.605471 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:03:37.640381 kubelet[3041]: E0120 02:03:37.619252 3041 pod_workers.go:1324] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:03:37.909915 sshd[8292]: Connection closed by 10.0.0.1 port 57716 Jan 20 02:03:37.918026 sshd-session[8289]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:37.937000 audit[8289]: USER_END pid=8289 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:37.965961 systemd-logind[1624]: Session 39 logged out. Waiting for processes to exit. Jan 20 02:03:37.970214 systemd[1]: sshd@38-10.0.0.48:22-10.0.0.1:57716.service: Deactivated successfully. Jan 20 02:03:38.008446 systemd[1]: session-39.scope: Deactivated successfully. 
Jan 20 02:03:37.937000 audit[8289]: CRED_DISP pid=8289 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:38.022513 kernel: audit: type=1106 audit(1768874617.937:1066): pid=8289 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:38.026031 kernel: audit: type=1104 audit(1768874617.937:1067): pid=8289 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:38.040002 systemd-logind[1624]: Removed session 39. Jan 20 02:03:37.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-10.0.0.48:22-10.0.0.1:57716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:39.551833 kubelet[3041]: E0120 02:03:39.550871 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:03:39.565777 kubelet[3041]: E0120 02:03:39.565682 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:03:42.970787 systemd[1]: Started sshd@39-10.0.0.48:22-10.0.0.1:57720.service - OpenSSH per-connection server daemon (10.0.0.1:57720). Jan 20 02:03:42.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.48:22-10.0.0.1:57720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:42.981630 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:42.981763 kernel: audit: type=1130 audit(1768874622.970:1069): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.48:22-10.0.0.1:57720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:43.258442 sshd[8308]: Accepted publickey for core from 10.0.0.1 port 57720 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:03:43.255000 audit[8308]: USER_ACCT pid=8308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:43.267802 sshd-session[8308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:43.331810 kernel: audit: type=1101 audit(1768874623.255:1070): pid=8308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:43.265000 audit[8308]: CRED_ACQ pid=8308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:43.398962 systemd-logind[1624]: New session 40 of user core. 
Jan 20 02:03:43.426846 kernel: audit: type=1103 audit(1768874623.265:1071): pid=8308 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:43.427013 kernel: audit: type=1006 audit(1768874623.265:1072): pid=8308 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=40 res=1 Jan 20 02:03:43.265000 audit[8308]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0c9d1f50 a2=3 a3=0 items=0 ppid=1 pid=8308 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:43.535106 kernel: audit: type=1300 audit(1768874623.265:1072): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0c9d1f50 a2=3 a3=0 items=0 ppid=1 pid=8308 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:43.570882 kernel: audit: type=1327 audit(1768874623.265:1072): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:43.265000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:43.549222 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 20 02:03:43.631000 audit[8308]: USER_START pid=8308 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:43.648000 audit[8312]: CRED_ACQ pid=8312 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:43.774852 kernel: audit: type=1105 audit(1768874623.631:1073): pid=8308 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:43.775019 kernel: audit: type=1103 audit(1768874623.648:1074): pid=8312 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:44.371478 sshd[8312]: Connection closed by 10.0.0.1 port 57720 Jan 20 02:03:44.371729 sshd-session[8308]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:44.445915 kernel: audit: type=1106 audit(1768874624.394:1075): pid=8308 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:44.394000 audit[8308]: USER_END pid=8308 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:44.437274 systemd[1]: sshd@39-10.0.0.48:22-10.0.0.1:57720.service: Deactivated successfully. Jan 20 02:03:44.440475 systemd[1]: session-40.scope: Deactivated successfully. Jan 20 02:03:44.395000 audit[8308]: CRED_DISP pid=8308 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:44.480085 systemd-logind[1624]: Session 40 logged out. Waiting for processes to exit. Jan 20 02:03:44.492005 systemd-logind[1624]: Removed session 40. Jan 20 02:03:44.510918 kernel: audit: type=1104 audit(1768874624.395:1076): pid=8308 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:44.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-10.0.0.48:22-10.0.0.1:57720 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:45.547120 kubelet[3041]: E0120 02:03:45.531806 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:03:46.534868 kubelet[3041]: E0120 02:03:46.533649 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:48.531393 kubelet[3041]: E0120 02:03:48.531049 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:03:48.563436 kubelet[3041]: E0120 02:03:48.548199 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:03:49.513974 systemd[1]: Started sshd@40-10.0.0.48:22-10.0.0.1:56988.service - OpenSSH per-connection server daemon (10.0.0.1:56988). Jan 20 02:03:49.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.48:22-10.0.0.1:56988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:49.529938 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:49.532394 kernel: audit: type=1130 audit(1768874629.511:1078): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.48:22-10.0.0.1:56988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:49.611468 kubelet[3041]: E0120 02:03:49.606966 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:03:49.952000 audit[8339]: USER_ACCT pid=8339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:49.965187 sshd[8339]: Accepted publickey for core from 10.0.0.1 port 56988 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:03:50.009179 kernel: audit: type=1101 audit(1768874629.952:1079): pid=8339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:50.009312 kernel: audit: type=1103 audit(1768874629.992:1080): pid=8339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:49.992000 audit[8339]: CRED_ACQ pid=8339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 
02:03:50.022430 kernel: audit: type=1006 audit(1768874629.992:1081): pid=8339 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1 Jan 20 02:03:50.013430 sshd-session[8339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:49.992000 audit[8339]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde980cde0 a2=3 a3=0 items=0 ppid=1 pid=8339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:50.047539 kernel: audit: type=1300 audit(1768874629.992:1081): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde980cde0 a2=3 a3=0 items=0 ppid=1 pid=8339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:49.992000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:50.065869 kernel: audit: type=1327 audit(1768874629.992:1081): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:50.105156 systemd-logind[1624]: New session 41 of user core. Jan 20 02:03:50.148571 systemd[1]: Started session-41.scope - Session 41 of User core. 
Jan 20 02:03:50.199000 audit[8339]: USER_START pid=8339 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:50.246004 kernel: audit: type=1105 audit(1768874630.199:1082): pid=8339 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:50.248000 audit[8342]: CRED_ACQ pid=8342 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:50.283390 kernel: audit: type=1103 audit(1768874630.248:1083): pid=8342 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:51.275449 sshd[8342]: Connection closed by 10.0.0.1 port 56988 Jan 20 02:03:51.287126 sshd-session[8339]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:51.291000 audit[8339]: USER_END pid=8339 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:51.306913 systemd-logind[1624]: Session 41 logged out. Waiting for processes to exit. 
Jan 20 02:03:51.314937 systemd[1]: sshd@40-10.0.0.48:22-10.0.0.1:56988.service: Deactivated successfully. Jan 20 02:03:51.319545 systemd[1]: session-41.scope: Deactivated successfully. Jan 20 02:03:51.326719 systemd-logind[1624]: Removed session 41. Jan 20 02:03:51.291000 audit[8339]: CRED_DISP pid=8339 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:51.405410 kernel: audit: type=1106 audit(1768874631.291:1084): pid=8339 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:51.405570 kernel: audit: type=1104 audit(1768874631.291:1085): pid=8339 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:51.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-10.0.0.48:22-10.0.0.1:56988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:03:51.994038 containerd[1641]: time="2026-01-20T02:03:51.993916365Z" level=info msg="container event discarded" container=79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204 type=CONTAINER_CREATED_EVENT Jan 20 02:03:51.994038 containerd[1641]: time="2026-01-20T02:03:51.993995753Z" level=info msg="container event discarded" container=79cc7e0470938db0203ff49e55a0b7fb0ddd387a0249fb99868daf7d2e762204 type=CONTAINER_STARTED_EVENT Jan 20 02:03:52.531106 kubelet[3041]: E0120 02:03:52.530590 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:03:53.544910 kubelet[3041]: E0120 02:03:53.543993 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:03:53.622072 containerd[1641]: time="2026-01-20T02:03:53.621990023Z" level=info msg="container event discarded" container=7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a type=CONTAINER_CREATED_EVENT Jan 20 02:03:53.622915 containerd[1641]: time="2026-01-20T02:03:53.622879225Z" level=info msg="container event discarded" 
container=7b7d0fafc35ed6c01d869e43021c676146eeacf0be708144e23c0f3d5393711a type=CONTAINER_STARTED_EVENT Jan 20 02:03:53.915768 containerd[1641]: time="2026-01-20T02:03:53.914912154Z" level=info msg="container event discarded" container=5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70 type=CONTAINER_CREATED_EVENT Jan 20 02:03:53.923585 containerd[1641]: time="2026-01-20T02:03:53.923439606Z" level=info msg="container event discarded" container=5c9e47fac85cf8634d2cfb7d0f03e02ff1c6c49480cac3815a47696635404d70 type=CONTAINER_STARTED_EVENT Jan 20 02:03:54.404875 containerd[1641]: time="2026-01-20T02:03:54.397963704Z" level=info msg="container event discarded" container=d6c1d468cca201bd5e370230f29385a8e0716369ce74d8b8d3e6fc8e3ecf922a type=CONTAINER_CREATED_EVENT Jan 20 02:03:54.546077 kubelet[3041]: E0120 02:03:54.544845 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:03:55.944213 containerd[1641]: time="2026-01-20T02:03:55.944056823Z" level=info msg="container event discarded" container=2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03 type=CONTAINER_CREATED_EVENT Jan 20 02:03:55.944213 containerd[1641]: 
time="2026-01-20T02:03:55.944146511Z" level=info msg="container event discarded" container=2e6c79343c269eb3b5bb3e7d7d983247d4ecc087807ffb0a13f055b1b4f26b03 type=CONTAINER_STARTED_EVENT Jan 20 02:03:56.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.48:22-10.0.0.1:55360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:56.366110 systemd[1]: Started sshd@41-10.0.0.48:22-10.0.0.1:55360.service - OpenSSH per-connection server daemon (10.0.0.1:55360). Jan 20 02:03:56.443882 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:03:56.444059 kernel: audit: type=1130 audit(1768874636.370:1087): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.48:22-10.0.0.1:55360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:56.529427 kubelet[3041]: E0120 02:03:56.527708 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:03:56.800000 audit[8373]: USER_ACCT pid=8373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:56.807806 sshd[8373]: Accepted publickey for core from 10.0.0.1 port 55360 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:03:56.818549 sshd-session[8373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:03:56.888640 kernel: audit: type=1101 audit(1768874636.800:1088): pid=8373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:56.888792 kernel: audit: type=1103 audit(1768874636.812:1089): pid=8373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:56.812000 audit[8373]: CRED_ACQ pid=8373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:56.948454 kernel: audit: type=1006 audit(1768874636.812:1090): pid=8373 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1 Jan 20 02:03:56.948595 kernel: audit: type=1300 audit(1768874636.812:1090): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7880c5d0 a2=3 a3=0 items=0 ppid=1 pid=8373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:56.812000 audit[8373]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7880c5d0 a2=3 a3=0 items=0 ppid=1 pid=8373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:03:56.976264 systemd-logind[1624]: New session 42 of user core. Jan 20 02:03:56.812000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:57.036235 kernel: audit: type=1327 audit(1768874636.812:1090): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:03:57.040329 systemd[1]: Started session-42.scope - Session 42 of User core. 
Jan 20 02:03:57.087000 audit[8373]: USER_START pid=8373 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:57.171631 kernel: audit: type=1105 audit(1768874637.087:1091): pid=8373 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:57.113000 audit[8383]: CRED_ACQ pid=8383 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:57.219496 kernel: audit: type=1103 audit(1768874637.113:1092): pid=8383 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:58.129792 sshd[8383]: Connection closed by 10.0.0.1 port 55360 Jan 20 02:03:58.130922 sshd-session[8373]: pam_unix(sshd:session): session closed for user core Jan 20 02:03:58.140000 audit[8373]: USER_END pid=8373 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:58.188446 systemd[1]: sshd@41-10.0.0.48:22-10.0.0.1:55360.service: Deactivated successfully. 
Jan 20 02:03:58.189668 kernel: audit: type=1106 audit(1768874638.140:1093): pid=8373 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:58.149000 audit[8373]: CRED_DISP pid=8373 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:58.200556 systemd[1]: session-42.scope: Deactivated successfully. Jan 20 02:03:58.202093 systemd-logind[1624]: Session 42 logged out. Waiting for processes to exit. Jan 20 02:03:58.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-10.0.0.48:22-10.0.0.1:55360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:03:58.216465 kernel: audit: type=1104 audit(1768874638.149:1094): pid=8373 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:03:58.237447 systemd-logind[1624]: Removed session 42. 
Jan 20 02:03:59.039272 containerd[1641]: time="2026-01-20T02:03:59.039167791Z" level=info msg="container event discarded" container=d6c1d468cca201bd5e370230f29385a8e0716369ce74d8b8d3e6fc8e3ecf922a type=CONTAINER_STARTED_EVENT Jan 20 02:03:59.186110 containerd[1641]: time="2026-01-20T02:03:59.185984812Z" level=info msg="container event discarded" container=93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2 type=CONTAINER_CREATED_EVENT Jan 20 02:03:59.186110 containerd[1641]: time="2026-01-20T02:03:59.186039263Z" level=info msg="container event discarded" container=93cbfef978dfeae5688212fbadcea1a7c3571896bca4fd878308f1679524c2a2 type=CONTAINER_STARTED_EVENT Jan 20 02:04:00.549066 kubelet[3041]: E0120 02:04:00.536707 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:04:00.549066 kubelet[3041]: E0120 02:04:00.536882 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:04:01.535826 kubelet[3041]: E0120 02:04:01.534279 3041 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:03.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.48:22-10.0.0.1:55372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:03.235195 systemd[1]: Started sshd@42-10.0.0.48:22-10.0.0.1:55372.service - OpenSSH per-connection server daemon (10.0.0.1:55372). Jan 20 02:04:03.247108 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:03.247161 kernel: audit: type=1130 audit(1768874643.234:1096): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.48:22-10.0.0.1:55372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:03.566568 kubelet[3041]: E0120 02:04:03.561616 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:04:03.596718 sshd[8396]: Accepted publickey for core from 10.0.0.1 port 55372 ssh2: RSA 
SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:03.593000 audit[8396]: USER_ACCT pid=8396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.615865 sshd-session[8396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:03.606000 audit[8396]: CRED_ACQ pid=8396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.687070 systemd-logind[1624]: New session 43 of user core. Jan 20 02:04:03.710867 kernel: audit: type=1101 audit(1768874643.593:1097): pid=8396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.713225 kernel: audit: type=1103 audit(1768874643.606:1098): pid=8396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.713271 kernel: audit: type=1006 audit(1768874643.612:1099): pid=8396 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Jan 20 02:04:03.734149 kernel: audit: type=1300 audit(1768874643.612:1099): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5d23e290 a2=3 a3=0 items=0 ppid=1 pid=8396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:03.612000 audit[8396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5d23e290 a2=3 a3=0 items=0 ppid=1 pid=8396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:03.802516 kernel: audit: type=1327 audit(1768874643.612:1099): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:03.612000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:03.859425 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 20 02:04:03.901000 audit[8396]: USER_START pid=8396 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:04.000432 kernel: audit: type=1105 audit(1768874643.901:1100): pid=8396 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:04.000584 kernel: audit: type=1103 audit(1768874643.920:1101): pid=8409 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:03.920000 audit[8409]: CRED_ACQ pid=8409 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:04.427133 containerd[1641]: time="2026-01-20T02:04:04.426757996Z" level=info msg="container event discarded" container=e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08 type=CONTAINER_CREATED_EVENT Jan 20 02:04:04.427133 containerd[1641]: time="2026-01-20T02:04:04.427016430Z" level=info msg="container event discarded" container=e74bad0c0de981556ef8bd2d8af83cddcc6ba91ab13c753df55bad55bdb00b08 type=CONTAINER_STARTED_EVENT Jan 20 02:04:04.547034 kubelet[3041]: E0120 02:04:04.544082 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:04:04.649044 sshd[8409]: Connection closed by 10.0.0.1 port 55372 Jan 20 02:04:04.637658 sshd-session[8396]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:04.658000 audit[8396]: USER_END pid=8396 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:04.691740 systemd[1]: sshd@42-10.0.0.48:22-10.0.0.1:55372.service: Deactivated successfully. Jan 20 02:04:04.699174 systemd[1]: session-43.scope: Deactivated successfully. Jan 20 02:04:04.721944 systemd-logind[1624]: Session 43 logged out. Waiting for processes to exit. 
Jan 20 02:04:04.726157 kernel: audit: type=1106 audit(1768874644.658:1102): pid=8396 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:04.668000 audit[8396]: CRED_DISP pid=8396 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:04.734672 systemd-logind[1624]: Removed session 43. Jan 20 02:04:04.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-10.0.0.48:22-10.0.0.1:55372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:04.778456 kernel: audit: type=1104 audit(1768874644.668:1103): pid=8396 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:05.042509 containerd[1641]: time="2026-01-20T02:04:05.042407768Z" level=info msg="container event discarded" container=90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b type=CONTAINER_CREATED_EVENT Jan 20 02:04:05.042509 containerd[1641]: time="2026-01-20T02:04:05.042471897Z" level=info msg="container event discarded" container=90b279bac82ad33d7146643d92bb281d0b61787dac81a38361d352261c678b8b type=CONTAINER_STARTED_EVENT Jan 20 02:04:05.559214 kubelet[3041]: E0120 02:04:05.547553 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:04:05.559214 kubelet[3041]: E0120 02:04:05.556268 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:04:06.250431 containerd[1641]: time="2026-01-20T02:04:06.250193547Z" level=info msg="container event discarded" container=057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e type=CONTAINER_CREATED_EVENT Jan 20 02:04:06.250431 containerd[1641]: time="2026-01-20T02:04:06.250283414Z" level=info msg="container event discarded" container=057fd45f44ed4cb72ac0ac661cbc80d0f1c6351f98018307c83afa0e090db58e type=CONTAINER_STARTED_EVENT Jan 20 02:04:06.488468 containerd[1641]: time="2026-01-20T02:04:06.481896898Z" level=info msg="container event discarded" container=3cf302fbabda5f7d1880e1ffa2c502c1238285066db4fabd8e43e0c5ad6f9f86 type=CONTAINER_CREATED_EVENT Jan 20 02:04:06.694805 containerd[1641]: time="2026-01-20T02:04:06.694298791Z" level=info msg="container event discarded" container=93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0 type=CONTAINER_CREATED_EVENT Jan 20 02:04:06.694805 containerd[1641]: time="2026-01-20T02:04:06.694415870Z" level=info msg="container event discarded" 
container=93965c766651bb6da6158fb1b81d33505fd5f25b4553e46bb72c370913538cd0 type=CONTAINER_STARTED_EVENT Jan 20 02:04:08.930076 containerd[1641]: time="2026-01-20T02:04:08.925742710Z" level=info msg="container event discarded" container=3cf302fbabda5f7d1880e1ffa2c502c1238285066db4fabd8e43e0c5ad6f9f86 type=CONTAINER_STARTED_EVENT Jan 20 02:04:09.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.48:22-10.0.0.1:37422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:09.718582 systemd[1]: Started sshd@43-10.0.0.48:22-10.0.0.1:37422.service - OpenSSH per-connection server daemon (10.0.0.1:37422). Jan 20 02:04:09.775143 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:09.775303 kernel: audit: type=1130 audit(1768874649.713:1105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.48:22-10.0.0.1:37422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:10.157000 audit[8439]: USER_ACCT pid=8439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.176280 sshd-session[8439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:10.215680 sshd[8439]: Accepted publickey for core from 10.0.0.1 port 37422 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:10.220270 kernel: audit: type=1101 audit(1768874650.157:1106): pid=8439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.173000 audit[8439]: CRED_ACQ pid=8439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.296022 systemd-logind[1624]: New session 44 of user core. 
Jan 20 02:04:10.329459 kernel: audit: type=1103 audit(1768874650.173:1107): pid=8439 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.329623 kernel: audit: type=1006 audit(1768874650.173:1108): pid=8439 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Jan 20 02:04:10.329679 kernel: audit: type=1300 audit(1768874650.173:1108): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9144e0a0 a2=3 a3=0 items=0 ppid=1 pid=8439 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:10.173000 audit[8439]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9144e0a0 a2=3 a3=0 items=0 ppid=1 pid=8439 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:10.382412 kernel: audit: type=1327 audit(1768874650.173:1108): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:10.173000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:10.405194 systemd[1]: Started session-44.scope - Session 44 of User core. 
Jan 20 02:04:10.429000 audit[8439]: USER_START pid=8439 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.482622 kernel: audit: type=1105 audit(1768874650.429:1109): pid=8439 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.482825 kernel: audit: type=1103 audit(1768874650.444:1110): pid=8442 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.444000 audit[8442]: CRED_ACQ pid=8442 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:10.547243 kubelet[3041]: E0120 02:04:10.547146 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:10.603277 kubelet[3041]: E0120 02:04:10.574914 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:04:11.377049 sshd[8442]: Connection closed by 10.0.0.1 port 37422 Jan 20 02:04:11.382403 sshd-session[8439]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:11.410000 audit[8439]: USER_END pid=8439 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:11.437835 systemd-logind[1624]: Session 44 logged out. Waiting for processes to exit. Jan 20 02:04:11.442890 systemd[1]: sshd@43-10.0.0.48:22-10.0.0.1:37422.service: Deactivated successfully. Jan 20 02:04:11.480883 systemd[1]: session-44.scope: Deactivated successfully. 
Jan 20 02:04:11.497187 kernel: audit: type=1106 audit(1768874651.410:1111): pid=8439 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:11.497325 kernel: audit: type=1104 audit(1768874651.410:1112): pid=8439 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:11.410000 audit[8439]: CRED_DISP pid=8439 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:11.516426 systemd-logind[1624]: Removed session 44. Jan 20 02:04:11.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-10.0.0.48:22-10.0.0.1:37422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:12.544182 kubelet[3041]: E0120 02:04:12.542622 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:04:15.548794 kubelet[3041]: E0120 02:04:15.548489 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:04:16.439969 systemd[1]: Started sshd@44-10.0.0.48:22-10.0.0.1:54472.service - OpenSSH per-connection server daemon (10.0.0.1:54472). Jan 20 02:04:16.481569 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:16.481676 kernel: audit: type=1130 audit(1768874656.447:1114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.48:22-10.0.0.1:54472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:16.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.48:22-10.0.0.1:54472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:16.854000 audit[8455]: USER_ACCT pid=8455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:16.862439 sshd[8455]: Accepted publickey for core from 10.0.0.1 port 54472 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:16.912576 kernel: audit: type=1101 audit(1768874656.854:1115): pid=8455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:16.917000 audit[8455]: CRED_ACQ pid=8455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:16.920837 sshd-session[8455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:16.965804 kernel: audit: type=1103 audit(1768874656.917:1116): pid=8455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:16.972555 systemd-logind[1624]: New session 45 of user core. 
Jan 20 02:04:16.917000 audit[8455]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc8ad21c0 a2=3 a3=0 items=0 ppid=1 pid=8455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:17.040127 kernel: audit: type=1006 audit(1768874656.917:1117): pid=8455 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1 Jan 20 02:04:17.040398 kernel: audit: type=1300 audit(1768874656.917:1117): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc8ad21c0 a2=3 a3=0 items=0 ppid=1 pid=8455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:17.041752 kernel: audit: type=1327 audit(1768874656.917:1117): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:16.917000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:17.091191 systemd[1]: Started session-45.scope - Session 45 of User core. 
Jan 20 02:04:17.116000 audit[8455]: USER_START pid=8455 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:17.172169 kernel: audit: type=1105 audit(1768874657.116:1118): pid=8455 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:17.138000 audit[8458]: CRED_ACQ pid=8458 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:17.203679 kernel: audit: type=1103 audit(1768874657.138:1119): pid=8458 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:18.176182 sshd[8458]: Connection closed by 10.0.0.1 port 54472 Jan 20 02:04:18.180531 sshd-session[8455]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:18.187000 audit[8455]: USER_END pid=8455 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:18.264844 kernel: audit: type=1106 audit(1768874658.187:1120): pid=8455 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:18.214675 systemd[1]: sshd@44-10.0.0.48:22-10.0.0.1:54472.service: Deactivated successfully. Jan 20 02:04:18.234173 systemd[1]: session-45.scope: Deactivated successfully. Jan 20 02:04:18.255603 systemd-logind[1624]: Session 45 logged out. Waiting for processes to exit. Jan 20 02:04:18.189000 audit[8455]: CRED_DISP pid=8455 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:18.265865 systemd-logind[1624]: Removed session 45. Jan 20 02:04:18.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-10.0.0.48:22-10.0.0.1:54472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:18.318301 kernel: audit: type=1104 audit(1768874658.189:1121): pid=8455 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:18.570845 kubelet[3041]: E0120 02:04:18.546917 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:04:18.570845 kubelet[3041]: E0120 02:04:18.550671 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:04:19.606828 kubelet[3041]: E0120 02:04:19.598686 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:04:19.637559 kubelet[3041]: E0120 02:04:19.637255 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:04:21.576428 kubelet[3041]: E0120 02:04:21.576032 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:23.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.48:22-10.0.0.1:54486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:23.247782 systemd[1]: Started sshd@45-10.0.0.48:22-10.0.0.1:54486.service - OpenSSH per-connection server daemon (10.0.0.1:54486). Jan 20 02:04:23.273292 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:23.273417 kernel: audit: type=1130 audit(1768874663.246:1123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.48:22-10.0.0.1:54486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 02:04:23.556311 containerd[1641]: time="2026-01-20T02:04:23.552681199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 02:04:23.573984 sshd[8471]: Accepted publickey for core from 10.0.0.1 port 54486 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:23.571000 audit[8471]: USER_ACCT pid=8471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:23.591616 sshd-session[8471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:23.618454 kernel: audit: type=1101 audit(1768874663.571:1124): pid=8471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:23.618617 kernel: audit: type=1103 audit(1768874663.589:1125): pid=8471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:23.589000 audit[8471]: CRED_ACQ pid=8471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:23.627250 systemd-logind[1624]: New session 46 of user core. 
Jan 20 02:04:23.663225 kernel: audit: type=1006 audit(1768874663.589:1126): pid=8471 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=46 res=1 Jan 20 02:04:23.711234 kernel: audit: type=1300 audit(1768874663.589:1126): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef3107180 a2=3 a3=0 items=0 ppid=1 pid=8471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:23.589000 audit[8471]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef3107180 a2=3 a3=0 items=0 ppid=1 pid=8471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:23.764318 containerd[1641]: time="2026-01-20T02:04:23.758779557Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:23.589000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:23.777930 kernel: audit: type=1327 audit(1768874663.589:1126): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:23.787419 containerd[1641]: time="2026-01-20T02:04:23.787188904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 02:04:23.788283 containerd[1641]: time="2026-01-20T02:04:23.788063871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:23.788949 kubelet[3041]: E0120 02:04:23.788867 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:04:23.793457 kubelet[3041]: E0120 02:04:23.792555 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 02:04:23.793457 kubelet[3041]: E0120 02:04:23.792662 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:23.791430 systemd[1]: Started session-46.scope - Session 46 of User core. 
Jan 20 02:04:23.805014 containerd[1641]: time="2026-01-20T02:04:23.804446838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 02:04:23.818000 audit[8471]: USER_START pid=8471 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:23.905249 kernel: audit: type=1105 audit(1768874663.818:1127): pid=8471 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:23.904000 audit[8474]: CRED_ACQ pid=8474 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:23.953537 kernel: audit: type=1103 audit(1768874663.904:1128): pid=8474 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:23.960574 containerd[1641]: time="2026-01-20T02:04:23.958304752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:23.963222 containerd[1641]: time="2026-01-20T02:04:23.963161795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:23.963591 containerd[1641]: time="2026-01-20T02:04:23.963449282Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 02:04:23.971528 kubelet[3041]: E0120 02:04:23.971415 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:04:23.971895 kubelet[3041]: E0120 02:04:23.971705 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 02:04:23.972613 kubelet[3041]: E0120 02:04:23.972417 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85cc9877c8-b9ftn_calico-system(0ce5e731-d9ff-4094-add4-a475d64c6d24): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:23.972613 kubelet[3041]: E0120 02:04:23.972474 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:04:24.571589 sshd[8474]: Connection closed by 10.0.0.1 port 54486 Jan 20 02:04:24.567765 sshd-session[8471]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:24.577000 audit[8471]: USER_END pid=8471 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:24.670658 kernel: audit: type=1106 audit(1768874664.577:1129): pid=8471 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:24.670803 kernel: audit: type=1104 audit(1768874664.577:1130): pid=8471 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:24.577000 audit[8471]: CRED_DISP pid=8471 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:24.715481 systemd[1]: sshd@45-10.0.0.48:22-10.0.0.1:54486.service: Deactivated successfully. Jan 20 02:04:24.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-10.0.0.48:22-10.0.0.1:54486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:24.740698 systemd[1]: session-46.scope: Deactivated successfully. Jan 20 02:04:24.744032 systemd-logind[1624]: Session 46 logged out. Waiting for processes to exit. Jan 20 02:04:24.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.48:22-10.0.0.1:33012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:24.783327 systemd[1]: Started sshd@46-10.0.0.48:22-10.0.0.1:33012.service - OpenSSH per-connection server daemon (10.0.0.1:33012). Jan 20 02:04:24.792195 systemd-logind[1624]: Removed session 46. Jan 20 02:04:25.037232 sshd[8488]: Accepted publickey for core from 10.0.0.1 port 33012 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:25.031000 audit[8488]: USER_ACCT pid=8488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.066000 audit[8488]: CRED_ACQ pid=8488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.070000 audit[8488]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd15b58bf0 a2=3 a3=0 items=0 ppid=1 pid=8488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:25.070000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:25.080257 sshd-session[8488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:25.169638 systemd-logind[1624]: 
New session 47 of user core. Jan 20 02:04:25.233398 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 20 02:04:25.292000 audit[8488]: USER_START pid=8488 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.316000 audit[8491]: CRED_ACQ pid=8491 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:25.612708 kubelet[3041]: E0120 02:04:25.603837 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:04:27.146853 sshd[8491]: Connection closed by 10.0.0.1 port 33012 Jan 20 02:04:27.153639 sshd-session[8488]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:27.164000 audit[8488]: USER_END pid=8488 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:27.164000 audit[8488]: CRED_DISP pid=8488 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:27.190749 systemd[1]: sshd@46-10.0.0.48:22-10.0.0.1:33012.service: Deactivated successfully. Jan 20 02:04:27.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-10.0.0.48:22-10.0.0.1:33012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:27.214443 systemd[1]: session-47.scope: Deactivated successfully. Jan 20 02:04:27.224588 systemd-logind[1624]: Session 47 logged out. Waiting for processes to exit. Jan 20 02:04:27.244830 systemd-logind[1624]: Removed session 47. Jan 20 02:04:27.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.48:22-10.0.0.1:33014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:27.264729 systemd[1]: Started sshd@47-10.0.0.48:22-10.0.0.1:33014.service - OpenSSH per-connection server daemon (10.0.0.1:33014). 
Jan 20 02:04:27.775000 audit[8505]: USER_ACCT pid=8505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:27.778847 sshd[8505]: Accepted publickey for core from 10.0.0.1 port 33014 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:27.781000 audit[8505]: CRED_ACQ pid=8505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:27.781000 audit[8505]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc07306f40 a2=3 a3=0 items=0 ppid=1 pid=8505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:27.781000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:27.785712 sshd-session[8505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:27.840232 systemd-logind[1624]: New session 48 of user core. Jan 20 02:04:27.873755 systemd[1]: Started session-48.scope - Session 48 of User core. 
Jan 20 02:04:27.891000 audit[8505]: USER_START pid=8505 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:27.916000 audit[8508]: CRED_ACQ pid=8508 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:29.536788 kubelet[3041]: E0120 02:04:29.532328 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:04:29.548859 containerd[1641]: time="2026-01-20T02:04:29.542503045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 02:04:29.691716 containerd[1641]: time="2026-01-20T02:04:29.691648220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:29.699850 containerd[1641]: time="2026-01-20T02:04:29.699776100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 02:04:29.715552 containerd[1641]: time="2026-01-20T02:04:29.700834169Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:29.730574 kubelet[3041]: E0120 02:04:29.730473 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:04:29.730574 kubelet[3041]: E0120 02:04:29.730567 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 02:04:29.772632 kubelet[3041]: E0120 02:04:29.731318 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:29.772691 containerd[1641]: time="2026-01-20T02:04:29.748258321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 02:04:29.982479 containerd[1641]: time="2026-01-20T02:04:29.975734134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:29.991930 containerd[1641]: time="2026-01-20T02:04:29.989404952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 02:04:29.991930 containerd[1641]: time="2026-01-20T02:04:29.989566905Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:29.992244 kubelet[3041]: E0120 02:04:29.990463 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:04:29.992244 kubelet[3041]: E0120 02:04:29.990674 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 02:04:29.992244 kubelet[3041]: E0120 02:04:29.991289 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-9gv2m_calico-system(ac1c9092-8cef-4868-9089-0927692efc39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:29.992244 kubelet[3041]: E0120 02:04:29.991418 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:04:30.543479 kubelet[3041]: E0120 02:04:30.540970 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:04:30.543479 kubelet[3041]: E0120 02:04:30.541088 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:04:32.165064 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 20 02:04:32.165310 kernel: audit: type=1325 audit(1768874672.119:1147): table=filter:139 family=2 entries=26 op=nft_register_rule pid=8531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:04:32.119000 audit[8531]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=8531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:04:32.119000 audit[8531]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffff05f0fe0 a2=0 a3=7ffff05f0fcc items=0 ppid=3158 pid=8531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:32.242938 kernel: audit: type=1300 audit(1768874672.119:1147): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffff05f0fe0 a2=0 a3=7ffff05f0fcc items=0 ppid=3158 pid=8531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:32.243089 kernel: audit: type=1327 audit(1768874672.119:1147): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:04:32.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:04:32.251000 audit[8531]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=8531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:04:32.295327 sshd[8508]: Connection closed by 10.0.0.1 port 33014 Jan 20 02:04:32.319145 kernel: audit: type=1325 audit(1768874672.251:1148): table=nat:140 family=2 entries=20 op=nft_register_rule pid=8531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:04:32.319256 kernel: audit: type=1300 audit(1768874672.251:1148): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffff05f0fe0 a2=0 a3=0 items=0 ppid=3158 pid=8531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:32.251000 audit[8531]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffff05f0fe0 a2=0 a3=0 items=0 ppid=3158 pid=8531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:32.308105 sshd-session[8505]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:32.331586 kernel: audit: type=1327 audit(1768874672.251:1148): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:04:32.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:04:32.345413 kernel: audit: type=1106 audit(1768874672.323:1149): pid=8505 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:32.323000 audit[8505]: USER_END pid=8505 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:32.325000 audit[8505]: CRED_DISP pid=8505 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:32.482182 kernel: audit: type=1104 audit(1768874672.325:1150): pid=8505 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:32.502038 systemd[1]: sshd@47-10.0.0.48:22-10.0.0.1:33014.service: Deactivated successfully. 
Jan 20 02:04:32.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.48:22-10.0.0.1:33014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:32.531460 systemd[1]: session-48.scope: Deactivated successfully. Jan 20 02:04:32.531905 systemd[1]: session-48.scope: Consumed 1.090s CPU time, 44M memory peak. Jan 20 02:04:32.536606 systemd-logind[1624]: Session 48 logged out. Waiting for processes to exit. Jan 20 02:04:32.550780 kernel: audit: type=1131 audit(1768874672.506:1151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-10.0.0.48:22-10.0.0.1:33014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:32.543323 systemd-logind[1624]: Removed session 48. Jan 20 02:04:32.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.48:22-10.0.0.1:33024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:32.568696 systemd[1]: Started sshd@48-10.0.0.48:22-10.0.0.1:33024.service - OpenSSH per-connection server daemon (10.0.0.1:33024). Jan 20 02:04:32.621435 kernel: audit: type=1130 audit(1768874672.567:1152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.48:22-10.0.0.1:33024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:33.246498 sshd[8536]: Accepted publickey for core from 10.0.0.1 port 33024 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:33.243000 audit[8536]: USER_ACCT pid=8536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:33.249000 audit[8536]: CRED_ACQ pid=8536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:33.249000 audit[8536]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5cd0d850 a2=3 a3=0 items=0 ppid=1 pid=8536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:33.249000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:33.295085 sshd-session[8536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:33.347712 systemd-logind[1624]: New session 49 of user core. Jan 20 02:04:33.397824 systemd[1]: Started session-49.scope - Session 49 of User core. 
Jan 20 02:04:33.414000 audit[8536]: USER_START pid=8536 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:33.430000 audit[8542]: CRED_ACQ pid=8542 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:33.526000 audit[8544]: NETFILTER_CFG table=filter:141 family=2 entries=38 op=nft_register_rule pid=8544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:04:33.526000 audit[8544]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe34f06ef0 a2=0 a3=7ffe34f06edc items=0 ppid=3158 pid=8544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:33.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:04:33.673000 audit[8544]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=8544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:04:33.673000 audit[8544]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe34f06ef0 a2=0 a3=0 items=0 ppid=3158 pid=8544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:33.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:04:34.535608 kubelet[3041]: E0120 
02:04:34.532873 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:04:36.996780 sshd[8542]: Connection closed by 10.0.0.1 port 33024 Jan 20 02:04:37.005385 sshd-session[8536]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:37.093000 audit[8536]: USER_END pid=8536 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:37.093000 audit[8536]: CRED_DISP pid=8536 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:37.240914 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 20 02:04:37.241315 kernel: audit: type=1130 audit(1768874677.205:1162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.48:22-10.0.0.1:44756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:37.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.48:22-10.0.0.1:44756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:37.208949 systemd[1]: Started sshd@49-10.0.0.48:22-10.0.0.1:44756.service - OpenSSH per-connection server daemon (10.0.0.1:44756). Jan 20 02:04:37.210067 systemd[1]: sshd@48-10.0.0.48:22-10.0.0.1:33024.service: Deactivated successfully. Jan 20 02:04:37.227060 systemd[1]: session-49.scope: Deactivated successfully. Jan 20 02:04:37.279098 kernel: audit: type=1131 audit(1768874677.208:1163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.48:22-10.0.0.1:33024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:37.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-10.0.0.48:22-10.0.0.1:33024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:37.306844 systemd-logind[1624]: Session 49 logged out. Waiting for processes to exit. Jan 20 02:04:37.489655 systemd-logind[1624]: Removed session 49. 
Jan 20 02:04:37.878000 audit[8585]: USER_ACCT pid=8585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:37.886009 sshd[8585]: Accepted publickey for core from 10.0.0.1 port 44756 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:37.917155 kernel: audit: type=1101 audit(1768874677.878:1164): pid=8585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:37.925000 audit[8585]: CRED_ACQ pid=8585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:37.969895 kernel: audit: type=1103 audit(1768874677.925:1165): pid=8585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:37.970585 sshd-session[8585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:37.967000 audit[8585]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc8747bd0 a2=3 a3=0 items=0 ppid=1 pid=8585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:38.066061 kernel: audit: type=1006 audit(1768874677.967:1166): pid=8585 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 
auid=500 tty=(none) old-ses=4294967295 ses=50 res=1 Jan 20 02:04:38.080754 kernel: audit: type=1300 audit(1768874677.967:1166): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc8747bd0 a2=3 a3=0 items=0 ppid=1 pid=8585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:37.967000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:38.102427 kernel: audit: type=1327 audit(1768874677.967:1166): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:38.101115 systemd-logind[1624]: New session 50 of user core. Jan 20 02:04:38.142114 systemd[1]: Started session-50.scope - Session 50 of User core. Jan 20 02:04:38.180000 audit[8585]: USER_START pid=8585 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:38.262752 kernel: audit: type=1105 audit(1768874678.180:1167): pid=8585 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:38.202000 audit[8591]: CRED_ACQ pid=8591 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:38.312954 kernel: audit: type=1103 audit(1768874678.202:1168): pid=8591 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:38.561037 kubelet[3041]: E0120 02:04:38.556886 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:04:39.549151 sshd[8591]: Connection closed by 10.0.0.1 port 44756 Jan 20 02:04:39.583172 sshd-session[8585]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:39.652000 audit[8585]: USER_END pid=8585 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:39.737461 kernel: audit: type=1106 audit(1768874679.652:1169): pid=8585 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:39.702787 
systemd[1]: sshd@49-10.0.0.48:22-10.0.0.1:44756.service: Deactivated successfully. Jan 20 02:04:39.660000 audit[8585]: CRED_DISP pid=8585 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:39.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-10.0.0.48:22-10.0.0.1:44756 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:39.764804 systemd[1]: session-50.scope: Deactivated successfully. Jan 20 02:04:39.780937 kubelet[3041]: E0120 02:04:39.780799 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:04:39.808983 systemd-logind[1624]: Session 50 logged out. Waiting for processes to exit. Jan 20 02:04:39.829257 systemd-logind[1624]: Removed session 50. 
Jan 20 02:04:41.589828 containerd[1641]: time="2026-01-20T02:04:41.589245802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 02:04:41.735512 containerd[1641]: time="2026-01-20T02:04:41.735443390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:41.747641 containerd[1641]: time="2026-01-20T02:04:41.747573066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 02:04:41.747917 containerd[1641]: time="2026-01-20T02:04:41.747824577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:41.756198 kubelet[3041]: E0120 02:04:41.752114 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:04:41.756198 kubelet[3041]: E0120 02:04:41.752171 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 02:04:41.756198 kubelet[3041]: E0120 02:04:41.752253 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7cb6ddc686-fcv7l_calico-system(ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:41.756198 kubelet[3041]: E0120 02:04:41.753241 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:04:43.580776 kubelet[3041]: E0120 02:04:43.580656 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:04:44.532588 kubelet[3041]: E0120 02:04:44.532475 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:04:44.542079 containerd[1641]: time="2026-01-20T02:04:44.540579345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:04:44.613736 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 20 02:04:44.613867 kernel: audit: type=1130 audit(1768874684.604:1172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.48:22-10.0.0.1:46910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:44.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.48:22-10.0.0.1:46910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:44.605765 systemd[1]: Started sshd@50-10.0.0.48:22-10.0.0.1:46910.service - OpenSSH per-connection server daemon (10.0.0.1:46910). 
Jan 20 02:04:44.694680 containerd[1641]: time="2026-01-20T02:04:44.692597764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:44.736435 containerd[1641]: time="2026-01-20T02:04:44.735622152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:04:44.736435 containerd[1641]: time="2026-01-20T02:04:44.736013545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:44.737836 kubelet[3041]: E0120 02:04:44.737734 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:04:44.737836 kubelet[3041]: E0120 02:04:44.737828 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:04:44.744415 kubelet[3041]: E0120 02:04:44.737924 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-s8m7m_calico-apiserver(15d966be-bae7-42a3-83b7-ced10b64bcb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:44.744415 kubelet[3041]: E0120 02:04:44.737968 3041 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:04:44.877115 sshd[8605]: Accepted publickey for core from 10.0.0.1 port 46910 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:44.875000 audit[8605]: USER_ACCT pid=8605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:44.886445 sshd-session[8605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:44.884000 audit[8605]: CRED_ACQ pid=8605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:44.929550 kernel: audit: type=1101 audit(1768874684.875:1173): pid=8605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:44.930147 kernel: audit: type=1103 audit(1768874684.884:1174): pid=8605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:44.924726 systemd-logind[1624]: New session 51 of user core. 
Jan 20 02:04:44.969648 kernel: audit: type=1006 audit(1768874684.884:1175): pid=8605 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=51 res=1 Jan 20 02:04:44.884000 audit[8605]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0436dd40 a2=3 a3=0 items=0 ppid=1 pid=8605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:44.884000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:45.075146 systemd[1]: Started session-51.scope - Session 51 of User core. Jan 20 02:04:45.083893 kernel: audit: type=1300 audit(1768874684.884:1175): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0436dd40 a2=3 a3=0 items=0 ppid=1 pid=8605 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:45.084034 kernel: audit: type=1327 audit(1768874684.884:1175): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:45.113000 audit[8605]: USER_START pid=8605 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:45.178198 kernel: audit: type=1105 audit(1768874685.113:1176): pid=8605 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:45.178472 kernel: audit: type=1103 
audit(1768874685.151:1177): pid=8608 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:45.151000 audit[8608]: CRED_ACQ pid=8608 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:45.990543 sshd[8608]: Connection closed by 10.0.0.1 port 46910 Jan 20 02:04:45.995697 sshd-session[8605]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:46.005000 audit[8605]: USER_END pid=8605 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:46.022840 systemd[1]: sshd@50-10.0.0.48:22-10.0.0.1:46910.service: Deactivated successfully. Jan 20 02:04:46.026287 systemd[1]: session-51.scope: Deactivated successfully. Jan 20 02:04:46.037234 systemd-logind[1624]: Session 51 logged out. Waiting for processes to exit. Jan 20 02:04:46.005000 audit[8605]: CRED_DISP pid=8605 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:46.053301 systemd-logind[1624]: Removed session 51. 
Jan 20 02:04:46.104408 kernel: audit: type=1106 audit(1768874686.005:1178): pid=8605 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:46.104566 kernel: audit: type=1104 audit(1768874686.005:1179): pid=8605 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:46.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-10.0.0.48:22-10.0.0.1:46910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:46.529660 containerd[1641]: time="2026-01-20T02:04:46.529541434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:04:46.709079 containerd[1641]: time="2026-01-20T02:04:46.698504991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:46.709079 containerd[1641]: time="2026-01-20T02:04:46.700716238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:04:46.709079 containerd[1641]: time="2026-01-20T02:04:46.700791479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:46.709469 kubelet[3041]: E0120 02:04:46.700924 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:04:46.709469 kubelet[3041]: E0120 02:04:46.700972 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:04:46.709469 kubelet[3041]: E0120 02:04:46.701058 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6576c69f97-z9lbz_calico-apiserver(e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:46.709469 kubelet[3041]: E0120 02:04:46.701168 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:04:47.534720 kubelet[3041]: E0120 02:04:47.533089 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:50.542895 kubelet[3041]: E0120 02:04:50.541199 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:04:50.544808 kubelet[3041]: E0120 02:04:50.544175 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:04:51.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.48:22-10.0.0.1:46924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:51.103088 systemd[1]: Started sshd@51-10.0.0.48:22-10.0.0.1:46924.service - OpenSSH per-connection server daemon (10.0.0.1:46924). 
Jan 20 02:04:51.168561 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:51.168710 kernel: audit: type=1130 audit(1768874691.098:1181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.48:22-10.0.0.1:46924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:51.446000 audit[8621]: USER_ACCT pid=8621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:51.497844 sshd[8621]: Accepted publickey for core from 10.0.0.1 port 46924 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:51.525088 kernel: audit: type=1101 audit(1768874691.446:1182): pid=8621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:51.525220 kernel: audit: type=1103 audit(1768874691.518:1183): pid=8621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:51.518000 audit[8621]: CRED_ACQ pid=8621 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:51.521087 sshd-session[8621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:51.547809 systemd-logind[1624]: New session 52 of user core. 
Jan 20 02:04:51.630993 kernel: audit: type=1006 audit(1768874691.519:1184): pid=8621 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=52 res=1 Jan 20 02:04:51.631145 kernel: audit: type=1300 audit(1768874691.519:1184): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0aad02e0 a2=3 a3=0 items=0 ppid=1 pid=8621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:51.519000 audit[8621]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0aad02e0 a2=3 a3=0 items=0 ppid=1 pid=8621 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:51.636673 systemd[1]: Started session-52.scope - Session 52 of User core. Jan 20 02:04:51.519000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:51.699844 kernel: audit: type=1327 audit(1768874691.519:1184): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:51.721291 kernel: audit: type=1105 audit(1768874691.703:1185): pid=8621 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:51.703000 audit[8621]: USER_START pid=8621 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:51.778143 kernel: audit: type=1103 
audit(1768874691.734:1186): pid=8624 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:51.734000 audit[8624]: CRED_ACQ pid=8624 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:52.546683 kubelet[3041]: E0120 02:04:52.542154 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:04:52.633675 sshd[8624]: Connection closed by 10.0.0.1 port 46924 Jan 20 02:04:52.646651 sshd-session[8621]: pam_unix(sshd:session): session closed for user core Jan 20 02:04:52.649000 audit[8621]: USER_END pid=8621 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:52.680572 systemd-logind[1624]: Session 52 logged out. Waiting for processes to exit. Jan 20 02:04:52.690023 systemd[1]: sshd@51-10.0.0.48:22-10.0.0.1:46924.service: Deactivated successfully. Jan 20 02:04:52.697700 systemd[1]: session-52.scope: Deactivated successfully. 
Jan 20 02:04:52.755935 kernel: audit: type=1106 audit(1768874692.649:1187): pid=8621 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:52.763622 kernel: audit: type=1104 audit(1768874692.649:1188): pid=8621 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:52.649000 audit[8621]: CRED_DISP pid=8621 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:52.735176 systemd-logind[1624]: Removed session 52. Jan 20 02:04:52.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-10.0.0.48:22-10.0.0.1:46924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:04:54.556681 kubelet[3041]: E0120 02:04:54.542895 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:04:58.024697 kubelet[3041]: E0120 02:04:58.023117 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:04:58.053961 kubelet[3041]: E0120 02:04:58.037070 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:04:58.223189 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:04:58.223396 kernel: audit: type=1130 audit(1768874698.177:1190): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.48:22-10.0.0.1:39362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:58.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.48:22-10.0.0.1:39362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:04:58.177923 systemd[1]: Started sshd@52-10.0.0.48:22-10.0.0.1:39362.service - OpenSSH per-connection server daemon (10.0.0.1:39362). Jan 20 02:04:58.536519 kubelet[3041]: E0120 02:04:58.533753 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:04:58.683000 audit[8640]: USER_ACCT pid=8640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:58.718520 sshd[8640]: Accepted publickey for core from 10.0.0.1 port 39362 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:04:58.736109 sshd-session[8640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:04:58.771813 kernel: audit: type=1101 audit(1768874698.683:1191): pid=8640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:58.706000 audit[8640]: CRED_ACQ pid=8640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:58.831050 kernel: audit: type=1103 audit(1768874698.706:1192): pid=8640 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:58.910617 kernel: audit: type=1006 audit(1768874698.706:1193): pid=8640 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=53 res=1 Jan 20 02:04:58.910761 kernel: audit: type=1300 audit(1768874698.706:1193): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe14479c70 a2=3 a3=0 items=0 ppid=1 pid=8640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:58.706000 audit[8640]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe14479c70 a2=3 a3=0 items=0 ppid=1 pid=8640 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:04:58.916462 systemd-logind[1624]: New session 53 of user core. Jan 20 02:04:58.706000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:58.971936 kernel: audit: type=1327 audit(1768874698.706:1193): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:04:58.983782 systemd[1]: Started session-53.scope - Session 53 of User core. 
Jan 20 02:04:59.070283 kernel: audit: type=1105 audit(1768874699.013:1194): pid=8640 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:59.013000 audit[8640]: USER_START pid=8640 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:59.075000 audit[8643]: CRED_ACQ pid=8643 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:59.109433 kernel: audit: type=1103 audit(1768874699.075:1195): pid=8643 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:04:59.580653 containerd[1641]: time="2026-01-20T02:04:59.579539354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 02:04:59.724901 containerd[1641]: time="2026-01-20T02:04:59.724630223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:04:59.765006 containerd[1641]: time="2026-01-20T02:04:59.750475187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 02:04:59.765313 containerd[1641]: 
time="2026-01-20T02:04:59.764797187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 02:04:59.773173 kubelet[3041]: E0120 02:04:59.771073 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:04:59.773173 kubelet[3041]: E0120 02:04:59.771140 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 02:04:59.773173 kubelet[3041]: E0120 02:04:59.771234 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-pxsrr_calico-system(738fa74e-ddb6-4c59-8db5-d8c8658e06b6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 02:04:59.773173 kubelet[3041]: E0120 02:04:59.771278 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:04:59.968308 sshd[8643]: Connection closed by 10.0.0.1 port 39362 Jan 20 02:04:59.974721 sshd-session[8640]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:00.000000 audit[8640]: 
USER_END pid=8640 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:00.076712 kernel: audit: type=1106 audit(1768874700.000:1196): pid=8640 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:00.077791 systemd[1]: sshd@52-10.0.0.48:22-10.0.0.1:39362.service: Deactivated successfully. Jan 20 02:05:00.015000 audit[8640]: CRED_DISP pid=8640 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:00.101573 systemd[1]: session-53.scope: Deactivated successfully. Jan 20 02:05:00.115904 kernel: audit: type=1104 audit(1768874700.015:1197): pid=8640 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:00.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-10.0.0.48:22-10.0.0.1:39362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:00.122911 systemd-logind[1624]: Session 53 logged out. Waiting for processes to exit. Jan 20 02:05:00.140125 systemd-logind[1624]: Removed session 53. 
Jan 20 02:05:01.549114 kubelet[3041]: E0120 02:05:01.527497 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:03.600794 kubelet[3041]: E0120 02:05:03.600125 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:05:04.534677 kubelet[3041]: E0120 02:05:04.532520 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:04.579832 containerd[1641]: time="2026-01-20T02:05:04.566944519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 02:05:04.901161 containerd[1641]: time="2026-01-20T02:05:04.899861456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 02:05:04.925027 containerd[1641]: time="2026-01-20T02:05:04.922573170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 02:05:04.925027 containerd[1641]: time="2026-01-20T02:05:04.922709716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 02:05:04.925259 kubelet[3041]: E0120 02:05:04.922910 3041 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:05:04.925259 kubelet[3041]: E0120 02:05:04.922959 3041 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 02:05:04.925259 kubelet[3041]: E0120 02:05:04.923046 3041 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-776d4dc5d4-8w9l5_calico-apiserver(57304ae8-4142-4837-ab19-941e654eb081): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 02:05:04.925259 kubelet[3041]: E0120 02:05:04.923090 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:05:05.167000 audit[1]: SERVICE_START 
pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.48:22-10.0.0.1:58370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:05.167984 systemd[1]: Started sshd@53-10.0.0.48:22-10.0.0.1:58370.service - OpenSSH per-connection server daemon (10.0.0.1:58370). Jan 20 02:05:05.198448 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:05:05.198670 kernel: audit: type=1130 audit(1768874705.167:1199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.48:22-10.0.0.1:58370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:05.800000 audit[8683]: USER_ACCT pid=8683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:05.812501 sshd[8683]: Accepted publickey for core from 10.0.0.1 port 58370 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:05:05.855433 kernel: audit: type=1101 audit(1768874705.800:1200): pid=8683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:05.916414 kernel: audit: type=1103 audit(1768874705.863:1201): pid=8683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:05.916548 kernel: audit: type=1006 audit(1768874705.863:1202): pid=8683 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=54 res=1 Jan 20 02:05:05.863000 audit[8683]: CRED_ACQ pid=8683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:05.915844 systemd-logind[1624]: New session 54 of user core. Jan 20 02:05:05.869869 sshd-session[8683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:05.863000 audit[8683]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1b3fa1b0 a2=3 a3=0 items=0 ppid=1 pid=8683 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:05.986513 kernel: audit: type=1300 audit(1768874705.863:1202): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1b3fa1b0 a2=3 a3=0 items=0 ppid=1 pid=8683 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:05.998957 kernel: audit: type=1327 audit(1768874705.863:1202): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:05.863000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:06.032453 systemd[1]: Started session-54.scope - Session 54 of User core. 
Jan 20 02:05:06.140205 kernel: audit: type=1105 audit(1768874706.052:1203): pid=8683 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:06.052000 audit[8683]: USER_START pid=8683 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:06.085000 audit[8686]: CRED_ACQ pid=8686 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:06.228942 kernel: audit: type=1103 audit(1768874706.085:1204): pid=8686 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:07.157272 sshd[8686]: Connection closed by 10.0.0.1 port 58370 Jan 20 02:05:07.149185 sshd-session[8683]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:07.151000 audit[8683]: USER_END pid=8683 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:07.191691 systemd[1]: sshd@53-10.0.0.48:22-10.0.0.1:58370.service: Deactivated successfully. 
Jan 20 02:05:07.205992 systemd[1]: session-54.scope: Deactivated successfully. Jan 20 02:05:07.217299 systemd-logind[1624]: Session 54 logged out. Waiting for processes to exit. Jan 20 02:05:07.230646 kernel: audit: type=1106 audit(1768874707.151:1205): pid=8683 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:07.151000 audit[8683]: CRED_DISP pid=8683 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:07.241991 systemd-logind[1624]: Removed session 54. Jan 20 02:05:07.273598 kernel: audit: type=1104 audit(1768874707.151:1206): pid=8683 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:07.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-10.0.0.48:22-10.0.0.1:58370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:05:07.539680 kubelet[3041]: E0120 02:05:07.536819 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:07.558124 kubelet[3041]: E0120 02:05:07.554816 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:05:07.577783 kubelet[3041]: E0120 02:05:07.576729 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:05:15.129626 systemd[1]: Started sshd@54-10.0.0.48:22-10.0.0.1:58376.service - OpenSSH per-connection server daemon (10.0.0.1:58376). 
Jan 20 02:05:15.344105 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:05:15.345221 kernel: audit: type=1130 audit(1768874715.142:1208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.48:22-10.0.0.1:58376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:15.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.48:22-10.0.0.1:58376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:16.683658 kubelet[3041]: E0120 02:05:16.623874 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:05:17.712270 kubelet[3041]: E0120 02:05:17.433451 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:05:17.907802 kubelet[3041]: E0120 02:05:17.906116 3041 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.206s" Jan 20 02:05:17.933446 kubelet[3041]: 
E0120 02:05:17.924667 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:17.933446 kubelet[3041]: E0120 02:05:17.932893 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:05:17.939033 kubelet[3041]: E0120 02:05:17.938906 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:05:17.947906 kubelet[3041]: E0120 02:05:17.947156 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:05:18.282000 audit[8700]: USER_ACCT pid=8700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:18.319015 sshd[8700]: Accepted publickey for core from 10.0.0.1 port 58376 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:05:18.322580 kernel: audit: type=1101 audit(1768874718.282:1209): pid=8700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:18.325775 sshd-session[8700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:18.319000 audit[8700]: CRED_ACQ pid=8700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:18.492657 kernel: audit: type=1103 audit(1768874718.319:1210): pid=8700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:18.492796 kernel: audit: type=1006 audit(1768874718.319:1211): pid=8700 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=55 res=1 Jan 20 02:05:18.492838 kernel: audit: type=1300 audit(1768874718.319:1211): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb7422470 a2=3 a3=0 items=0 ppid=1 pid=8700 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:18.319000 audit[8700]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb7422470 a2=3 a3=0 items=0 ppid=1 pid=8700 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:18.514139 systemd-logind[1624]: New session 55 of user core. Jan 20 02:05:18.569635 kernel: audit: type=1327 audit(1768874718.319:1211): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:18.319000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:18.569952 kubelet[3041]: E0120 02:05:18.569098 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:05:18.615236 systemd[1]: Started session-55.scope - Session 55 of User core. 
Jan 20 02:05:18.643000 audit[8700]: USER_START pid=8700 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:18.728765 kernel: audit: type=1105 audit(1768874718.643:1212): pid=8700 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:18.728900 kernel: audit: type=1103 audit(1768874718.666:1213): pid=8703 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:18.666000 audit[8703]: CRED_ACQ pid=8703 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:19.766674 sshd[8703]: Connection closed by 10.0.0.1 port 58376 Jan 20 02:05:19.795877 sshd-session[8700]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:19.831000 audit[8700]: USER_END pid=8700 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:19.944689 kernel: audit: type=1106 audit(1768874719.831:1214): pid=8700 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:19.950024 systemd[1]: sshd@54-10.0.0.48:22-10.0.0.1:58376.service: Deactivated successfully. Jan 20 02:05:19.831000 audit[8700]: CRED_DISP pid=8700 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:19.984935 systemd[1]: session-55.scope: Deactivated successfully. Jan 20 02:05:20.011748 systemd-logind[1624]: Session 55 logged out. Waiting for processes to exit. Jan 20 02:05:20.045479 kernel: audit: type=1104 audit(1768874719.831:1215): pid=8700 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:20.015199 systemd-logind[1624]: Removed session 55. Jan 20 02:05:19.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10.0.0.48:22-10.0.0.1:58376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:05:22.583205 kubelet[3041]: E0120 02:05:22.583088 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:05:24.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.48:22-10.0.0.1:43628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:24.820881 systemd[1]: Started sshd@55-10.0.0.48:22-10.0.0.1:43628.service - OpenSSH per-connection server daemon (10.0.0.1:43628). Jan 20 02:05:24.852917 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:05:24.853089 kernel: audit: type=1130 audit(1768874724.819:1217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.48:22-10.0.0.1:43628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:05:25.145000 audit[8730]: USER_ACCT pid=8730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:25.167894 sshd[8730]: Accepted publickey for core from 10.0.0.1 port 43628 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:05:25.173228 sshd-session[8730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:25.167000 audit[8730]: CRED_ACQ pid=8730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:25.232673 systemd-logind[1624]: New session 56 of user core. Jan 20 02:05:25.275324 kernel: audit: type=1101 audit(1768874725.145:1218): pid=8730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:25.275599 kernel: audit: type=1103 audit(1768874725.167:1219): pid=8730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:25.167000 audit[8730]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2f93fb90 a2=3 a3=0 items=0 ppid=1 pid=8730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:25.319814 kernel: audit: type=1006 
audit(1768874725.167:1220): pid=8730 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=56 res=1 Jan 20 02:05:25.319877 kernel: audit: type=1300 audit(1768874725.167:1220): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2f93fb90 a2=3 a3=0 items=0 ppid=1 pid=8730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:25.407326 kernel: audit: type=1327 audit(1768874725.167:1220): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:25.167000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:25.393780 systemd[1]: Started session-56.scope - Session 56 of User core. Jan 20 02:05:25.414000 audit[8730]: USER_START pid=8730 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:25.493571 kernel: audit: type=1105 audit(1768874725.414:1221): pid=8730 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:25.512000 audit[8733]: CRED_ACQ pid=8733 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:25.568595 kernel: audit: type=1103 audit(1768874725.512:1222): pid=8733 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:26.399037 sshd[8733]: Connection closed by 10.0.0.1 port 43628 Jan 20 02:05:26.401267 sshd-session[8730]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:26.409000 audit[8730]: USER_END pid=8730 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:26.429219 systemd[1]: sshd@55-10.0.0.48:22-10.0.0.1:43628.service: Deactivated successfully. Jan 20 02:05:26.441553 systemd[1]: session-56.scope: Deactivated successfully. Jan 20 02:05:26.472192 kernel: audit: type=1106 audit(1768874726.409:1223): pid=8730 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:26.473090 kernel: audit: type=1104 audit(1768874726.411:1224): pid=8730 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:26.411000 audit[8730]: CRED_DISP pid=8730 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:26.475628 systemd-logind[1624]: Session 56 logged out. Waiting for processes to exit. Jan 20 02:05:26.493267 systemd-logind[1624]: Removed session 56. 
Jan 20 02:05:26.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-10.0.0.48:22-10.0.0.1:43628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:27.571968 kubelet[3041]: E0120 02:05:27.567049 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:05:29.563303 kubelet[3041]: E0120 02:05:29.550866 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:05:30.497469 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:05:30.500638 kernel: audit: type=1325 audit(1768874730.480:1226): table=filter:143 family=2 entries=26 op=nft_register_rule pid=8755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:05:30.480000 audit[8755]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=8755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:05:30.480000 audit[8755]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc79fd9810 a2=0 a3=7ffc79fd97fc items=0 
ppid=3158 pid=8755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:30.536993 kubelet[3041]: E0120 02:05:30.533653 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:05:30.571529 kernel: audit: type=1300 audit(1768874730.480:1226): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc79fd9810 a2=0 a3=7ffc79fd97fc items=0 ppid=3158 pid=8755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:30.571690 kernel: audit: type=1327 audit(1768874730.480:1226): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:05:30.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:05:30.614000 audit[8755]: NETFILTER_CFG table=nat:144 family=2 entries=104 op=nft_register_chain pid=8755 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:05:30.640863 kernel: audit: type=1325 audit(1768874730.614:1227): table=nat:144 family=2 entries=104 op=nft_register_chain pid=8755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 02:05:30.614000 audit[8755]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc79fd9810 a2=0 a3=7ffc79fd97fc items=0 ppid=3158 pid=8755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:30.714072 kernel: audit: type=1300 audit(1768874730.614:1227): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc79fd9810 a2=0 a3=7ffc79fd97fc items=0 ppid=3158 pid=8755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:30.614000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:05:30.754781 kernel: audit: type=1327 audit(1768874730.614:1227): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 02:05:31.537149 systemd[1]: Started sshd@56-10.0.0.48:22-10.0.0.1:43634.service - OpenSSH per-connection server daemon (10.0.0.1:43634). Jan 20 02:05:31.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.48:22-10.0.0.1:43634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:31.598483 kernel: audit: type=1130 audit(1768874731.535:1228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.48:22-10.0.0.1:43634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:05:31.824450 sshd[8757]: Accepted publickey for core from 10.0.0.1 port 43634 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:05:31.822000 audit[8757]: USER_ACCT pid=8757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:31.851429 kernel: audit: type=1101 audit(1768874731.822:1229): pid=8757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:31.845000 audit[8757]: CRED_ACQ pid=8757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:31.859022 sshd-session[8757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:31.866237 kernel: audit: type=1103 audit(1768874731.845:1230): pid=8757 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:31.845000 audit[8757]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd635401b0 a2=3 a3=0 items=0 ppid=1 pid=8757 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:31.876262 kernel: audit: type=1006 audit(1768874731.845:1231): pid=8757 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=57 res=1 Jan 20 02:05:31.845000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:31.899495 systemd-logind[1624]: New session 57 of user core. Jan 20 02:05:31.948481 systemd[1]: Started session-57.scope - Session 57 of User core. Jan 20 02:05:31.965000 audit[8757]: USER_START pid=8757 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:31.987000 audit[8760]: CRED_ACQ pid=8760 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:32.543423 kubelet[3041]: E0120 02:05:32.538965 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:05:32.625209 sshd[8760]: Connection closed by 10.0.0.1 port 43634 Jan 20 02:05:32.620798 sshd-session[8757]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:32.647000 audit[8757]: USER_END pid=8757 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:32.647000 audit[8757]: CRED_DISP pid=8757 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:32.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-10.0.0.48:22-10.0.0.1:43634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:32.662166 systemd[1]: sshd@56-10.0.0.48:22-10.0.0.1:43634.service: Deactivated successfully. Jan 20 02:05:32.672140 systemd[1]: session-57.scope: Deactivated successfully. Jan 20 02:05:32.687555 systemd-logind[1624]: Session 57 logged out. Waiting for processes to exit. Jan 20 02:05:32.701433 systemd-logind[1624]: Removed session 57. Jan 20 02:05:33.554812 kubelet[3041]: E0120 02:05:33.550251 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:05:33.554812 kubelet[3041]: E0120 02:05:33.550462 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:05:35.576770 kubelet[3041]: E0120 02:05:35.573761 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:36.606933 kubelet[3041]: E0120 02:05:36.593330 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:05:37.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.48:22-10.0.0.1:57148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:37.761574 systemd[1]: Started sshd@57-10.0.0.48:22-10.0.0.1:57148.service - OpenSSH per-connection server daemon (10.0.0.1:57148). 
Jan 20 02:05:37.783021 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 02:05:37.783177 kernel: audit: type=1130 audit(1768874737.760:1237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.48:22-10.0.0.1:57148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:38.171000 audit[8802]: USER_ACCT pid=8802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:38.175542 sshd[8802]: Accepted publickey for core from 10.0.0.1 port 57148 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:05:38.177907 sshd-session[8802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:38.232961 systemd-logind[1624]: New session 58 of user core. 
Jan 20 02:05:38.249885 kernel: audit: type=1101 audit(1768874738.171:1238): pid=8802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:38.174000 audit[8802]: CRED_ACQ pid=8802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:38.325412 kernel: audit: type=1103 audit(1768874738.174:1239): pid=8802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:38.333396 systemd[1]: Started session-58.scope - Session 58 of User core. 
Jan 20 02:05:38.174000 audit[8802]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff63242c50 a2=3 a3=0 items=0 ppid=1 pid=8802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:38.446229 kernel: audit: type=1006 audit(1768874738.174:1240): pid=8802 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=58 res=1 Jan 20 02:05:38.446474 kernel: audit: type=1300 audit(1768874738.174:1240): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff63242c50 a2=3 a3=0 items=0 ppid=1 pid=8802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:38.174000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:38.480241 kernel: audit: type=1327 audit(1768874738.174:1240): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:38.447000 audit[8802]: USER_START pid=8802 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:38.482000 audit[8805]: CRED_ACQ pid=8805 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:38.607734 kernel: audit: type=1105 audit(1768874738.447:1241): pid=8802 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:38.607889 kernel: audit: type=1103 audit(1768874738.482:1242): pid=8805 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:39.157279 sshd[8805]: Connection closed by 10.0.0.1 port 57148 Jan 20 02:05:39.158718 sshd-session[8802]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:39.166000 audit[8802]: USER_END pid=8802 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:39.176846 systemd-logind[1624]: Session 58 logged out. Waiting for processes to exit. Jan 20 02:05:39.177687 systemd[1]: sshd@57-10.0.0.48:22-10.0.0.1:57148.service: Deactivated successfully. Jan 20 02:05:39.187309 systemd[1]: session-58.scope: Deactivated successfully. Jan 20 02:05:39.217413 systemd-logind[1624]: Removed session 58. 
Jan 20 02:05:39.224237 kernel: audit: type=1106 audit(1768874739.166:1243): pid=8802 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:39.168000 audit[8802]: CRED_DISP pid=8802 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:39.283001 kernel: audit: type=1104 audit(1768874739.168:1244): pid=8802 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:39.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-10.0.0.48:22-10.0.0.1:57148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:05:41.532421 kubelet[3041]: E0120 02:05:41.531147 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:05:41.540464 kubelet[3041]: E0120 02:05:41.539871 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:05:41.540464 kubelet[3041]: E0120 02:05:41.540158 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:05:44.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.48:22-10.0.0.1:57154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:44.307392 systemd[1]: Started sshd@58-10.0.0.48:22-10.0.0.1:57154.service - OpenSSH per-connection server daemon (10.0.0.1:57154). Jan 20 02:05:44.396590 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:05:44.396740 kernel: audit: type=1130 audit(1768874744.306:1246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.48:22-10.0.0.1:57154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:44.578516 kubelet[3041]: E0120 02:05:44.559297 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:05:44.578516 kubelet[3041]: E0120 02:05:44.559840 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:05:44.776000 audit[8819]: USER_ACCT pid=8819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:44.797666 sshd[8819]: Accepted publickey for core from 10.0.0.1 port 57154 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:05:44.798620 sshd-session[8819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:44.789000 audit[8819]: CRED_ACQ pid=8819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:44.838966 systemd-logind[1624]: New session 59 of user core. 
Jan 20 02:05:44.888135 kernel: audit: type=1101 audit(1768874744.776:1247): pid=8819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:44.888324 kernel: audit: type=1103 audit(1768874744.789:1248): pid=8819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:44.789000 audit[8819]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0236a820 a2=3 a3=0 items=0 ppid=1 pid=8819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:44.993314 kernel: audit: type=1006 audit(1768874744.789:1249): pid=8819 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=59 res=1 Jan 20 02:05:44.993509 kernel: audit: type=1300 audit(1768874744.789:1249): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0236a820 a2=3 a3=0 items=0 ppid=1 pid=8819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:44.993549 kernel: audit: type=1327 audit(1768874744.789:1249): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:44.789000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:44.996686 systemd[1]: Started session-59.scope - Session 59 of User core. 
Jan 20 02:05:45.056000 audit[8819]: USER_START pid=8819 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:45.081000 audit[8822]: CRED_ACQ pid=8822 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:45.207772 kernel: audit: type=1105 audit(1768874745.056:1250): pid=8819 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:45.207922 kernel: audit: type=1103 audit(1768874745.081:1251): pid=8822 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:46.324896 sshd[8822]: Connection closed by 10.0.0.1 port 57154 Jan 20 02:05:46.331755 sshd-session[8819]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:46.332000 audit[8819]: USER_END pid=8819 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:46.344416 systemd[1]: sshd@58-10.0.0.48:22-10.0.0.1:57154.service: Deactivated successfully. Jan 20 02:05:46.347543 systemd-logind[1624]: Session 59 logged out. 
Waiting for processes to exit. Jan 20 02:05:46.349744 systemd[1]: session-59.scope: Deactivated successfully. Jan 20 02:05:46.365295 systemd-logind[1624]: Removed session 59. Jan 20 02:05:46.432208 kernel: audit: type=1106 audit(1768874746.332:1252): pid=8819 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:46.432439 kernel: audit: type=1104 audit(1768874746.332:1253): pid=8819 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:46.332000 audit[8819]: CRED_DISP pid=8819 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:46.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-10.0.0.48:22-10.0.0.1:57154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:05:47.530722 kubelet[3041]: E0120 02:05:47.529631 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:05:47.571882 kubelet[3041]: E0120 02:05:47.571770 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:05:51.397526 systemd[1]: Started sshd@59-10.0.0.48:22-10.0.0.1:34924.service - OpenSSH per-connection server daemon (10.0.0.1:34924). Jan 20 02:05:51.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.48:22-10.0.0.1:34924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:51.462472 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:05:51.462644 kernel: audit: type=1130 audit(1768874751.398:1255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.48:22-10.0.0.1:34924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:05:51.582320 kubelet[3041]: E0120 02:05:51.582264 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:05:51.764000 audit[8835]: USER_ACCT pid=8835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:51.768694 sshd[8835]: Accepted publickey for core from 10.0.0.1 port 34924 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:05:51.771254 sshd-session[8835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:05:51.765000 audit[8835]: CRED_ACQ pid=8835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:51.875947 kernel: audit: type=1101 audit(1768874751.764:1256): pid=8835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:51.876150 kernel: audit: type=1103 audit(1768874751.765:1257): pid=8835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:51.876206 kernel: audit: type=1006 audit(1768874751.765:1258): pid=8835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=60 res=1 Jan 20 02:05:51.876243 kernel: audit: type=1300 audit(1768874751.765:1258): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0830a690 a2=3 a3=0 items=0 ppid=1 pid=8835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:51.765000 audit[8835]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0830a690 a2=3 a3=0 items=0 ppid=1 pid=8835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:05:51.833440 systemd-logind[1624]: New session 60 of user core. Jan 20 02:05:51.926042 kernel: audit: type=1327 audit(1768874751.765:1258): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:51.765000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:05:51.935853 systemd[1]: Started session-60.scope - Session 60 of User core. 
Jan 20 02:05:51.974000 audit[8835]: USER_START pid=8835 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:52.077908 kernel: audit: type=1105 audit(1768874751.974:1259): pid=8835 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:52.081000 audit[8838]: CRED_ACQ pid=8838 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:52.185032 kernel: audit: type=1103 audit(1768874752.081:1260): pid=8838 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:53.562974 kubelet[3041]: E0120 02:05:53.546698 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:05:53.711639 sshd[8838]: Connection closed by 10.0.0.1 port 34924 Jan 20 02:05:53.727121 
sshd-session[8835]: pam_unix(sshd:session): session closed for user core Jan 20 02:05:53.895512 kernel: audit: type=1106 audit(1768874753.794:1261): pid=8835 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:53.794000 audit[8835]: USER_END pid=8835 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:53.921029 systemd[1]: sshd@59-10.0.0.48:22-10.0.0.1:34924.service: Deactivated successfully. Jan 20 02:05:54.035546 kernel: audit: type=1104 audit(1768874753.795:1262): pid=8835 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:53.795000 audit[8835]: CRED_DISP pid=8835 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:05:53.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-10.0.0.48:22-10.0.0.1:34924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:54.007408 systemd[1]: session-60.scope: Deactivated successfully. Jan 20 02:05:54.119303 systemd-logind[1624]: Session 60 logged out. Waiting for processes to exit. Jan 20 02:05:54.151916 systemd-logind[1624]: Removed session 60. 
Jan 20 02:05:55.646162 kubelet[3041]: E0120 02:05:55.638611 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:05:56.564162 kubelet[3041]: E0120 02:05:56.545013 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:05:58.540740 kubelet[3041]: E0120 02:05:58.540191 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:05:58.617452 kubelet[3041]: E0120 02:05:58.614885 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:05:58.903752 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:05:58.919737 kernel: audit: type=1130 audit(1768874758.877:1264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.48:22-10.0.0.1:57998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:58.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.48:22-10.0.0.1:57998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:05:58.878118 systemd[1]: Started sshd@60-10.0.0.48:22-10.0.0.1:57998.service - OpenSSH per-connection server daemon (10.0.0.1:57998). 
Jan 20 02:05:59.604918 kubelet[3041]: E0120 02:05:59.604581 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:06:00.182842 sshd[8854]: Accepted publickey for core from 10.0.0.1 port 57998 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:06:00.182000 audit[8854]: USER_ACCT pid=8854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:00.204930 sshd-session[8854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:00.203000 audit[8854]: CRED_ACQ pid=8854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:00.273867 kernel: audit: type=1101 audit(1768874760.182:1265): pid=8854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:00.273981 kernel: audit: type=1103 audit(1768874760.203:1266): pid=8854 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:00.260179 systemd-logind[1624]: New session 61 of user core. Jan 20 02:06:00.336681 kernel: audit: type=1006 audit(1768874760.203:1267): pid=8854 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=61 res=1 Jan 20 02:06:00.387296 kernel: audit: type=1300 audit(1768874760.203:1267): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc86705d00 a2=3 a3=0 items=0 ppid=1 pid=8854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:06:00.203000 audit[8854]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc86705d00 a2=3 a3=0 items=0 ppid=1 pid=8854 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:06:00.203000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:06:00.447609 systemd[1]: Started session-61.scope - Session 61 of User core. 
Jan 20 02:06:00.484587 kernel: audit: type=1327 audit(1768874760.203:1267): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:06:00.537000 audit[8854]: USER_START pid=8854 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:00.677580 kernel: audit: type=1105 audit(1768874760.537:1268): pid=8854 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:00.677806 kernel: audit: type=1103 audit(1768874760.573:1269): pid=8857 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:00.573000 audit[8857]: CRED_ACQ pid=8857 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:01.538190 kubelet[3041]: E0120 02:06:01.533496 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:02.096685 sshd[8857]: Connection closed by 10.0.0.1 port 57998 Jan 20 02:06:02.093487 sshd-session[8854]: pam_unix(sshd:session): session closed for user core Jan 20 02:06:02.095000 audit[8854]: USER_END pid=8854 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:02.151919 systemd[1]: sshd@60-10.0.0.48:22-10.0.0.1:57998.service: Deactivated successfully. Jan 20 02:06:02.210265 kernel: audit: type=1106 audit(1768874762.095:1270): pid=8854 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:02.210400 kernel: audit: type=1104 audit(1768874762.095:1271): pid=8854 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:02.095000 audit[8854]: CRED_DISP pid=8854 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:02.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-10.0.0.48:22-10.0.0.1:57998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:06:02.189877 systemd[1]: session-61.scope: Deactivated successfully. Jan 20 02:06:02.230091 systemd-logind[1624]: Session 61 logged out. Waiting for processes to exit. Jan 20 02:06:02.265647 systemd-logind[1624]: Removed session 61. 
Jan 20 02:06:05.743618 kubelet[3041]: E0120 02:06:05.743524 3041 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.217s" Jan 20 02:06:05.785702 kubelet[3041]: E0120 02:06:05.785566 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:07.058294 kubelet[3041]: E0120 02:06:07.057603 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-z9lbz" podUID="e90a4c3c-c5d4-4dd1-99b0-686f2634a4b0" Jan 20 02:06:07.095494 kubelet[3041]: E0120 02:06:07.091596 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9gv2m" podUID="ac1c9092-8cef-4868-9089-0927692efc39" Jan 20 02:06:07.221665 systemd[1]: 
Started sshd@61-10.0.0.48:22-10.0.0.1:43504.service - OpenSSH per-connection server daemon (10.0.0.1:43504). Jan 20 02:06:07.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.48:22-10.0.0.1:43504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:06:07.270574 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:06:07.270721 kernel: audit: type=1130 audit(1768874767.221:1273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.48:22-10.0.0.1:43504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 02:06:07.597265 kubelet[3041]: E0120 02:06:07.595282 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6" Jan 20 02:06:07.713091 kubelet[3041]: E0120 02:06:07.595924 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cc9877c8-b9ftn" podUID="0ce5e731-d9ff-4094-add4-a475d64c6d24" Jan 20 02:06:08.581000 audit[8895]: USER_ACCT pid=8895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:08.599157 sshd[8895]: Accepted publickey for core from 10.0.0.1 port 43504 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc Jan 20 02:06:08.625877 sshd-session[8895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 02:06:08.667435 kernel: audit: type=1101 audit(1768874768.581:1274): pid=8895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:08.615000 audit[8895]: CRED_ACQ pid=8895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:08.724728 kernel: audit: type=1103 audit(1768874768.615:1275): pid=8895 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:08.784807 kernel: audit: type=1006 audit(1768874768.615:1276): pid=8895 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=62 res=1 Jan 20 02:06:08.798280 kernel: audit: 
type=1300 audit(1768874768.615:1276): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff44ad0680 a2=3 a3=0 items=0 ppid=1 pid=8895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:06:08.615000 audit[8895]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff44ad0680 a2=3 a3=0 items=0 ppid=1 pid=8895 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 02:06:08.822019 systemd-logind[1624]: New session 62 of user core. Jan 20 02:06:08.902999 kernel: audit: type=1327 audit(1768874768.615:1276): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:06:08.615000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 02:06:08.928866 systemd[1]: Started session-62.scope - Session 62 of User core. 
Jan 20 02:06:08.965000 audit[8895]: USER_START pid=8895 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:09.010422 kernel: audit: type=1105 audit(1768874768.965:1277): pid=8895 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:08.987000 audit[8899]: CRED_ACQ pid=8899 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:09.071592 kernel: audit: type=1103 audit(1768874768.987:1278): pid=8899 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:10.587876 kubelet[3041]: E0120 02:06:10.584856 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6576c69f97-s8m7m" podUID="15d966be-bae7-42a3-83b7-ced10b64bcb2" Jan 20 02:06:11.268269 sshd[8899]: Connection closed by 10.0.0.1 port 43504 Jan 20 02:06:11.297939 
sshd-session[8895]: pam_unix(sshd:session): session closed for user core Jan 20 02:06:11.328000 audit[8895]: USER_END pid=8895 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:11.407551 kernel: audit: type=1106 audit(1768874771.328:1279): pid=8895 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:11.328000 audit[8895]: CRED_DISP pid=8895 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:11.423938 systemd[1]: sshd@61-10.0.0.48:22-10.0.0.1:43504.service: Deactivated successfully. Jan 20 02:06:11.447879 systemd[1]: session-62.scope: Deactivated successfully. Jan 20 02:06:11.456694 systemd-logind[1624]: Session 62 logged out. Waiting for processes to exit. Jan 20 02:06:11.466728 kernel: audit: type=1104 audit(1768874771.328:1280): pid=8895 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jan 20 02:06:11.458430 systemd-logind[1624]: Removed session 62. Jan 20 02:06:11.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-10.0.0.48:22-10.0.0.1:43504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 02:06:13.529425 kubelet[3041]: E0120 02:06:13.528992 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:13.562544 kubelet[3041]: E0120 02:06:13.562413 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cb6ddc686-fcv7l" podUID="ebbd5234-bb75-4e8e-91be-76d2ac5f3ae5" Jan 20 02:06:13.577266 kubelet[3041]: E0120 02:06:13.576035 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776d4dc5d4-8w9l5" podUID="57304ae8-4142-4837-ab19-941e654eb081" Jan 20 02:06:15.546852 kubelet[3041]: E0120 02:06:15.546805 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 20 02:06:16.246644 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 02:06:16.246818 kernel: audit: type=1130 audit(1768874776.233:1282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.48:22-10.0.0.1:58372 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:06:16.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.48:22-10.0.0.1:58372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:06:16.234706 systemd[1]: Started sshd@62-10.0.0.48:22-10.0.0.1:58372.service - OpenSSH per-connection server daemon (10.0.0.1:58372).
Jan 20 02:06:16.560285 kubelet[3041]: E0120 02:06:16.540548 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:06:16.712000 audit[8913]: USER_ACCT pid=8913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:16.718170 sshd[8913]: Accepted publickey for core from 10.0.0.1 port 58372 ssh2: RSA SHA256:55ed14JnR+DgK03phSOwTmwHo5D9+7tUSQY73fphdAc
Jan 20 02:06:16.732492 sshd-session[8913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 02:06:16.771708 kernel: audit: type=1101 audit(1768874776.712:1283): pid=8913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:16.730000 audit[8913]: CRED_ACQ pid=8913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:16.794243 systemd-logind[1624]: New session 63 of user core.
Jan 20 02:06:16.883479 kernel: audit: type=1103 audit(1768874776.730:1284): pid=8913 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:16.883727 kernel: audit: type=1006 audit(1768874776.730:1285): pid=8913 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=63 res=1
Jan 20 02:06:16.943455 kernel: audit: type=1300 audit(1768874776.730:1285): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc77b6bf60 a2=3 a3=0 items=0 ppid=1 pid=8913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:06:16.730000 audit[8913]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc77b6bf60 a2=3 a3=0 items=0 ppid=1 pid=8913 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 20 02:06:16.966207 systemd[1]: Started session-63.scope - Session 63 of User core.
Jan 20 02:06:16.730000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:06:17.034036 kernel: audit: type=1327 audit(1768874776.730:1285): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 20 02:06:17.034225 kernel: audit: type=1105 audit(1768874777.009:1286): pid=8913 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:17.009000 audit[8913]: USER_START pid=8913 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:17.031000 audit[8916]: CRED_ACQ pid=8916 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:17.194449 kernel: audit: type=1103 audit(1768874777.031:1287): pid=8916 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:17.991042 sshd[8916]: Connection closed by 10.0.0.1 port 58372
Jan 20 02:06:17.989475 sshd-session[8913]: pam_unix(sshd:session): session closed for user core
Jan 20 02:06:17.990000 audit[8913]: USER_END pid=8913 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:18.008952 systemd-logind[1624]: Session 63 logged out. Waiting for processes to exit.
Jan 20 02:06:18.009130 systemd[1]: sshd@62-10.0.0.48:22-10.0.0.1:58372.service: Deactivated successfully.
Jan 20 02:06:18.105496 kernel: audit: type=1106 audit(1768874777.990:1288): pid=8913 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:18.105673 kernel: audit: type=1104 audit(1768874777.991:1289): pid=8913 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:17.991000 audit[8913]: CRED_DISP pid=8913 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jan 20 02:06:18.045309 systemd[1]: session-63.scope: Deactivated successfully.
Jan 20 02:06:18.071099 systemd-logind[1624]: Removed session 63.
Jan 20 02:06:18.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-10.0.0.48:22-10.0.0.1:58372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 20 02:06:18.533308 kubelet[3041]: E0120 02:06:18.533230 3041 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 20 02:06:19.762951 kubelet[3041]: E0120 02:06:19.761464 3041 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-pxsrr" podUID="738fa74e-ddb6-4c59-8db5-d8c8658e06b6"