Apr 16 02:35:56.769299 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:39:17 -00 2026
Apr 16 02:35:56.769315 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 02:35:56.769322 kernel: BIOS-provided physical RAM map:
Apr 16 02:35:56.769329 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 02:35:56.769333 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 16 02:35:56.769337 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 16 02:35:56.769342 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 16 02:35:56.769347 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 16 02:35:56.769351 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 16 02:35:56.769356 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 16 02:35:56.769360 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 16 02:35:56.769364 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 16 02:35:56.769370 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 16 02:35:56.769374 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 16 02:35:56.769380 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 16 02:35:56.769385 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 16 02:35:56.769389 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 16 02:35:56.769395 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 16 02:35:56.769416 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 16 02:35:56.769421 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 16 02:35:56.769426 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 16 02:35:56.769430 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 16 02:35:56.769435 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 16 02:35:56.769439 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 02:35:56.769444 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 16 02:35:56.769449 kernel: NX (Execute Disable) protection: active
Apr 16 02:35:56.769453 kernel: APIC: Static calls initialized
Apr 16 02:35:56.769458 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 16 02:35:56.769464 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 16 02:35:56.769469 kernel: extended physical RAM map:
Apr 16 02:35:56.769473 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 16 02:35:56.769478 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 16 02:35:56.769483 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 16 02:35:56.769488 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 16 02:35:56.769492 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 16 02:35:56.769497 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 16 02:35:56.769501 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 16 02:35:56.769506 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 16 02:35:56.769510 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 16 02:35:56.769517 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 16 02:35:56.769524 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 16 02:35:56.769529 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 16 02:35:56.769533 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 16 02:35:56.769538 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 16 02:35:56.769544 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 16 02:35:56.769549 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 16 02:35:56.769554 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 16 02:35:56.769559 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 16 02:35:56.769564 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 16 02:35:56.769569 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 16 02:35:56.769573 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 16 02:35:56.769578 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 16 02:35:56.769583 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 16 02:35:56.769588 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 16 02:35:56.769593 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 02:35:56.769599 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 16 02:35:56.769604 kernel: efi: EFI v2.7 by EDK II
Apr 16 02:35:56.769609 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 16 02:35:56.769614 kernel: random: crng init done
Apr 16 02:35:56.769619 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 16 02:35:56.769624 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 16 02:35:56.769629 kernel: secureboot: Secure boot disabled
Apr 16 02:35:56.769634 kernel: SMBIOS 2.8 present.
Apr 16 02:35:56.769639 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 16 02:35:56.769644 kernel: DMI: Memory slots populated: 1/1
Apr 16 02:35:56.769649 kernel: Hypervisor detected: KVM
Apr 16 02:35:56.769654 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 16 02:35:56.769659 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 02:35:56.769664 kernel: kvm-clock: using sched offset of 4058255824 cycles
Apr 16 02:35:56.769670 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 02:35:56.769675 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 02:35:56.769680 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 02:35:56.769686 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 02:35:56.769690 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 16 02:35:56.769696 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 16 02:35:56.769701 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 02:35:56.769707 kernel: Using GB pages for direct mapping
Apr 16 02:35:56.769712 kernel: ACPI: Early table checksum verification disabled
Apr 16 02:35:56.769717 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 16 02:35:56.769722 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 16 02:35:56.769728 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:35:56.769733 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:35:56.769738 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 16 02:35:56.769743 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:35:56.769748 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:35:56.769754 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:35:56.769759 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 02:35:56.769764 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 16 02:35:56.769769 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 16 02:35:56.769774 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 16 02:35:56.769779 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 16 02:35:56.769784 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 16 02:35:56.769789 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 16 02:35:56.769794 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 16 02:35:56.769800 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 16 02:35:56.769805 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 16 02:35:56.769810 kernel: No NUMA configuration found
Apr 16 02:35:56.769815 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 16 02:35:56.769820 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 16 02:35:56.769825 kernel: Zone ranges:
Apr 16 02:35:56.769830 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 02:35:56.769835 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 16 02:35:56.769840 kernel: Normal empty
Apr 16 02:35:56.769845 kernel: Device empty
Apr 16 02:35:56.769851 kernel: Movable zone start for each node
Apr 16 02:35:56.769856 kernel: Early memory node ranges
Apr 16 02:35:56.769861 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 16 02:35:56.769866 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 16 02:35:56.769871 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 16 02:35:56.769876 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 16 02:35:56.769881 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 16 02:35:56.769886 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 16 02:35:56.769891 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 16 02:35:56.769897 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 16 02:35:56.769902 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 16 02:35:56.769907 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 02:35:56.769930 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 16 02:35:56.769936 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 16 02:35:56.769945 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 02:35:56.769952 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 16 02:35:56.769958 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 16 02:35:56.769963 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 16 02:35:56.769969 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 16 02:35:56.769974 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 16 02:35:56.769980 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 02:35:56.769987 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 02:35:56.769992 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 02:35:56.769998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 02:35:56.770004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 02:35:56.770009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 02:35:56.770016 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 02:35:56.770021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 02:35:56.770027 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 02:35:56.770032 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 02:35:56.770038 kernel: TSC deadline timer available
Apr 16 02:35:56.770043 kernel: CPU topo: Max. logical packages: 1
Apr 16 02:35:56.770049 kernel: CPU topo: Max. logical dies: 1
Apr 16 02:35:56.770054 kernel: CPU topo: Max. dies per package: 1
Apr 16 02:35:56.770060 kernel: CPU topo: Max. threads per core: 1
Apr 16 02:35:56.770066 kernel: CPU topo: Num. cores per package: 4
Apr 16 02:35:56.770072 kernel: CPU topo: Num. threads per package: 4
Apr 16 02:35:56.770078 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 16 02:35:56.770083 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 02:35:56.770089 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 02:35:56.770094 kernel: kvm-guest: setup PV sched yield
Apr 16 02:35:56.770100 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 16 02:35:56.770106 kernel: Booting paravirtualized kernel on KVM
Apr 16 02:35:56.770111 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 02:35:56.770117 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 02:35:56.770124 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 16 02:35:56.770129 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 16 02:35:56.770135 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 02:35:56.770140 kernel: kvm-guest: PV spinlocks enabled
Apr 16 02:35:56.770146 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 02:35:56.770152 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 02:35:56.770158 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 02:35:56.770164 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 02:35:56.770170 kernel: Fallback order for Node 0: 0
Apr 16 02:35:56.770176 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 16 02:35:56.770182 kernel: Policy zone: DMA32
Apr 16 02:35:56.770187 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 02:35:56.770193 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 02:35:56.770198 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 16 02:35:56.770204 kernel: ftrace: allocated 157 pages with 5 groups
Apr 16 02:35:56.770210 kernel: Dynamic Preempt: voluntary
Apr 16 02:35:56.770215 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 02:35:56.770225 kernel: rcu: RCU event tracing is enabled.
Apr 16 02:35:56.770231 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 02:35:56.770236 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 02:35:56.770242 kernel: Rude variant of Tasks RCU enabled.
Apr 16 02:35:56.770248 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 02:35:56.770253 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 02:35:56.770259 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 02:35:56.770264 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 02:35:56.770270 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 02:35:56.770277 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 02:35:56.770283 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 02:35:56.770288 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 02:35:56.770294 kernel: Console: colour dummy device 80x25
Apr 16 02:35:56.770299 kernel: printk: legacy console [ttyS0] enabled
Apr 16 02:35:56.770305 kernel: ACPI: Core revision 20240827
Apr 16 02:35:56.770311 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 02:35:56.770317 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 02:35:56.770322 kernel: x2apic enabled
Apr 16 02:35:56.770329 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 02:35:56.770335 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 02:35:56.770341 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 02:35:56.770346 kernel: kvm-guest: setup PV IPIs
Apr 16 02:35:56.770352 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 02:35:56.770358 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 02:35:56.770364 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 02:35:56.770369 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 02:35:56.770375 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 02:35:56.770382 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 02:35:56.770387 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 02:35:56.770393 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 02:35:56.770410 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 02:35:56.770416 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 02:35:56.770421 kernel: RETBleed: Vulnerable
Apr 16 02:35:56.770427 kernel: Speculative Store Bypass: Vulnerable
Apr 16 02:35:56.770432 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 02:35:56.770439 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 02:35:56.770445 kernel: active return thunk: its_return_thunk
Apr 16 02:35:56.770450 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 02:35:56.770456 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 02:35:56.770462 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 02:35:56.770467 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 02:35:56.770473 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 02:35:56.770478 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 02:35:56.770484 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 02:35:56.770491 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 02:35:56.770496 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 02:35:56.770502 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 02:35:56.770507 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 02:35:56.770513 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 02:35:56.770518 kernel: Freeing SMP alternatives memory: 32K
Apr 16 02:35:56.770524 kernel: pid_max: default: 32768 minimum: 301
Apr 16 02:35:56.770530 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 02:35:56.770535 kernel: landlock: Up and running.
Apr 16 02:35:56.770542 kernel: SELinux: Initializing.
Apr 16 02:35:56.770547 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 02:35:56.770553 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 02:35:56.770559 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 02:35:56.770565 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 02:35:56.770570 kernel: signal: max sigframe size: 3632
Apr 16 02:35:56.770576 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 02:35:56.770581 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 02:35:56.770587 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 02:35:56.770594 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 02:35:56.770599 kernel: smp: Bringing up secondary CPUs ...
Apr 16 02:35:56.770605 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 02:35:56.770610 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 02:35:56.770616 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 02:35:56.770622 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 02:35:56.770628 kernel: Memory: 2374700K/2565800K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 185212K reserved, 0K cma-reserved)
Apr 16 02:35:56.770633 kernel: devtmpfs: initialized
Apr 16 02:35:56.770639 kernel: x86/mm: Memory block size: 128MB
Apr 16 02:35:56.770646 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 16 02:35:56.770651 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 16 02:35:56.770657 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 16 02:35:56.770662 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 16 02:35:56.770668 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 16 02:35:56.770674 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 16 02:35:56.770679 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 02:35:56.770685 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 02:35:56.770690 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 02:35:56.770697 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 02:35:56.770702 kernel: audit: initializing netlink subsys (disabled)
Apr 16 02:35:56.770708 kernel: audit: type=2000 audit(1776306954.858:1): state=initialized audit_enabled=0 res=1
Apr 16 02:35:56.770713 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 02:35:56.770719 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 02:35:56.770725 kernel: cpuidle: using governor menu
Apr 16 02:35:56.770730 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 16 02:35:56.770736 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 02:35:56.770741 kernel: dca service started, version 1.12.1
Apr 16 02:35:56.770748 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 16 02:35:56.770767 kernel: PCI: Using configuration type 1 for base access
Apr 16 02:35:56.770772 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 02:35:56.770778 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 02:35:56.770784 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 02:35:56.770789 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 02:35:56.770795 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 02:35:56.770800 kernel: ACPI: Added _OSI(Module Device)
Apr 16 02:35:56.770806 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 02:35:56.770813 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 02:35:56.770818 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 02:35:56.770824 kernel: ACPI: Interpreter enabled
Apr 16 02:35:56.770830 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 02:35:56.770835 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 02:35:56.770841 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 02:35:56.770846 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 02:35:56.770852 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 02:35:56.770857 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 02:35:56.770979 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 02:35:56.771039 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 02:35:56.771092 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 02:35:56.771099 kernel: PCI host bridge to bus 0000:00
Apr 16 02:35:56.771157 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 02:35:56.771205 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 02:35:56.771253 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 02:35:56.771299 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 16 02:35:56.771344 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 16 02:35:56.771389 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 16 02:35:56.771466 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 02:35:56.771534 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 16 02:35:56.771593 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 16 02:35:56.771651 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 16 02:35:56.771703 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 16 02:35:56.771755 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 16 02:35:56.771829 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 02:35:56.772054 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 16 02:35:56.772113 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 16 02:35:56.772168 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 16 02:35:56.772220 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 16 02:35:56.772277 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 16 02:35:56.772330 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 16 02:35:56.772383 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 16 02:35:56.772452 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 16 02:35:56.772512 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 16 02:35:56.772566 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 16 02:35:56.772687 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 16 02:35:56.772742 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 16 02:35:56.772827 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 16 02:35:56.772907 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 16 02:35:56.772984 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 02:35:56.773041 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 16 02:35:56.773096 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 16 02:35:56.773147 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 16 02:35:56.773207 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 16 02:35:56.773258 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 16 02:35:56.773265 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 02:35:56.773271 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 02:35:56.773277 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 02:35:56.773284 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 02:35:56.773290 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 02:35:56.773296 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 02:35:56.773301 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 02:35:56.773307 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 02:35:56.773312 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 02:35:56.773318 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 02:35:56.773323 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 02:35:56.773329 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 02:35:56.773336 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 02:35:56.773341 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 02:35:56.773347 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 02:35:56.773352 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 02:35:56.773358 kernel: iommu: Default domain type: Translated
Apr 16 02:35:56.773364 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 02:35:56.773369 kernel: efivars: Registered efivars operations
Apr 16 02:35:56.773375 kernel: PCI: Using ACPI for IRQ routing
Apr 16 02:35:56.773381 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 02:35:56.773387 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 16 02:35:56.773393 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 16 02:35:56.773413 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 16 02:35:56.773419 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 16 02:35:56.773424 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 16 02:35:56.773429 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 16 02:35:56.773435 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 16 02:35:56.773440 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 16 02:35:56.773493 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 02:35:56.773546 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 02:35:56.773597 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 02:35:56.773604 kernel: vgaarb: loaded
Apr 16 02:35:56.773610 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 02:35:56.773615 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 02:35:56.773621 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 02:35:56.773627 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 02:35:56.773632 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 02:35:56.773640 kernel: pnp: PnP ACPI init
Apr 16 02:35:56.773696 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 16 02:35:56.773704 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 02:35:56.773710 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 02:35:56.773726 kernel: NET: Registered PF_INET protocol family
Apr 16 02:35:56.773733 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 02:35:56.773738 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 02:35:56.773755 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 02:35:56.773763 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 02:35:56.773777 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 02:35:56.773791 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 02:35:56.773797 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 02:35:56.773803 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 02:35:56.773809 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 02:35:56.773815 kernel: NET: Registered PF_XDP protocol family
Apr 16 02:35:56.773880 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 16 02:35:56.773955 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 16 02:35:56.774008 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 02:35:56.774055 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 02:35:56.774102 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 02:35:56.774151 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 16 02:35:56.774196 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 16 02:35:56.774242 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 16 02:35:56.774250 kernel: PCI: CLS 0 bytes, default 64
Apr 16 02:35:56.774256 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 02:35:56.774263 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 02:35:56.774269 kernel: Initialise system trusted keyrings
Apr 16 02:35:56.774276 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 02:35:56.774282 kernel: Key type asymmetric registered
Apr 16 02:35:56.774288 kernel: Asymmetric key parser 'x509' registered
Apr 16 02:35:56.774295 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 16 02:35:56.774301 kernel: io scheduler mq-deadline registered
Apr 16 02:35:56.774307 kernel: io scheduler kyber registered
Apr 16 02:35:56.774313 kernel: io scheduler bfq registered
Apr 16 02:35:56.774319 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 02:35:56.774325 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 02:35:56.774331 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 02:35:56.774337 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 02:35:56.774343 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 02:35:56.774350 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 02:35:56.774356 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 02:35:56.774362 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 02:35:56.774367 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 02:35:56.774374 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 02:35:56.774443 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 02:35:56.774505 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 02:35:56.774553 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T02:35:56 UTC (1776306956)
Apr 16 02:35:56.774603 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 16 02:35:56.774610 kernel: intel_pstate: CPU model not supported
Apr 16 02:35:56.774616 kernel: efifb: probing for efifb
Apr 16 02:35:56.774622 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 16 02:35:56.774628 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 16 02:35:56.774634 kernel: efifb: scrolling: redraw
Apr 16 02:35:56.774640 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 16 02:35:56.774646 kernel: Console: switching to colour frame buffer device 160x50
Apr 16 02:35:56.774652 kernel: fb0: EFI VGA frame buffer device
Apr 16 02:35:56.774659 kernel: pstore: Using crash dump compression: deflate
Apr 16 02:35:56.774665 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 16 02:35:56.774671 kernel: NET: Registered PF_INET6 protocol family
Apr 16 02:35:56.774677 kernel: Segment Routing with IPv6
Apr 16 02:35:56.774682 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 02:35:56.774688 kernel: NET: Registered PF_PACKET protocol family
Apr 16 02:35:56.774694 kernel: Key type dns_resolver registered
Apr 16 02:35:56.774700 kernel: IPI shorthand broadcast: enabled
Apr 16 02:35:56.774706 kernel: sched_clock: Marking stable (2605007380, 274018769)->(2926327732, -47301583)
Apr 16 02:35:56.774712 kernel: registered taskstats version 1
Apr 16 02:35:56.774719 kernel: Loading compiled-in X.509 certificates
Apr 16 02:35:56.774725 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 25c2b596b475a2918f2ba6f953b0a89c09a0d0ab'
Apr 16
02:35:56.774731 kernel: Demotion targets for Node 0: null Apr 16 02:35:56.774737 kernel: Key type .fscrypt registered Apr 16 02:35:56.774742 kernel: Key type fscrypt-provisioning registered Apr 16 02:35:56.774748 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 16 02:35:56.774754 kernel: ima: Allocated hash algorithm: sha1 Apr 16 02:35:56.774760 kernel: ima: No architecture policies found Apr 16 02:35:56.774766 kernel: clk: Disabling unused clocks Apr 16 02:35:56.774773 kernel: Warning: unable to open an initial console. Apr 16 02:35:56.774779 kernel: Freeing unused kernel image (initmem) memory: 46224K Apr 16 02:35:56.774784 kernel: Write protecting the kernel read-only data: 40960k Apr 16 02:35:56.774790 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 16 02:35:56.774796 kernel: Run /init as init process Apr 16 02:35:56.774802 kernel: with arguments: Apr 16 02:35:56.774808 kernel: /init Apr 16 02:35:56.774813 kernel: with environment: Apr 16 02:35:56.774819 kernel: HOME=/ Apr 16 02:35:56.774826 kernel: TERM=linux Apr 16 02:35:56.774833 systemd[1]: Successfully made /usr/ read-only. Apr 16 02:35:56.774841 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 02:35:56.774848 systemd[1]: Detected virtualization kvm. Apr 16 02:35:56.774854 systemd[1]: Detected architecture x86-64. Apr 16 02:35:56.774860 systemd[1]: Running in initrd. Apr 16 02:35:56.774866 systemd[1]: No hostname configured, using default hostname. Apr 16 02:35:56.774874 systemd[1]: Hostname set to . Apr 16 02:35:56.774880 systemd[1]: Initializing machine ID from VM UUID. Apr 16 02:35:56.774886 systemd[1]: Queued start job for default target initrd.target. 
Apr 16 02:35:56.774892 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 02:35:56.774898 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 02:35:56.774905 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 16 02:35:56.774929 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 02:35:56.774936 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 16 02:35:56.774944 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 16 02:35:56.774951 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 16 02:35:56.774957 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 16 02:35:56.774963 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 02:35:56.774971 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 02:35:56.774977 systemd[1]: Reached target paths.target - Path Units. Apr 16 02:35:56.774983 systemd[1]: Reached target slices.target - Slice Units. Apr 16 02:35:56.774990 systemd[1]: Reached target swap.target - Swaps. Apr 16 02:35:56.774997 systemd[1]: Reached target timers.target - Timer Units. Apr 16 02:35:56.775003 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 02:35:56.775009 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 02:35:56.775015 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 16 02:35:56.775021 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Apr 16 02:35:56.775027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 02:35:56.775033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 02:35:56.775039 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 02:35:56.775046 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 02:35:56.775052 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 02:35:56.775058 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 02:35:56.775064 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 02:35:56.775071 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 16 02:35:56.775077 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 02:35:56.775083 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 02:35:56.775089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 02:35:56.775096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:35:56.775102 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 02:35:56.775109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 02:35:56.775129 systemd-journald[200]: Collecting audit messages is disabled. Apr 16 02:35:56.775145 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 02:35:56.775152 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 02:35:56.775159 systemd-journald[200]: Journal started Apr 16 02:35:56.775176 systemd-journald[200]: Runtime Journal (/run/log/journal/cac61ac62076472f8ee56a6d4c9fbf97) is 6M, max 48.1M, 42.1M free. 
Apr 16 02:35:56.772880 systemd-modules-load[202]: Inserted module 'overlay' Apr 16 02:35:56.778935 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 02:35:56.779465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 02:35:56.786836 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 02:35:56.790007 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 02:35:56.793725 systemd-tmpfiles[214]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 16 02:35:56.796473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 02:35:56.797463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:35:56.798611 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 02:35:56.803276 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 02:35:56.810940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 16 02:35:56.813224 systemd-modules-load[202]: Inserted module 'br_netfilter' Apr 16 02:35:56.813758 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 02:35:56.814244 kernel: Bridge firewalling registered Apr 16 02:35:56.814819 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 02:35:56.824688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 02:35:56.828578 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 16 02:35:56.838281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 16 02:35:56.840191 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 02:35:56.850658 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae Apr 16 02:35:56.864524 systemd-resolved[242]: Positive Trust Anchors: Apr 16 02:35:56.864543 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 02:35:56.864568 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 02:35:56.866348 systemd-resolved[242]: Defaulting to hostname 'linux'. Apr 16 02:35:56.866999 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 02:35:56.868378 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 02:35:56.930964 kernel: SCSI subsystem initialized Apr 16 02:35:56.938952 kernel: Loading iSCSI transport class v2.0-870. 
Apr 16 02:35:56.949040 kernel: iscsi: registered transport (tcp) Apr 16 02:35:56.966210 kernel: iscsi: registered transport (qla4xxx) Apr 16 02:35:56.966239 kernel: QLogic iSCSI HBA Driver Apr 16 02:35:56.980620 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 02:35:56.997300 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 02:35:57.000063 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 02:35:57.032642 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 02:35:57.035071 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 02:35:57.085943 kernel: raid6: avx512x4 gen() 45971 MB/s Apr 16 02:35:57.102939 kernel: raid6: avx512x2 gen() 44833 MB/s Apr 16 02:35:57.119940 kernel: raid6: avx512x1 gen() 44947 MB/s Apr 16 02:35:57.136934 kernel: raid6: avx2x4 gen() 37754 MB/s Apr 16 02:35:57.153937 kernel: raid6: avx2x2 gen() 37762 MB/s Apr 16 02:35:57.171447 kernel: raid6: avx2x1 gen() 28860 MB/s Apr 16 02:35:57.171468 kernel: raid6: using algorithm avx512x4 gen() 45971 MB/s Apr 16 02:35:57.189435 kernel: raid6: .... xor() 10294 MB/s, rmw enabled Apr 16 02:35:57.189448 kernel: raid6: using avx512x2 recovery algorithm Apr 16 02:35:57.206950 kernel: xor: automatically using best checksumming function avx Apr 16 02:35:57.336175 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 02:35:57.342097 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 02:35:57.344650 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 02:35:57.370690 systemd-udevd[453]: Using default interface naming scheme 'v255'. Apr 16 02:35:57.373952 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 02:35:57.377895 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 16 02:35:57.398747 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Apr 16 02:35:57.417724 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 02:35:57.420183 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 02:35:57.454870 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 02:35:57.456604 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 02:35:57.479941 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 16 02:35:57.484391 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 16 02:35:57.490948 kernel: cryptd: max_cpu_qlen set to 1000 Apr 16 02:35:57.494735 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 02:35:57.494761 kernel: GPT:9289727 != 19775487 Apr 16 02:35:57.494770 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 02:35:57.494778 kernel: GPT:9289727 != 19775487 Apr 16 02:35:57.495287 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 02:35:57.496787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:35:57.502951 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 16 02:35:57.503180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 02:35:57.503268 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:35:57.507713 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:35:57.510896 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:35:57.517949 kernel: libata version 3.00 loaded. Apr 16 02:35:57.530972 kernel: AES CTR mode by8 optimization enabled Apr 16 02:35:57.531888 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 16 02:35:57.538395 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 16 02:35:57.546291 kernel: ahci 0000:00:1f.2: version 3.0 Apr 16 02:35:57.546436 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 16 02:35:57.550497 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 16 02:35:57.550605 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 16 02:35:57.550673 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 16 02:35:57.556942 kernel: scsi host0: ahci Apr 16 02:35:57.558090 kernel: scsi host1: ahci Apr 16 02:35:57.558133 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 16 02:35:57.572012 kernel: scsi host2: ahci Apr 16 02:35:57.572120 kernel: scsi host3: ahci Apr 16 02:35:57.572190 kernel: scsi host4: ahci Apr 16 02:35:57.572256 kernel: scsi host5: ahci Apr 16 02:35:57.572323 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Apr 16 02:35:57.572331 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Apr 16 02:35:57.572339 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Apr 16 02:35:57.572346 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Apr 16 02:35:57.572355 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Apr 16 02:35:57.572362 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Apr 16 02:35:57.562451 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 16 02:35:57.580278 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 02:35:57.583558 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Apr 16 02:35:57.584286 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 02:35:57.584321 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:35:57.589050 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:35:57.600342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:35:57.605535 disk-uuid[644]: Primary Header is updated. Apr 16 02:35:57.605535 disk-uuid[644]: Secondary Entries is updated. Apr 16 02:35:57.605535 disk-uuid[644]: Secondary Header is updated. Apr 16 02:35:57.609312 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:35:57.611939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:35:57.626585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:35:57.884947 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 16 02:35:57.885009 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 16 02:35:57.885950 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 16 02:35:57.887952 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 16 02:35:57.888949 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 16 02:35:57.889953 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 16 02:35:57.891646 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 02:35:57.891658 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 16 02:35:57.891667 kernel: ata3.00: applying bridge limits Apr 16 02:35:57.893283 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 02:35:57.893294 kernel: ata3.00: configured for UDMA/100 Apr 16 02:35:57.896049 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 16 02:35:57.935168 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 16 02:35:57.935305 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 16 02:35:57.948954 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 
Apr 16 02:35:58.218186 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 02:35:58.219121 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 02:35:58.219436 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 02:35:58.224255 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 02:35:58.227899 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 02:35:58.246300 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 02:35:58.613959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 02:35:58.614205 disk-uuid[646]: The operation has completed successfully. Apr 16 02:35:58.635333 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 02:35:58.635435 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 02:35:58.659347 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 02:35:58.678794 sh[680]: Success Apr 16 02:35:58.694049 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 02:35:58.694078 kernel: device-mapper: uevent: version 1.0.3 Apr 16 02:35:58.695431 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 16 02:35:58.702948 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 16 02:35:58.723222 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 02:35:58.727115 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 02:35:58.740532 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 16 02:35:58.747830 kernel: BTRFS: device fsid 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (692) Apr 16 02:35:58.747854 kernel: BTRFS info (device dm-0): first mount of filesystem 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 Apr 16 02:35:58.747863 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:35:58.753658 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 16 02:35:58.753678 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 16 02:35:58.754584 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 02:35:58.757045 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 16 02:35:58.759937 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 02:35:58.763275 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 02:35:58.777151 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 16 02:35:58.791939 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (723) Apr 16 02:35:58.791961 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:35:58.794126 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:35:58.797381 kernel: BTRFS info (device vda6): turning on async discard Apr 16 02:35:58.797410 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 02:35:58.800935 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:35:58.801555 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 02:35:58.804832 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 16 02:35:58.864891 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 02:35:58.869637 ignition[776]: Ignition 2.22.0 Apr 16 02:35:58.869656 ignition[776]: Stage: fetch-offline Apr 16 02:35:58.869856 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 02:35:58.869679 ignition[776]: no configs at "/usr/lib/ignition/base.d" Apr 16 02:35:58.869684 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:35:58.869740 ignition[776]: parsed url from cmdline: "" Apr 16 02:35:58.869742 ignition[776]: no config URL provided Apr 16 02:35:58.869745 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 02:35:58.869750 ignition[776]: no config at "/usr/lib/ignition/user.ign" Apr 16 02:35:58.869763 ignition[776]: op(1): [started] loading QEMU firmware config module Apr 16 02:35:58.869766 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 16 02:35:58.878294 ignition[776]: op(1): [finished] loading QEMU firmware config module Apr 16 02:35:58.899747 systemd-networkd[867]: lo: Link UP Apr 16 02:35:58.899765 systemd-networkd[867]: lo: Gained carrier Apr 16 02:35:58.900599 systemd-networkd[867]: Enumeration completed Apr 16 02:35:58.900969 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 02:35:58.902018 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 02:35:58.902022 systemd-networkd[867]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 02:35:58.902989 systemd-networkd[867]: eth0: Link UP Apr 16 02:35:58.903068 systemd-networkd[867]: eth0: Gained carrier Apr 16 02:35:58.903076 systemd-networkd[867]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 16 02:35:58.903748 systemd[1]: Reached target network.target - Network. Apr 16 02:35:58.916949 systemd-networkd[867]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 02:35:58.982809 ignition[776]: parsing config with SHA512: 1f2a796b48b49d7a77c98d2496113e3ad5518c1e0211a971dabd853ed3402828b120a8101f2433e8b2a062f1aedcbcce0626adfe13c7254a2f4b698e0526b033 Apr 16 02:35:58.986370 unknown[776]: fetched base config from "system" Apr 16 02:35:58.986383 unknown[776]: fetched user config from "qemu" Apr 16 02:35:58.986673 ignition[776]: fetch-offline: fetch-offline passed Apr 16 02:35:58.989082 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 02:35:58.986719 ignition[776]: Ignition finished successfully Apr 16 02:35:58.991286 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 16 02:35:58.991937 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 16 02:35:59.020145 ignition[875]: Ignition 2.22.0 Apr 16 02:35:59.020162 ignition[875]: Stage: kargs Apr 16 02:35:59.020260 ignition[875]: no configs at "/usr/lib/ignition/base.d" Apr 16 02:35:59.020266 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:35:59.022268 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 16 02:35:59.020826 ignition[875]: kargs: kargs passed Apr 16 02:35:59.025213 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 16 02:35:59.020851 ignition[875]: Ignition finished successfully Apr 16 02:35:59.048557 ignition[883]: Ignition 2.22.0 Apr 16 02:35:59.048577 ignition[883]: Stage: disks Apr 16 02:35:59.048665 ignition[883]: no configs at "/usr/lib/ignition/base.d" Apr 16 02:35:59.048671 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:35:59.049168 ignition[883]: disks: disks passed Apr 16 02:35:59.051611 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 16 02:35:59.049192 ignition[883]: Ignition finished successfully Apr 16 02:35:59.054084 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 16 02:35:59.054457 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 02:35:59.057433 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 02:35:59.059805 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 02:35:59.062309 systemd[1]: Reached target basic.target - Basic System. Apr 16 02:35:59.065789 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 16 02:35:59.088272 systemd-fsck[893]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 16 02:35:59.092231 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 16 02:35:59.096093 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 16 02:35:59.189938 kernel: EXT4-fs (vda9): mounted filesystem 75cd5b5e-229f-474b-8de5-870bc4bccaf1 r/w with ordered data mode. Quota mode: none. Apr 16 02:35:59.189997 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 16 02:35:59.190862 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 16 02:35:59.194866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 02:35:59.196063 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Apr 16 02:35:59.197615 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 16 02:35:59.197643 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 16 02:35:59.197658 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 02:35:59.208994 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 16 02:35:59.210344 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 16 02:35:59.220128 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (901) Apr 16 02:35:59.223026 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:35:59.223048 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:35:59.226377 kernel: BTRFS info (device vda6): turning on async discard Apr 16 02:35:59.226401 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 02:35:59.227479 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 02:35:59.237187 initrd-setup-root[925]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 02:35:59.241229 initrd-setup-root[932]: cut: /sysroot/etc/group: No such file or directory Apr 16 02:35:59.245095 initrd-setup-root[939]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 02:35:59.249006 initrd-setup-root[946]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 02:35:59.309587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 02:35:59.311175 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 02:35:59.313395 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 16 02:35:59.326964 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:35:59.341249 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 16 02:35:59.354674 ignition[1014]: INFO : Ignition 2.22.0 Apr 16 02:35:59.354674 ignition[1014]: INFO : Stage: mount Apr 16 02:35:59.358467 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 02:35:59.358467 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:35:59.358467 ignition[1014]: INFO : mount: mount passed Apr 16 02:35:59.358467 ignition[1014]: INFO : Ignition finished successfully Apr 16 02:35:59.356308 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 16 02:35:59.359095 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 02:35:59.881220 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 16 02:35:59.882486 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 02:35:59.901108 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1027) Apr 16 02:35:59.901133 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 02:35:59.902286 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 02:35:59.905581 kernel: BTRFS info (device vda6): turning on async discard Apr 16 02:35:59.905626 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 02:35:59.906771 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 16 02:35:59.943954 ignition[1044]: INFO : Ignition 2.22.0 Apr 16 02:35:59.943954 ignition[1044]: INFO : Stage: files Apr 16 02:35:59.946049 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 02:35:59.946049 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:35:59.949206 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping Apr 16 02:35:59.950778 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 02:35:59.950778 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 02:35:59.955962 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 02:35:59.957787 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 02:35:59.959711 unknown[1044]: wrote ssh authorized keys file for user: core Apr 16 02:35:59.961071 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 02:35:59.962863 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 02:35:59.962863 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 02:36:00.015442 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 16 02:36:00.445226 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 02:36:00.445226 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:36:00.450104 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 16 02:36:00.581365 systemd-networkd[867]: eth0: Gained IPv6LL Apr 16 02:36:00.708160 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 16 02:36:00.909308 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 16 02:36:00.909308 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 16 02:36:00.914235 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 02:36:00.914235 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 02:36:00.914235 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 16 02:36:00.914235 ignition[1044]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 16 02:36:00.914235 ignition[1044]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 02:36:00.914235 ignition[1044]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 02:36:00.914235 ignition[1044]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 16 02:36:00.914235 ignition[1044]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 16 02:36:00.939181 ignition[1044]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 02:36:00.942242 ignition[1044]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 
02:36:00.944283 ignition[1044]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 16 02:36:00.944283 ignition[1044]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 16 02:36:00.944283 ignition[1044]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 16 02:36:00.944283 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 02:36:00.944283 ignition[1044]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 16 02:36:00.944283 ignition[1044]: INFO : files: files passed Apr 16 02:36:00.944283 ignition[1044]: INFO : Ignition finished successfully Apr 16 02:36:00.946588 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 02:36:00.948043 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 02:36:00.952208 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 02:36:00.966784 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 16 02:36:00.966870 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 16 02:36:00.971335 initrd-setup-root-after-ignition[1073]: grep: /sysroot/oem/oem-release: No such file or directory Apr 16 02:36:00.974618 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 02:36:00.974618 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 02:36:00.978624 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 02:36:00.982060 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Apr 16 02:36:00.982728 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 16 02:36:00.987961 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 16 02:36:01.013057 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 16 02:36:01.013180 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 16 02:36:01.014060 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 16 02:36:01.017783 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 16 02:36:01.020215 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 16 02:36:01.024053 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 16 02:36:01.051113 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 02:36:01.054807 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 16 02:36:01.072090 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 16 02:36:01.072709 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 02:36:01.075572 systemd[1]: Stopped target timers.target - Timer Units. Apr 16 02:36:01.079047 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 16 02:36:01.079136 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 02:36:01.082731 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 16 02:36:01.083460 systemd[1]: Stopped target basic.target - Basic System. Apr 16 02:36:01.087014 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 16 02:36:01.089363 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 02:36:01.091766 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Apr 16 02:36:01.094555 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 16 02:36:01.097212 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 16 02:36:01.099869 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 02:36:01.102325 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 16 02:36:01.105682 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 16 02:36:01.110140 systemd[1]: Stopped target swap.target - Swaps. Apr 16 02:36:01.110698 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 16 02:36:01.110791 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 16 02:36:01.114939 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 16 02:36:01.117631 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 02:36:01.120285 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 16 02:36:01.120560 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 02:36:01.123246 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 16 02:36:01.123333 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 16 02:36:01.127370 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 16 02:36:01.127476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 02:36:01.130023 systemd[1]: Stopped target paths.target - Path Units. Apr 16 02:36:01.132219 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 16 02:36:01.138004 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 02:36:01.138642 systemd[1]: Stopped target slices.target - Slice Units. Apr 16 02:36:01.142026 systemd[1]: Stopped target sockets.target - Socket Units. 
Apr 16 02:36:01.144328 systemd[1]: iscsid.socket: Deactivated successfully. Apr 16 02:36:01.144397 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 02:36:01.146307 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 16 02:36:01.146364 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 02:36:01.148551 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 16 02:36:01.148646 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 02:36:01.150888 systemd[1]: ignition-files.service: Deactivated successfully. Apr 16 02:36:01.150988 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 16 02:36:01.158460 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 16 02:36:01.163154 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 16 02:36:01.163693 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 16 02:36:01.163774 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 02:36:01.167012 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 16 02:36:01.167102 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 02:36:01.173737 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 16 02:36:01.173808 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 16 02:36:01.185682 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Apr 16 02:36:01.188022 ignition[1099]: INFO : Ignition 2.22.0 Apr 16 02:36:01.188022 ignition[1099]: INFO : Stage: umount Apr 16 02:36:01.190082 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 02:36:01.190082 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 02:36:01.190082 ignition[1099]: INFO : umount: umount passed Apr 16 02:36:01.190082 ignition[1099]: INFO : Ignition finished successfully Apr 16 02:36:01.193338 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 16 02:36:01.193471 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 16 02:36:01.195081 systemd[1]: Stopped target network.target - Network. Apr 16 02:36:01.197767 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 16 02:36:01.197814 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 16 02:36:01.200190 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 16 02:36:01.200221 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 16 02:36:01.202703 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 16 02:36:01.202738 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 16 02:36:01.203419 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 16 02:36:01.203627 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 16 02:36:01.207878 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 16 02:36:01.208561 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 16 02:36:01.219832 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 16 02:36:01.219950 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 16 02:36:01.226522 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 16 02:36:01.228300 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Apr 16 02:36:01.228377 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 16 02:36:01.233258 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 16 02:36:01.233665 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 16 02:36:01.234525 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 16 02:36:01.234558 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 16 02:36:01.239475 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 16 02:36:01.242580 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 16 02:36:01.242624 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 02:36:01.245292 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 02:36:01.245323 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:36:01.249448 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 16 02:36:01.249482 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 16 02:36:01.251891 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 16 02:36:01.251953 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 02:36:01.255063 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 02:36:01.257815 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 16 02:36:01.257856 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 16 02:36:01.258114 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 16 02:36:01.258183 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 16 02:36:01.266249 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Apr 16 02:36:01.266288 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 16 02:36:01.273791 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 16 02:36:01.282041 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 02:36:01.285109 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 16 02:36:01.285157 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 16 02:36:01.287276 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 16 02:36:01.287295 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 02:36:01.289598 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 16 02:36:01.289628 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 16 02:36:01.294081 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 16 02:36:01.294115 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 16 02:36:01.297630 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 02:36:01.297664 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 02:36:01.304341 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 16 02:36:01.304859 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 16 02:36:01.304890 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 02:36:01.311684 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 16 02:36:01.311728 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 02:36:01.316180 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 16 02:36:01.316223 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 16 02:36:01.320896 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 16 02:36:01.320949 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 02:36:01.322597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 02:36:01.322628 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:36:01.326025 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Apr 16 02:36:01.326061 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Apr 16 02:36:01.326083 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 16 02:36:01.326108 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 16 02:36:01.326304 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 16 02:36:01.326380 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 16 02:36:01.328115 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 16 02:36:01.328183 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 16 02:36:01.330348 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 16 02:36:01.333290 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 16 02:36:01.360071 systemd[1]: Switching root. Apr 16 02:36:01.389635 systemd-journald[200]: Journal stopped Apr 16 02:36:02.060882 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). 
Apr 16 02:36:02.060953 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 02:36:02.060964 kernel: SELinux: policy capability open_perms=1 Apr 16 02:36:02.060975 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 02:36:02.060982 kernel: SELinux: policy capability always_check_network=0 Apr 16 02:36:02.060992 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 02:36:02.061000 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 02:36:02.061010 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 02:36:02.061017 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 02:36:02.061024 kernel: SELinux: policy capability userspace_initial_context=0 Apr 16 02:36:02.061032 kernel: audit: type=1403 audit(1776306961.506:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 02:36:02.061040 systemd[1]: Successfully loaded SELinux policy in 43.408ms. Apr 16 02:36:02.061056 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.965ms. Apr 16 02:36:02.061065 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 02:36:02.061074 systemd[1]: Detected virtualization kvm. Apr 16 02:36:02.061081 systemd[1]: Detected architecture x86-64. Apr 16 02:36:02.061089 systemd[1]: Detected first boot. Apr 16 02:36:02.061097 systemd[1]: Initializing machine ID from VM UUID. Apr 16 02:36:02.061107 kernel: Guest personality initialized and is inactive Apr 16 02:36:02.061114 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 16 02:36:02.061121 kernel: Initialized host personality Apr 16 02:36:02.061130 zram_generator::config[1144]: No configuration found. 
Apr 16 02:36:02.061140 kernel: NET: Registered PF_VSOCK protocol family Apr 16 02:36:02.061148 systemd[1]: Populated /etc with preset unit settings. Apr 16 02:36:02.061157 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 16 02:36:02.061165 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 16 02:36:02.061174 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 16 02:36:02.061184 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 16 02:36:02.061192 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 02:36:02.061201 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 02:36:02.061210 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 02:36:02.061218 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 16 02:36:02.061227 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 02:36:02.061234 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 02:36:02.061243 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 02:36:02.061252 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 02:36:02.061259 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 02:36:02.061267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 02:36:02.061275 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 16 02:36:02.061283 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 02:36:02.061291 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Apr 16 02:36:02.061299 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 02:36:02.061309 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 16 02:36:02.061318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 02:36:02.061326 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 02:36:02.061334 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 16 02:36:02.061343 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 16 02:36:02.061351 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 16 02:36:02.061358 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 02:36:02.061367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 02:36:02.061374 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 02:36:02.061382 systemd[1]: Reached target slices.target - Slice Units. Apr 16 02:36:02.061389 systemd[1]: Reached target swap.target - Swaps. Apr 16 02:36:02.061397 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 02:36:02.061404 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 02:36:02.061413 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 16 02:36:02.061421 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 02:36:02.061428 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 02:36:02.061448 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 02:36:02.061457 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 02:36:02.061464 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Apr 16 02:36:02.061472 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 16 02:36:02.061480 systemd[1]: Mounting media.mount - External Media Directory... Apr 16 02:36:02.061487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 02:36:02.061496 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 02:36:02.061505 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 16 02:36:02.061512 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 16 02:36:02.061521 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 02:36:02.061529 systemd[1]: Reached target machines.target - Containers. Apr 16 02:36:02.061536 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 02:36:02.061544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 02:36:02.061552 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 02:36:02.061562 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 02:36:02.061570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 02:36:02.061577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 02:36:02.061585 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 02:36:02.061593 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 16 02:36:02.061601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 16 02:36:02.061609 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 02:36:02.061618 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 16 02:36:02.061627 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 16 02:36:02.061634 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 16 02:36:02.061642 systemd[1]: Stopped systemd-fsck-usr.service. Apr 16 02:36:02.061650 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 02:36:02.061657 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 02:36:02.061665 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 02:36:02.061672 kernel: loop: module loaded Apr 16 02:36:02.061679 kernel: ACPI: bus type drm_connector registered Apr 16 02:36:02.061686 kernel: fuse: init (API version 7.41) Apr 16 02:36:02.061695 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 02:36:02.061703 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 02:36:02.061711 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 16 02:36:02.061718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 02:36:02.061729 systemd[1]: verity-setup.service: Deactivated successfully. Apr 16 02:36:02.061737 systemd[1]: Stopped verity-setup.service. Apr 16 02:36:02.061745 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 16 02:36:02.061765 systemd-journald[1222]: Collecting audit messages is disabled. Apr 16 02:36:02.061784 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 02:36:02.061794 systemd-journald[1222]: Journal started Apr 16 02:36:02.061810 systemd-journald[1222]: Runtime Journal (/run/log/journal/cac61ac62076472f8ee56a6d4c9fbf97) is 6M, max 48.1M, 42.1M free. Apr 16 02:36:01.825649 systemd[1]: Queued start job for default target multi-user.target. Apr 16 02:36:01.835580 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 16 02:36:01.835977 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 16 02:36:02.064943 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 02:36:02.066110 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 02:36:02.067618 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 02:36:02.068964 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 16 02:36:02.070458 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 02:36:02.071955 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 16 02:36:02.073512 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 02:36:02.075298 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 02:36:02.077147 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 02:36:02.077285 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 02:36:02.079043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 02:36:02.079167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 02:36:02.080892 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 02:36:02.081097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Apr 16 02:36:02.082636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 02:36:02.082766 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 02:36:02.084514 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 02:36:02.084634 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 02:36:02.086191 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 02:36:02.086318 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 02:36:02.087981 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 02:36:02.089622 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 02:36:02.091474 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 02:36:02.093338 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 16 02:36:02.095153 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 02:36:02.103582 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 02:36:02.105832 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 02:36:02.107868 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 02:36:02.109326 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 02:36:02.109469 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 02:36:02.111476 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 16 02:36:02.116519 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 16 02:36:02.118016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 02:36:02.118755 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 02:36:02.120782 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 02:36:02.121344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 02:36:02.121945 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 16 02:36:02.123676 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 02:36:02.124289 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 02:36:02.127195 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 02:36:02.130012 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 02:36:02.135212 systemd-journald[1222]: Time spent on flushing to /var/log/journal/cac61ac62076472f8ee56a6d4c9fbf97 is 19.583ms for 1075 entries. Apr 16 02:36:02.135212 systemd-journald[1222]: System Journal (/var/log/journal/cac61ac62076472f8ee56a6d4c9fbf97) is 8M, max 195.6M, 187.6M free. Apr 16 02:36:02.163767 systemd-journald[1222]: Received client request to flush runtime journal. Apr 16 02:36:02.163801 kernel: loop0: detected capacity change from 0 to 219192 Apr 16 02:36:02.134374 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 02:36:02.136813 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 16 02:36:02.141106 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Apr 16 02:36:02.144487 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 02:36:02.147039 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 16 02:36:02.159069 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 02:36:02.165489 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 02:36:02.168451 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Apr 16 02:36:02.168463 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Apr 16 02:36:02.171515 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 02:36:02.177748 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 02:36:02.179962 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 16 02:36:02.182938 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 02:36:02.199024 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 02:36:02.199944 kernel: loop1: detected capacity change from 0 to 128560 Apr 16 02:36:02.202393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 02:36:02.216738 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Apr 16 02:36:02.216761 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Apr 16 02:36:02.218858 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 16 02:36:02.228133 kernel: loop2: detected capacity change from 0 to 110984 Apr 16 02:36:02.248929 kernel: loop3: detected capacity change from 0 to 219192 Apr 16 02:36:02.256118 kernel: loop4: detected capacity change from 0 to 128560 Apr 16 02:36:02.262960 kernel: loop5: detected capacity change from 0 to 110984 Apr 16 02:36:02.270782 (sd-merge)[1293]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 16 02:36:02.271263 (sd-merge)[1293]: Merged extensions into '/usr'. Apr 16 02:36:02.276022 systemd[1]: Reload requested from client PID 1264 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 02:36:02.276037 systemd[1]: Reloading... Apr 16 02:36:02.323949 zram_generator::config[1321]: No configuration found. Apr 16 02:36:02.363833 ldconfig[1259]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 02:36:02.442468 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 02:36:02.442751 systemd[1]: Reloading finished in 166 ms. Apr 16 02:36:02.464599 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 02:36:02.466399 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 02:36:02.468217 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 02:36:02.485092 systemd[1]: Starting ensure-sysext.service... Apr 16 02:36:02.486858 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 02:36:02.489204 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 02:36:02.498420 systemd[1]: Reload requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)... Apr 16 02:36:02.498452 systemd[1]: Reloading... Apr 16 02:36:02.499654 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Apr 16 02:36:02.499686 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 16 02:36:02.499829 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 02:36:02.499995 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 02:36:02.500433 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 02:36:02.500596 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Apr 16 02:36:02.500628 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Apr 16 02:36:02.503129 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 02:36:02.503140 systemd-tmpfiles[1359]: Skipping /boot Apr 16 02:36:02.507616 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 02:36:02.507636 systemd-tmpfiles[1359]: Skipping /boot Apr 16 02:36:02.509949 systemd-udevd[1360]: Using default interface naming scheme 'v255'. Apr 16 02:36:02.539006 zram_generator::config[1399]: No configuration found. Apr 16 02:36:02.618965 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 02:36:02.626958 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 16 02:36:02.636160 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 16 02:36:02.652094 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 02:36:02.652191 kernel: ACPI: button: Power Button [PWRF] Apr 16 02:36:02.652208 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 02:36:02.695072 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 16 02:36:02.695294 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 16 02:36:02.697236 systemd[1]: Reloading finished in 198 ms. Apr 16 02:36:02.744423 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 02:36:02.746611 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 02:36:02.800905 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 02:36:02.803325 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 02:36:02.805006 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 02:36:02.805648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 02:36:02.807792 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 02:36:02.813675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 02:36:02.816312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 02:36:02.818081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 02:36:02.819229 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 16 02:36:02.820775 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 02:36:02.823085 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 02:36:02.826016 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 02:36:02.829287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 16 02:36:02.833058 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 02:36:02.835017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 02:36:02.843735 systemd[1]: Finished ensure-sysext.service. Apr 16 02:36:02.845262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 02:36:02.845401 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 02:36:02.846375 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 02:36:02.846509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 02:36:02.846844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 02:36:02.846976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 02:36:02.847254 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 02:36:02.847359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 02:36:02.847719 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 16 02:36:02.853813 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 02:36:02.853967 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 02:36:02.854047 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 02:36:02.854357 augenrules[1517]: No rules Apr 16 02:36:02.855574 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 02:36:02.858156 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Apr 16 02:36:02.858693 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 02:36:02.859018 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 02:36:02.859295 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 02:36:02.859595 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 02:36:02.860092 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 02:36:02.862891 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 16 02:36:02.876411 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 02:36:02.877430 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 02:36:02.878180 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 02:36:02.883531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 02:36:02.898999 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 02:36:02.942406 systemd-networkd[1492]: lo: Link UP Apr 16 02:36:02.942633 systemd-networkd[1492]: lo: Gained carrier Apr 16 02:36:02.944275 systemd-networkd[1492]: Enumeration completed Apr 16 02:36:02.944438 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 02:36:02.944702 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 02:36:02.944705 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 16 02:36:02.945154 systemd-networkd[1492]: eth0: Link UP Apr 16 02:36:02.945226 systemd-networkd[1492]: eth0: Gained carrier Apr 16 02:36:02.945237 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 02:36:02.947032 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 16 02:36:02.948315 systemd-resolved[1494]: Positive Trust Anchors: Apr 16 02:36:02.948335 systemd-resolved[1494]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 02:36:02.948359 systemd-resolved[1494]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 02:36:02.950288 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 02:36:02.951587 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 16 02:36:02.951760 systemd-resolved[1494]: Defaulting to hostname 'linux'. Apr 16 02:36:02.954370 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 02:36:02.956054 systemd[1]: Reached target network.target - Network. Apr 16 02:36:02.957313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 02:36:02.958854 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 02:36:02.960335 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Apr 16 02:36:02.962011 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 02:36:02.963690 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 16 02:36:02.965142 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 02:36:02.966850 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 02:36:02.966872 systemd[1]: Reached target paths.target - Path Units. Apr 16 02:36:02.967012 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 02:36:02.968113 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 02:36:02.969633 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 02:36:02.970661 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Apr 16 02:36:02.971153 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 02:36:04.105789 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 16 02:36:04.105815 systemd-timesyncd[1522]: Initial clock synchronization to Thu 2026-04-16 02:36:04.105740 UTC. Apr 16 02:36:04.106078 systemd-resolved[1494]: Clock change detected. Flushing caches. Apr 16 02:36:04.107036 systemd[1]: Reached target timers.target - Timer Units. Apr 16 02:36:04.108841 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 02:36:04.111234 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 02:36:04.113866 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 16 02:36:04.115586 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Apr 16 02:36:04.117209 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 16 02:36:04.127461 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 02:36:04.129107 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 16 02:36:04.131386 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 16 02:36:04.133074 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 02:36:04.135548 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 02:36:04.136845 systemd[1]: Reached target basic.target - Basic System. Apr 16 02:36:04.138076 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 02:36:04.138105 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 02:36:04.138820 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 02:36:04.140799 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 02:36:04.142377 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 02:36:04.146875 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 02:36:04.149587 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 02:36:04.150897 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 02:36:04.153627 jq[1551]: false Apr 16 02:36:04.153921 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 16 02:36:04.155118 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 02:36:04.157646 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 16 02:36:04.160829 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 02:36:04.163315 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 02:36:04.166236 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing passwd entry cache Apr 16 02:36:04.166807 oslogin_cache_refresh[1553]: Refreshing passwd entry cache Apr 16 02:36:04.166901 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 02:36:04.168811 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 02:36:04.169114 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 02:36:04.171314 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 02:36:04.173916 extend-filesystems[1552]: Found /dev/vda6 Apr 16 02:36:04.173744 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 02:36:04.178905 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting users, quitting Apr 16 02:36:04.178905 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 02:36:04.178905 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing group entry cache Apr 16 02:36:04.178978 extend-filesystems[1552]: Found /dev/vda9 Apr 16 02:36:04.178574 oslogin_cache_refresh[1553]: Failure getting users, quitting Apr 16 02:36:04.178587 oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 02:36:04.178619 oslogin_cache_refresh[1553]: Refreshing group entry cache Apr 16 02:36:04.181295 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Apr 16 02:36:04.183486 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 02:36:04.183772 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 02:36:04.184779 jq[1569]: true Apr 16 02:36:04.183994 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 02:36:04.184209 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 02:36:04.185260 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting groups, quitting Apr 16 02:36:04.185260 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 02:36:04.185239 oslogin_cache_refresh[1553]: Failure getting groups, quitting Apr 16 02:36:04.185245 oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 02:36:04.186253 extend-filesystems[1552]: Checking size of /dev/vda9 Apr 16 02:36:04.185934 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 16 02:36:04.186060 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 16 02:36:04.189312 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 02:36:04.189471 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 16 02:36:04.200237 update_engine[1566]: I20260416 02:36:04.198707 1566 main.cc:92] Flatcar Update Engine starting Apr 16 02:36:04.200921 extend-filesystems[1552]: Resized partition /dev/vda9 Apr 16 02:36:04.203110 (ntainerd)[1578]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 02:36:04.203621 extend-filesystems[1592]: resize2fs 1.47.3 (8-Jul-2025) Apr 16 02:36:04.209160 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 16 02:36:04.212867 jq[1577]: true Apr 16 02:36:04.215906 tar[1576]: linux-amd64/LICENSE Apr 16 02:36:04.220691 tar[1576]: linux-amd64/helm Apr 16 02:36:04.227971 systemd-logind[1564]: Watching system buttons on /dev/input/event2 (Power Button) Apr 16 02:36:04.227987 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 02:36:04.228423 systemd-logind[1564]: New seat seat0. Apr 16 02:36:04.229376 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 02:36:04.233457 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 16 02:36:04.237369 dbus-daemon[1549]: [system] SELinux support is enabled Apr 16 02:36:04.237483 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 02:36:04.241611 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 02:36:04.249843 extend-filesystems[1592]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 02:36:04.249843 extend-filesystems[1592]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 16 02:36:04.249843 extend-filesystems[1592]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Apr 16 02:36:04.248602 dbus-daemon[1549]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 16 02:36:04.259364 update_engine[1566]: I20260416 02:36:04.246466 1566 update_check_scheduler.cc:74] Next update check in 5m15s Apr 16 02:36:04.241632 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 02:36:04.259419 extend-filesystems[1552]: Resized filesystem in /dev/vda9 Apr 16 02:36:04.244569 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 02:36:04.244583 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 02:36:04.252936 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 02:36:04.253080 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 02:36:04.265386 systemd[1]: Started update-engine.service - Update Engine. Apr 16 02:36:04.271259 bash[1611]: Updated "/home/core/.ssh/authorized_keys" Apr 16 02:36:04.274421 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 02:36:04.277697 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 02:36:04.288347 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Apr 16 02:36:04.315563 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 02:36:04.356007 containerd[1578]: time="2026-04-16T02:36:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 16 02:36:04.356650 containerd[1578]: time="2026-04-16T02:36:04.356605735Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 16 02:36:04.363102 containerd[1578]: time="2026-04-16T02:36:04.363063310Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.447µs" Apr 16 02:36:04.363102 containerd[1578]: time="2026-04-16T02:36:04.363091304Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 16 02:36:04.363180 containerd[1578]: time="2026-04-16T02:36:04.363108290Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 16 02:36:04.363555 containerd[1578]: time="2026-04-16T02:36:04.363507581Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 16 02:36:04.363555 containerd[1578]: time="2026-04-16T02:36:04.363545386Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 16 02:36:04.363629 containerd[1578]: time="2026-04-16T02:36:04.363565662Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 02:36:04.363629 containerd[1578]: time="2026-04-16T02:36:04.363605345Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 02:36:04.363629 containerd[1578]: time="2026-04-16T02:36:04.363613623Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.363808901Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.363850277Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.363859876Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.363865232Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.363946231Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.364373406Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.364446718Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.364455838Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.364473439Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.364929038Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 16 02:36:04.365076 containerd[1578]: time="2026-04-16T02:36:04.364970093Z" level=info msg="metadata content store policy set" policy=shared Apr 16 02:36:04.369619 containerd[1578]: time="2026-04-16T02:36:04.369581191Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 16 02:36:04.369659 containerd[1578]: time="2026-04-16T02:36:04.369621849Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 16 02:36:04.369659 containerd[1578]: time="2026-04-16T02:36:04.369632361Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 16 02:36:04.369659 containerd[1578]: time="2026-04-16T02:36:04.369646803Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 16 02:36:04.369659 containerd[1578]: time="2026-04-16T02:36:04.369655681Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 16 02:36:04.369742 containerd[1578]: time="2026-04-16T02:36:04.369677424Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 16 02:36:04.369742 containerd[1578]: time="2026-04-16T02:36:04.369688749Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 16 02:36:04.369742 containerd[1578]: time="2026-04-16T02:36:04.369697541Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 16 02:36:04.369742 containerd[1578]: time="2026-04-16T02:36:04.369705610Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 16 02:36:04.369742 containerd[1578]: time="2026-04-16T02:36:04.369712778Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 16 02:36:04.369742 containerd[1578]: time="2026-04-16T02:36:04.369718980Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 16 02:36:04.369742 containerd[1578]: time="2026-04-16T02:36:04.369727114Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 16 02:36:04.369829 containerd[1578]: time="2026-04-16T02:36:04.369802460Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 16 02:36:04.369829 containerd[1578]: time="2026-04-16T02:36:04.369814779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 16 02:36:04.369829 containerd[1578]: time="2026-04-16T02:36:04.369824900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 16 02:36:04.369865 containerd[1578]: time="2026-04-16T02:36:04.369833496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 16 02:36:04.369865 containerd[1578]: time="2026-04-16T02:36:04.369845721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 16 02:36:04.369865 containerd[1578]: time="2026-04-16T02:36:04.369853681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 16 02:36:04.369865 containerd[1578]: time="2026-04-16T02:36:04.369861757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 16 02:36:04.369919 containerd[1578]: time="2026-04-16T02:36:04.369868806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 16 
02:36:04.369919 containerd[1578]: time="2026-04-16T02:36:04.369877308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 16 02:36:04.369919 containerd[1578]: time="2026-04-16T02:36:04.369884414Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 16 02:36:04.369919 containerd[1578]: time="2026-04-16T02:36:04.369890824Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 16 02:36:04.369971 containerd[1578]: time="2026-04-16T02:36:04.369920218Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 02:36:04.369971 containerd[1578]: time="2026-04-16T02:36:04.369928821Z" level=info msg="Start snapshots syncer" Apr 16 02:36:04.369971 containerd[1578]: time="2026-04-16T02:36:04.369942360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 02:36:04.370224 containerd[1578]: time="2026-04-16T02:36:04.370121488Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 16 02:36:04.370326 containerd[1578]: time="2026-04-16T02:36:04.370293732Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 02:36:04.371320 containerd[1578]: time="2026-04-16T02:36:04.371277912Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 02:36:04.371398 containerd[1578]: time="2026-04-16T02:36:04.371379244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 02:36:04.371418 containerd[1578]: time="2026-04-16T02:36:04.371405806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 02:36:04.371418 containerd[1578]: time="2026-04-16T02:36:04.371414699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 02:36:04.371450 containerd[1578]: time="2026-04-16T02:36:04.371421862Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 02:36:04.371450 containerd[1578]: time="2026-04-16T02:36:04.371431539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 02:36:04.371450 containerd[1578]: time="2026-04-16T02:36:04.371440874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 16 02:36:04.371450 containerd[1578]: time="2026-04-16T02:36:04.371449300Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 02:36:04.371515 containerd[1578]: time="2026-04-16T02:36:04.371467512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 02:36:04.371515 containerd[1578]: time="2026-04-16T02:36:04.371475728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 02:36:04.371515 containerd[1578]: time="2026-04-16T02:36:04.371482890Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 02:36:04.371515 containerd[1578]: time="2026-04-16T02:36:04.371504511Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 02:36:04.371569 containerd[1578]: time="2026-04-16T02:36:04.371514824Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 02:36:04.371569 containerd[1578]: time="2026-04-16T02:36:04.371521263Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 02:36:04.371569 containerd[1578]: time="2026-04-16T02:36:04.371527908Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 02:36:04.371569 containerd[1578]: time="2026-04-16T02:36:04.371532970Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 02:36:04.371569 containerd[1578]: time="2026-04-16T02:36:04.371543458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 02:36:04.371569 containerd[1578]: time="2026-04-16T02:36:04.371554326Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 02:36:04.371569 containerd[1578]: time="2026-04-16T02:36:04.371565567Z" level=info msg="runtime interface created" Apr 16 02:36:04.371569 containerd[1578]: time="2026-04-16T02:36:04.371569247Z" level=info msg="created NRI interface" Apr 16 02:36:04.371686 containerd[1578]: time="2026-04-16T02:36:04.371575623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 02:36:04.371686 containerd[1578]: time="2026-04-16T02:36:04.371584099Z" level=info msg="Connect containerd service" Apr 16 02:36:04.371686 containerd[1578]: time="2026-04-16T02:36:04.371604269Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 02:36:04.372162 
containerd[1578]: time="2026-04-16T02:36:04.372067014Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 02:36:04.433030 containerd[1578]: time="2026-04-16T02:36:04.432981546Z" level=info msg="Start subscribing containerd event" Apr 16 02:36:04.433514 containerd[1578]: time="2026-04-16T02:36:04.433477595Z" level=info msg="Start recovering state" Apr 16 02:36:04.433601 containerd[1578]: time="2026-04-16T02:36:04.433568480Z" level=info msg="Start event monitor" Apr 16 02:36:04.433601 containerd[1578]: time="2026-04-16T02:36:04.433581734Z" level=info msg="Start cni network conf syncer for default" Apr 16 02:36:04.433601 containerd[1578]: time="2026-04-16T02:36:04.433587625Z" level=info msg="Start streaming server" Apr 16 02:36:04.433601 containerd[1578]: time="2026-04-16T02:36:04.433594566Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 02:36:04.433601 containerd[1578]: time="2026-04-16T02:36:04.433599964Z" level=info msg="runtime interface starting up..." Apr 16 02:36:04.433684 containerd[1578]: time="2026-04-16T02:36:04.433604080Z" level=info msg="starting plugins..." Apr 16 02:36:04.433684 containerd[1578]: time="2026-04-16T02:36:04.433617010Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 02:36:04.433684 containerd[1578]: time="2026-04-16T02:36:04.433116340Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 02:36:04.433792 containerd[1578]: time="2026-04-16T02:36:04.433755512Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 02:36:04.433910 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 16 02:36:04.435410 containerd[1578]: time="2026-04-16T02:36:04.435344871Z" level=info msg="containerd successfully booted in 0.079694s" Apr 16 02:36:04.517303 tar[1576]: linux-amd64/README.md Apr 16 02:36:04.530344 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 02:36:04.550805 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 02:36:04.567265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 02:36:04.569925 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 02:36:04.586961 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 02:36:04.587156 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 02:36:04.589869 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 02:36:04.603420 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 02:36:04.606076 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 02:36:04.608335 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 02:36:04.609977 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 02:36:06.003607 systemd-networkd[1492]: eth0: Gained IPv6LL Apr 16 02:36:06.005668 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 02:36:06.007698 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 02:36:06.010061 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 16 02:36:06.012386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:36:06.018964 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 02:36:06.031534 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 16 02:36:06.031710 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Apr 16 02:36:06.033434 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 02:36:06.034373 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 02:36:06.603202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:36:06.604972 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 02:36:06.607003 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:36:06.608214 systemd[1]: Startup finished in 2.656s (kernel) + 4.882s (initrd) + 4.010s (userspace) = 11.549s. Apr 16 02:36:06.943002 kubelet[1684]: E0416 02:36:06.942819 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:36:06.944447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:36:06.944567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:36:06.944840 systemd[1]: kubelet.service: Consumed 750ms CPU time, 257.3M memory peak. Apr 16 02:36:10.940906 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 02:36:10.941885 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:48252.service - OpenSSH per-connection server daemon (10.0.0.1:48252). Apr 16 02:36:10.995680 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 48252 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:36:10.997254 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:11.008889 systemd-logind[1564]: New session 1 of user core. 
Apr 16 02:36:11.010110 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 02:36:11.011376 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 02:36:11.030907 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 02:36:11.033040 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 02:36:11.048957 (systemd)[1702]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 02:36:11.051253 systemd-logind[1564]: New session c1 of user core. Apr 16 02:36:11.143649 systemd[1702]: Queued start job for default target default.target. Apr 16 02:36:11.152977 systemd[1702]: Created slice app.slice - User Application Slice. Apr 16 02:36:11.153016 systemd[1702]: Reached target paths.target - Paths. Apr 16 02:36:11.153062 systemd[1702]: Reached target timers.target - Timers. Apr 16 02:36:11.153977 systemd[1702]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 02:36:11.162189 systemd[1702]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 02:36:11.162281 systemd[1702]: Reached target sockets.target - Sockets. Apr 16 02:36:11.162329 systemd[1702]: Reached target basic.target - Basic System. Apr 16 02:36:11.162367 systemd[1702]: Reached target default.target - Main User Target. Apr 16 02:36:11.162403 systemd[1702]: Startup finished in 106ms. Apr 16 02:36:11.162410 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 02:36:11.183287 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 02:36:11.191602 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:48258.service - OpenSSH per-connection server daemon (10.0.0.1:48258). 
Apr 16 02:36:11.231160 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 48258 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:36:11.231964 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:11.235471 systemd-logind[1564]: New session 2 of user core. Apr 16 02:36:11.242263 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 02:36:11.251147 sshd[1716]: Connection closed by 10.0.0.1 port 48258 Apr 16 02:36:11.251391 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:11.257504 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:48258.service: Deactivated successfully. Apr 16 02:36:11.258515 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 02:36:11.259124 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. Apr 16 02:36:11.260576 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:48268.service - OpenSSH per-connection server daemon (10.0.0.1:48268). Apr 16 02:36:11.261359 systemd-logind[1564]: Removed session 2. Apr 16 02:36:11.300342 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 48268 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:36:11.301069 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:11.304794 systemd-logind[1564]: New session 3 of user core. Apr 16 02:36:11.314309 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 02:36:11.319342 sshd[1725]: Connection closed by 10.0.0.1 port 48268 Apr 16 02:36:11.319585 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:11.329564 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:48268.service: Deactivated successfully. Apr 16 02:36:11.330867 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 02:36:11.331567 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. 
Apr 16 02:36:11.333396 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:48272.service - OpenSSH per-connection server daemon (10.0.0.1:48272). Apr 16 02:36:11.334353 systemd-logind[1564]: Removed session 3. Apr 16 02:36:11.374673 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 48272 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:36:11.375609 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:11.379575 systemd-logind[1564]: New session 4 of user core. Apr 16 02:36:11.394280 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 02:36:11.403197 sshd[1734]: Connection closed by 10.0.0.1 port 48272 Apr 16 02:36:11.403391 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:11.413774 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:48272.service: Deactivated successfully. Apr 16 02:36:11.414861 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 02:36:11.415633 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. Apr 16 02:36:11.416981 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:48280.service - OpenSSH per-connection server daemon (10.0.0.1:48280). Apr 16 02:36:11.417616 systemd-logind[1564]: Removed session 4. Apr 16 02:36:11.456515 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 48280 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:36:11.457449 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:11.462186 systemd-logind[1564]: New session 5 of user core. Apr 16 02:36:11.471548 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 16 02:36:11.485652 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 02:36:11.485942 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:36:11.499987 sudo[1744]: pam_unix(sudo:session): session closed for user root Apr 16 02:36:11.501468 sshd[1743]: Connection closed by 10.0.0.1 port 48280 Apr 16 02:36:11.501824 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:11.512662 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:48280.service: Deactivated successfully. Apr 16 02:36:11.513769 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 02:36:11.514559 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Apr 16 02:36:11.515960 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:48292.service - OpenSSH per-connection server daemon (10.0.0.1:48292). Apr 16 02:36:11.516898 systemd-logind[1564]: Removed session 5. Apr 16 02:36:11.569934 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 48292 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:36:11.571162 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:11.575425 systemd-logind[1564]: New session 6 of user core. Apr 16 02:36:11.585411 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 16 02:36:11.593888 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 02:36:11.594067 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:36:11.597286 sudo[1755]: pam_unix(sudo:session): session closed for user root Apr 16 02:36:11.601176 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 02:36:11.601352 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:36:11.609806 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 02:36:11.640919 augenrules[1777]: No rules Apr 16 02:36:11.642057 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 02:36:11.642353 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 02:36:11.643187 sudo[1754]: pam_unix(sudo:session): session closed for user root Apr 16 02:36:11.644108 sshd[1753]: Connection closed by 10.0.0.1 port 48292 Apr 16 02:36:11.644415 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:11.656867 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:48292.service: Deactivated successfully. Apr 16 02:36:11.658009 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 02:36:11.658794 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. Apr 16 02:36:11.660201 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:48308.service - OpenSSH per-connection server daemon (10.0.0.1:48308). Apr 16 02:36:11.661182 systemd-logind[1564]: Removed session 6. Apr 16 02:36:11.707536 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 48308 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:36:11.708402 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:36:11.712005 systemd-logind[1564]: New session 7 of user core. 
Apr 16 02:36:11.724293 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 02:36:11.732655 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 02:36:11.732862 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 02:36:11.982709 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 02:36:11.999359 (dockerd)[1810]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 02:36:12.184492 dockerd[1810]: time="2026-04-16T02:36:12.184396761Z" level=info msg="Starting up" Apr 16 02:36:12.185554 dockerd[1810]: time="2026-04-16T02:36:12.185507263Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 02:36:12.197196 dockerd[1810]: time="2026-04-16T02:36:12.197100936Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 02:36:12.284515 dockerd[1810]: time="2026-04-16T02:36:12.284368619Z" level=info msg="Loading containers: start." Apr 16 02:36:12.295150 kernel: Initializing XFRM netlink socket Apr 16 02:36:12.496694 systemd-networkd[1492]: docker0: Link UP Apr 16 02:36:12.500689 dockerd[1810]: time="2026-04-16T02:36:12.500650213Z" level=info msg="Loading containers: done." 
Apr 16 02:36:12.511006 dockerd[1810]: time="2026-04-16T02:36:12.510956946Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 02:36:12.511118 dockerd[1810]: time="2026-04-16T02:36:12.511023703Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 02:36:12.511118 dockerd[1810]: time="2026-04-16T02:36:12.511081709Z" level=info msg="Initializing buildkit" Apr 16 02:36:12.529785 dockerd[1810]: time="2026-04-16T02:36:12.529736790Z" level=info msg="Completed buildkit initialization" Apr 16 02:36:12.534392 dockerd[1810]: time="2026-04-16T02:36:12.534368177Z" level=info msg="Daemon has completed initialization" Apr 16 02:36:12.534476 dockerd[1810]: time="2026-04-16T02:36:12.534421724Z" level=info msg="API listen on /run/docker.sock" Apr 16 02:36:12.534607 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 02:36:12.873215 containerd[1578]: time="2026-04-16T02:36:12.873076387Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 16 02:36:13.337478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179871495.mount: Deactivated successfully. 
Apr 16 02:36:13.879791 containerd[1578]: time="2026-04-16T02:36:13.879711755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:13.880148 containerd[1578]: time="2026-04-16T02:36:13.880094249Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 16 02:36:13.880915 containerd[1578]: time="2026-04-16T02:36:13.880876289Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:13.882794 containerd[1578]: time="2026-04-16T02:36:13.882717936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:13.883576 containerd[1578]: time="2026-04-16T02:36:13.883544716Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.010431396s" Apr 16 02:36:13.883632 containerd[1578]: time="2026-04-16T02:36:13.883577057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 16 02:36:13.884051 containerd[1578]: time="2026-04-16T02:36:13.884032787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 16 02:36:14.628083 containerd[1578]: time="2026-04-16T02:36:14.628035407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:14.628637 containerd[1578]: time="2026-04-16T02:36:14.628509622Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 16 02:36:14.629206 containerd[1578]: time="2026-04-16T02:36:14.629184621Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:14.631071 containerd[1578]: time="2026-04-16T02:36:14.631044212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:14.631817 containerd[1578]: time="2026-04-16T02:36:14.631796348Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 747.740345ms" Apr 16 02:36:14.631853 containerd[1578]: time="2026-04-16T02:36:14.631822475Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 16 02:36:14.632312 containerd[1578]: time="2026-04-16T02:36:14.632290092Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 16 02:36:15.184199 containerd[1578]: time="2026-04-16T02:36:15.184149445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:15.184538 containerd[1578]: time="2026-04-16T02:36:15.184515560Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 16 02:36:15.185396 containerd[1578]: time="2026-04-16T02:36:15.185348026Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:15.187099 containerd[1578]: time="2026-04-16T02:36:15.187067542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:15.187794 containerd[1578]: time="2026-04-16T02:36:15.187766442Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 555.449207ms" Apr 16 02:36:15.187830 containerd[1578]: time="2026-04-16T02:36:15.187792710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 16 02:36:15.188212 containerd[1578]: time="2026-04-16T02:36:15.188194273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 16 02:36:15.854919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542706989.mount: Deactivated successfully. 
Apr 16 02:36:16.031531 containerd[1578]: time="2026-04-16T02:36:16.031480899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:16.031993 containerd[1578]: time="2026-04-16T02:36:16.031879538Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 16 02:36:16.032575 containerd[1578]: time="2026-04-16T02:36:16.032532768Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:16.034008 containerd[1578]: time="2026-04-16T02:36:16.033965593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:16.034504 containerd[1578]: time="2026-04-16T02:36:16.034469655Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 846.249742ms" Apr 16 02:36:16.034504 containerd[1578]: time="2026-04-16T02:36:16.034499757Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 16 02:36:16.034957 containerd[1578]: time="2026-04-16T02:36:16.034934092Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 16 02:36:16.446411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225355015.mount: Deactivated successfully. 
Apr 16 02:36:16.973291 containerd[1578]: time="2026-04-16T02:36:16.973224944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:16.973688 containerd[1578]: time="2026-04-16T02:36:16.973662280Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 16 02:36:16.974529 containerd[1578]: time="2026-04-16T02:36:16.974493022Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:16.976408 containerd[1578]: time="2026-04-16T02:36:16.976384018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:16.977198 containerd[1578]: time="2026-04-16T02:36:16.977163886Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 942.199759ms" Apr 16 02:36:16.977231 containerd[1578]: time="2026-04-16T02:36:16.977197342Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 16 02:36:16.977635 containerd[1578]: time="2026-04-16T02:36:16.977588349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 16 02:36:17.195060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 02:36:17.196262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 02:36:17.329701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:36:17.332569 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 02:36:17.365244 kubelet[2165]: E0416 02:36:17.365190 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 02:36:17.367831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 02:36:17.367943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 02:36:17.368241 systemd[1]: kubelet.service: Consumed 135ms CPU time, 110M memory peak. Apr 16 02:36:17.403525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051869236.mount: Deactivated successfully. 
Apr 16 02:36:17.406959 containerd[1578]: time="2026-04-16T02:36:17.406928051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:17.407291 containerd[1578]: time="2026-04-16T02:36:17.407264408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 16 02:36:17.407967 containerd[1578]: time="2026-04-16T02:36:17.407932595Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:17.409281 containerd[1578]: time="2026-04-16T02:36:17.409248689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:17.409623 containerd[1578]: time="2026-04-16T02:36:17.409587737Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 431.960274ms" Apr 16 02:36:17.409623 containerd[1578]: time="2026-04-16T02:36:17.409617779Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 16 02:36:17.410081 containerd[1578]: time="2026-04-16T02:36:17.410058678Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 16 02:36:17.757075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274393789.mount: Deactivated successfully. 
Apr 16 02:36:18.245248 containerd[1578]: time="2026-04-16T02:36:18.245106539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:18.245527 containerd[1578]: time="2026-04-16T02:36:18.245471094Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 16 02:36:18.246665 containerd[1578]: time="2026-04-16T02:36:18.246619403Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:18.249010 containerd[1578]: time="2026-04-16T02:36:18.248954900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:18.250002 containerd[1578]: time="2026-04-16T02:36:18.249973000Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 839.885729ms" Apr 16 02:36:18.250036 containerd[1578]: time="2026-04-16T02:36:18.250011751Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 16 02:36:21.013689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:36:21.013828 systemd[1]: kubelet.service: Consumed 135ms CPU time, 110M memory peak. Apr 16 02:36:21.015329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:36:21.031364 systemd[1]: Reload requested from client PID 2269 ('systemctl') (unit session-7.scope)... 
Apr 16 02:36:21.031382 systemd[1]: Reloading... Apr 16 02:36:21.091199 zram_generator::config[2310]: No configuration found. Apr 16 02:36:21.228532 systemd[1]: Reloading finished in 196 ms. Apr 16 02:36:21.269168 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 16 02:36:21.269231 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 16 02:36:21.269409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:36:21.269443 systemd[1]: kubelet.service: Consumed 78ms CPU time, 98.3M memory peak. Apr 16 02:36:21.270578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:36:21.398079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:36:21.400984 (kubelet)[2361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 02:36:21.431615 kubelet[2361]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 02:36:21.431615 kubelet[2361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 02:36:21.431845 kubelet[2361]: I0416 02:36:21.431651 2361 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 02:36:21.832281 kubelet[2361]: I0416 02:36:21.832234 2361 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 02:36:21.832281 kubelet[2361]: I0416 02:36:21.832260 2361 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 02:36:21.834394 kubelet[2361]: I0416 02:36:21.834366 2361 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 02:36:21.834394 kubelet[2361]: I0416 02:36:21.834390 2361 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 02:36:21.834752 kubelet[2361]: I0416 02:36:21.834726 2361 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 02:36:21.858658 kubelet[2361]: E0416 02:36:21.858604 2361 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 02:36:21.858801 kubelet[2361]: I0416 02:36:21.858737 2361 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 02:36:21.861960 kubelet[2361]: I0416 02:36:21.861946 2361 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 02:36:21.865526 kubelet[2361]: I0416 02:36:21.865510 2361 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 02:36:21.866379 kubelet[2361]: I0416 02:36:21.866342 2361 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 02:36:21.866938 kubelet[2361]: I0416 02:36:21.866371 2361 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 02:36:21.866938 kubelet[2361]: I0416 02:36:21.866933 2361 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 02:36:21.867075 
kubelet[2361]: I0416 02:36:21.866942 2361 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 02:36:21.867075 kubelet[2361]: I0416 02:36:21.867006 2361 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 02:36:21.868465 kubelet[2361]: I0416 02:36:21.868407 2361 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:36:21.868570 kubelet[2361]: I0416 02:36:21.868557 2361 kubelet.go:475] "Attempting to sync node with API server" Apr 16 02:36:21.868589 kubelet[2361]: I0416 02:36:21.868572 2361 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 02:36:21.868605 kubelet[2361]: I0416 02:36:21.868589 2361 kubelet.go:387] "Adding apiserver pod source" Apr 16 02:36:21.868620 kubelet[2361]: I0416 02:36:21.868606 2361 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 02:36:21.869199 kubelet[2361]: E0416 02:36:21.869161 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 02:36:21.869329 kubelet[2361]: E0416 02:36:21.869167 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:36:21.870477 kubelet[2361]: I0416 02:36:21.870436 2361 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 02:36:21.870857 kubelet[2361]: I0416 02:36:21.870835 2361 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 02:36:21.870899 kubelet[2361]: I0416 02:36:21.870865 2361 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 02:36:21.870926 kubelet[2361]: W0416 02:36:21.870907 2361 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 16 02:36:21.874117 kubelet[2361]: I0416 02:36:21.874084 2361 server.go:1262] "Started kubelet" Apr 16 02:36:21.877295 kubelet[2361]: I0416 02:36:21.877254 2361 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 02:36:21.878163 kubelet[2361]: I0416 02:36:21.878094 2361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 02:36:21.879322 kubelet[2361]: I0416 02:36:21.879305 2361 server.go:310] "Adding debug handlers to kubelet server" Apr 16 02:36:21.879543 kubelet[2361]: I0416 02:36:21.879535 2361 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 02:36:21.879725 kubelet[2361]: E0416 02:36:21.879714 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:21.879806 kubelet[2361]: E0416 02:36:21.878910 2361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b5c9eaa6602d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 02:36:21.874065453 +0000 UTC m=+0.470534168,LastTimestamp:2026-04-16 02:36:21.874065453 +0000 UTC 
m=+0.470534168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 02:36:21.880182 kubelet[2361]: I0416 02:36:21.880173 2361 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 02:36:21.880273 kubelet[2361]: I0416 02:36:21.880268 2361 reconciler.go:29] "Reconciler: start to sync state" Apr 16 02:36:21.880605 kubelet[2361]: I0416 02:36:21.880528 2361 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 02:36:21.880605 kubelet[2361]: I0416 02:36:21.880593 2361 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 02:36:21.880704 kubelet[2361]: I0416 02:36:21.880669 2361 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 02:36:21.880768 kubelet[2361]: E0416 02:36:21.880717 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms" Apr 16 02:36:21.880819 kubelet[2361]: I0416 02:36:21.880785 2361 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 02:36:21.880911 kubelet[2361]: I0416 02:36:21.880894 2361 factory.go:223] Registration of the systemd container factory successfully Apr 16 02:36:21.880962 kubelet[2361]: I0416 02:36:21.880947 2361 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 02:36:21.881197 kubelet[2361]: E0416 02:36:21.881177 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 16 02:36:21.882260 kubelet[2361]: E0416 02:36:21.882238 2361 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 02:36:21.883005 kubelet[2361]: I0416 02:36:21.882974 2361 factory.go:223] Registration of the containerd container factory successfully Apr 16 02:36:21.891332 kubelet[2361]: I0416 02:36:21.891289 2361 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 02:36:21.891332 kubelet[2361]: I0416 02:36:21.891307 2361 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 02:36:21.891332 kubelet[2361]: I0416 02:36:21.891318 2361 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:36:21.894254 kubelet[2361]: I0416 02:36:21.894233 2361 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 02:36:21.894404 kubelet[2361]: I0416 02:36:21.894396 2361 policy_none.go:49] "None policy: Start" Apr 16 02:36:21.894673 kubelet[2361]: I0416 02:36:21.894525 2361 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 02:36:21.894673 kubelet[2361]: I0416 02:36:21.894536 2361 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 02:36:21.895191 kubelet[2361]: I0416 02:36:21.895173 2361 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 02:36:21.895191 kubelet[2361]: I0416 02:36:21.895193 2361 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 02:36:21.895275 kubelet[2361]: I0416 02:36:21.895207 2361 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 02:36:21.895275 kubelet[2361]: E0416 02:36:21.895232 2361 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 02:36:21.895666 kubelet[2361]: E0416 02:36:21.895630 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 02:36:21.896663 kubelet[2361]: I0416 02:36:21.896088 2361 policy_none.go:47] "Start" Apr 16 02:36:21.899425 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 02:36:21.910828 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 02:36:21.912825 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 16 02:36:21.918697 kubelet[2361]: E0416 02:36:21.918665 2361 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 02:36:21.918818 kubelet[2361]: I0416 02:36:21.918785 2361 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 02:36:21.918863 kubelet[2361]: I0416 02:36:21.918813 2361 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 02:36:21.918955 kubelet[2361]: I0416 02:36:21.918939 2361 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 02:36:21.919641 kubelet[2361]: E0416 02:36:21.919606 2361 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 02:36:21.919641 kubelet[2361]: E0416 02:36:21.919641 2361 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 02:36:22.004719 systemd[1]: Created slice kubepods-burstable-pod42ae3a1e9a61a348c94b728e90bcebdb.slice - libcontainer container kubepods-burstable-pod42ae3a1e9a61a348c94b728e90bcebdb.slice. 
Apr 16 02:36:22.020075 kubelet[2361]: I0416 02:36:22.020056 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:36:22.020372 kubelet[2361]: E0416 02:36:22.020338 2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Apr 16 02:36:22.022056 kubelet[2361]: E0416 02:36:22.022028 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:36:22.023973 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. Apr 16 02:36:22.036419 kubelet[2361]: E0416 02:36:22.036377 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:36:22.038117 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. 
Apr 16 02:36:22.039282 kubelet[2361]: E0416 02:36:22.039265 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:36:22.081625 kubelet[2361]: I0416 02:36:22.081451 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42ae3a1e9a61a348c94b728e90bcebdb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42ae3a1e9a61a348c94b728e90bcebdb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:22.081625 kubelet[2361]: I0416 02:36:22.081553 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:22.081625 kubelet[2361]: I0416 02:36:22.081598 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:22.081723 kubelet[2361]: E0416 02:36:22.081662 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms" Apr 16 02:36:22.081723 kubelet[2361]: I0416 02:36:22.081691 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:22.081931 kubelet[2361]: I0416 02:36:22.081858 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:22.081931 kubelet[2361]: I0416 02:36:22.081904 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42ae3a1e9a61a348c94b728e90bcebdb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42ae3a1e9a61a348c94b728e90bcebdb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:22.081931 kubelet[2361]: I0416 02:36:22.081921 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:22.081931 kubelet[2361]: I0416 02:36:22.081938 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 16 02:36:22.082084 kubelet[2361]: I0416 02:36:22.081955 2361 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/42ae3a1e9a61a348c94b728e90bcebdb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42ae3a1e9a61a348c94b728e90bcebdb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:22.222051 kubelet[2361]: I0416 02:36:22.221929 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:36:22.222297 kubelet[2361]: E0416 02:36:22.222253 2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Apr 16 02:36:22.338145 kubelet[2361]: E0416 02:36:22.338077 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:22.338749 containerd[1578]: time="2026-04-16T02:36:22.338717715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42ae3a1e9a61a348c94b728e90bcebdb,Namespace:kube-system,Attempt:0,}" Apr 16 02:36:22.340067 kubelet[2361]: E0416 02:36:22.340049 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:22.340439 containerd[1578]: time="2026-04-16T02:36:22.340395608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 16 02:36:22.341773 kubelet[2361]: E0416 02:36:22.341628 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:22.341897 containerd[1578]: time="2026-04-16T02:36:22.341874057Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 16 02:36:22.483012 kubelet[2361]: E0416 02:36:22.482861 2361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms" Apr 16 02:36:22.623576 kubelet[2361]: I0416 02:36:22.623523 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:36:22.623868 kubelet[2361]: E0416 02:36:22.623832 2361 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" Apr 16 02:36:22.700686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3674245127.mount: Deactivated successfully. Apr 16 02:36:22.704559 containerd[1578]: time="2026-04-16T02:36:22.704510341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:36:22.705663 containerd[1578]: time="2026-04-16T02:36:22.705630568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 16 02:36:22.707952 containerd[1578]: time="2026-04-16T02:36:22.707903624Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:36:22.708558 containerd[1578]: time="2026-04-16T02:36:22.708518942Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 
16 02:36:22.708896 containerd[1578]: time="2026-04-16T02:36:22.708878164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:36:22.709244 containerd[1578]: time="2026-04-16T02:36:22.709208658Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 02:36:22.709841 containerd[1578]: time="2026-04-16T02:36:22.709781637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 02:36:22.710750 containerd[1578]: time="2026-04-16T02:36:22.710714906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 02:36:22.711589 containerd[1578]: time="2026-04-16T02:36:22.711569311Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 371.50223ms" Apr 16 02:36:22.711984 containerd[1578]: time="2026-04-16T02:36:22.711966148Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 370.281701ms" Apr 16 02:36:22.713478 containerd[1578]: time="2026-04-16T02:36:22.713439091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 370.399221ms" Apr 16 02:36:22.732647 containerd[1578]: time="2026-04-16T02:36:22.732623176Z" level=info msg="connecting to shim 7d10fd864cacc793cb3c619752d857be0b4e9720905fd295acef5812e0691a22" address="unix:///run/containerd/s/3b19271b1e71d33de8cf3d41dc5ca6f9dbd241170c30296653d274b4a36f2442" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:36:22.734793 containerd[1578]: time="2026-04-16T02:36:22.734494360Z" level=info msg="connecting to shim 938cc3606638fa1b47db6182e5558027e4f6e760f9af4454531ab1cfee375b81" address="unix:///run/containerd/s/cb974e89c93ad2268ad1e044f86964ba35441a14ee562a6adc50b1b4951a1199" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:36:22.740173 containerd[1578]: time="2026-04-16T02:36:22.739424607Z" level=info msg="connecting to shim 669660f7f997ef8315a858e3f6cf7239f59b0e180f7164e95480e151c0b9f737" address="unix:///run/containerd/s/e0e3f3ecb4d5346611a05fce8521e741160fd57f8ba9c9be4efc2e4aed88df26" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:36:22.753276 systemd[1]: Started cri-containerd-7d10fd864cacc793cb3c619752d857be0b4e9720905fd295acef5812e0691a22.scope - libcontainer container 7d10fd864cacc793cb3c619752d857be0b4e9720905fd295acef5812e0691a22. 
Apr 16 02:36:22.755756 kubelet[2361]: E0416 02:36:22.755722 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 02:36:22.755828 systemd[1]: Started cri-containerd-938cc3606638fa1b47db6182e5558027e4f6e760f9af4454531ab1cfee375b81.scope - libcontainer container 938cc3606638fa1b47db6182e5558027e4f6e760f9af4454531ab1cfee375b81. Apr 16 02:36:22.759498 systemd[1]: Started cri-containerd-669660f7f997ef8315a858e3f6cf7239f59b0e180f7164e95480e151c0b9f737.scope - libcontainer container 669660f7f997ef8315a858e3f6cf7239f59b0e180f7164e95480e151c0b9f737. Apr 16 02:36:22.793974 containerd[1578]: time="2026-04-16T02:36:22.793948531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d10fd864cacc793cb3c619752d857be0b4e9720905fd295acef5812e0691a22\"" Apr 16 02:36:22.796197 kubelet[2361]: E0416 02:36:22.795592 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:22.798498 containerd[1578]: time="2026-04-16T02:36:22.798465382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:42ae3a1e9a61a348c94b728e90bcebdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"938cc3606638fa1b47db6182e5558027e4f6e760f9af4454531ab1cfee375b81\"" Apr 16 02:36:22.799085 kubelet[2361]: E0416 02:36:22.799072 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:22.800312 
containerd[1578]: time="2026-04-16T02:36:22.800261419Z" level=info msg="CreateContainer within sandbox \"7d10fd864cacc793cb3c619752d857be0b4e9720905fd295acef5812e0691a22\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 02:36:22.802343 containerd[1578]: time="2026-04-16T02:36:22.802304319Z" level=info msg="CreateContainer within sandbox \"938cc3606638fa1b47db6182e5558027e4f6e760f9af4454531ab1cfee375b81\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 02:36:22.802628 kubelet[2361]: E0416 02:36:22.802607 2361 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 02:36:22.803494 containerd[1578]: time="2026-04-16T02:36:22.803455660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"669660f7f997ef8315a858e3f6cf7239f59b0e180f7164e95480e151c0b9f737\"" Apr 16 02:36:22.803927 kubelet[2361]: E0416 02:36:22.803899 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:22.806169 containerd[1578]: time="2026-04-16T02:36:22.806147339Z" level=info msg="CreateContainer within sandbox \"669660f7f997ef8315a858e3f6cf7239f59b0e180f7164e95480e151c0b9f737\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 02:36:22.807907 containerd[1578]: time="2026-04-16T02:36:22.807872392Z" level=info msg="Container a5c9960af4e6dd405bcceaad0c6325605f82289b17fd46d03288fa048c8af11f: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:22.811837 containerd[1578]: 
time="2026-04-16T02:36:22.811367785Z" level=info msg="Container b2b1cd90be5b87a0c404caff8a32dec5837620905273d77b8a63155fe6b5a528: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:22.815042 containerd[1578]: time="2026-04-16T02:36:22.815004988Z" level=info msg="Container f0675dcfe16222e761f990f61e1276c976c71f861b07bd01c992852b7413ef4e: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:22.815278 containerd[1578]: time="2026-04-16T02:36:22.815260431Z" level=info msg="CreateContainer within sandbox \"7d10fd864cacc793cb3c619752d857be0b4e9720905fd295acef5812e0691a22\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a5c9960af4e6dd405bcceaad0c6325605f82289b17fd46d03288fa048c8af11f\"" Apr 16 02:36:22.815774 containerd[1578]: time="2026-04-16T02:36:22.815687530Z" level=info msg="StartContainer for \"a5c9960af4e6dd405bcceaad0c6325605f82289b17fd46d03288fa048c8af11f\"" Apr 16 02:36:22.816482 containerd[1578]: time="2026-04-16T02:36:22.816444510Z" level=info msg="connecting to shim a5c9960af4e6dd405bcceaad0c6325605f82289b17fd46d03288fa048c8af11f" address="unix:///run/containerd/s/3b19271b1e71d33de8cf3d41dc5ca6f9dbd241170c30296653d274b4a36f2442" protocol=ttrpc version=3 Apr 16 02:36:22.818341 containerd[1578]: time="2026-04-16T02:36:22.818317661Z" level=info msg="CreateContainer within sandbox \"938cc3606638fa1b47db6182e5558027e4f6e760f9af4454531ab1cfee375b81\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b2b1cd90be5b87a0c404caff8a32dec5837620905273d77b8a63155fe6b5a528\"" Apr 16 02:36:22.818658 containerd[1578]: time="2026-04-16T02:36:22.818636891Z" level=info msg="StartContainer for \"b2b1cd90be5b87a0c404caff8a32dec5837620905273d77b8a63155fe6b5a528\"" Apr 16 02:36:22.819315 containerd[1578]: time="2026-04-16T02:36:22.819297486Z" level=info msg="connecting to shim b2b1cd90be5b87a0c404caff8a32dec5837620905273d77b8a63155fe6b5a528" 
address="unix:///run/containerd/s/cb974e89c93ad2268ad1e044f86964ba35441a14ee562a6adc50b1b4951a1199" protocol=ttrpc version=3 Apr 16 02:36:22.820985 containerd[1578]: time="2026-04-16T02:36:22.820961918Z" level=info msg="CreateContainer within sandbox \"669660f7f997ef8315a858e3f6cf7239f59b0e180f7164e95480e151c0b9f737\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0675dcfe16222e761f990f61e1276c976c71f861b07bd01c992852b7413ef4e\"" Apr 16 02:36:22.822100 containerd[1578]: time="2026-04-16T02:36:22.821347913Z" level=info msg="StartContainer for \"f0675dcfe16222e761f990f61e1276c976c71f861b07bd01c992852b7413ef4e\"" Apr 16 02:36:22.822100 containerd[1578]: time="2026-04-16T02:36:22.822021728Z" level=info msg="connecting to shim f0675dcfe16222e761f990f61e1276c976c71f861b07bd01c992852b7413ef4e" address="unix:///run/containerd/s/e0e3f3ecb4d5346611a05fce8521e741160fd57f8ba9c9be4efc2e4aed88df26" protocol=ttrpc version=3 Apr 16 02:36:22.832275 systemd[1]: Started cri-containerd-a5c9960af4e6dd405bcceaad0c6325605f82289b17fd46d03288fa048c8af11f.scope - libcontainer container a5c9960af4e6dd405bcceaad0c6325605f82289b17fd46d03288fa048c8af11f. Apr 16 02:36:22.835323 systemd[1]: Started cri-containerd-b2b1cd90be5b87a0c404caff8a32dec5837620905273d77b8a63155fe6b5a528.scope - libcontainer container b2b1cd90be5b87a0c404caff8a32dec5837620905273d77b8a63155fe6b5a528. Apr 16 02:36:22.836004 systemd[1]: Started cri-containerd-f0675dcfe16222e761f990f61e1276c976c71f861b07bd01c992852b7413ef4e.scope - libcontainer container f0675dcfe16222e761f990f61e1276c976c71f861b07bd01c992852b7413ef4e. 
Apr 16 02:36:22.875024 containerd[1578]: time="2026-04-16T02:36:22.874993581Z" level=info msg="StartContainer for \"a5c9960af4e6dd405bcceaad0c6325605f82289b17fd46d03288fa048c8af11f\" returns successfully" Apr 16 02:36:22.891853 containerd[1578]: time="2026-04-16T02:36:22.891775842Z" level=info msg="StartContainer for \"f0675dcfe16222e761f990f61e1276c976c71f861b07bd01c992852b7413ef4e\" returns successfully" Apr 16 02:36:22.893692 containerd[1578]: time="2026-04-16T02:36:22.893649718Z" level=info msg="StartContainer for \"b2b1cd90be5b87a0c404caff8a32dec5837620905273d77b8a63155fe6b5a528\" returns successfully" Apr 16 02:36:22.901304 kubelet[2361]: E0416 02:36:22.901263 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:36:22.901395 kubelet[2361]: E0416 02:36:22.901365 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:22.904680 kubelet[2361]: E0416 02:36:22.904651 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:36:22.905889 kubelet[2361]: E0416 02:36:22.904740 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:22.905889 kubelet[2361]: E0416 02:36:22.905671 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:36:22.905889 kubelet[2361]: E0416 02:36:22.905733 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:23.425574 
kubelet[2361]: I0416 02:36:23.425526 2361 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 16 02:36:23.575930 kubelet[2361]: E0416 02:36:23.575879 2361 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 16 02:36:23.681249 kubelet[2361]: I0416 02:36:23.680389 2361 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 02:36:23.681249 kubelet[2361]: E0416 02:36:23.680425 2361 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 02:36:23.696172 kubelet[2361]: E0416 02:36:23.696101 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:23.797156 kubelet[2361]: E0416 02:36:23.797037 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:23.897759 kubelet[2361]: E0416 02:36:23.897698 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:23.907663 kubelet[2361]: E0416 02:36:23.907641 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:36:23.907749 kubelet[2361]: E0416 02:36:23.907736 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:23.907780 kubelet[2361]: E0416 02:36:23.907769 2361 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 02:36:23.907893 kubelet[2361]: E0416 02:36:23.907866 2361 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:23.998775 kubelet[2361]: E0416 02:36:23.998615 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:24.099618 kubelet[2361]: E0416 02:36:24.099555 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:24.200942 kubelet[2361]: E0416 02:36:24.200730 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:24.301920 kubelet[2361]: E0416 02:36:24.301765 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:24.402225 kubelet[2361]: E0416 02:36:24.402006 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:24.502214 kubelet[2361]: E0416 02:36:24.502164 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:24.602936 kubelet[2361]: E0416 02:36:24.602772 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:24.703686 kubelet[2361]: E0416 02:36:24.703625 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:24.804236 kubelet[2361]: E0416 02:36:24.804183 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:24.905395 kubelet[2361]: E0416 02:36:24.905273 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 02:36:25.006154 kubelet[2361]: E0416 02:36:25.006090 2361 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 
02:36:25.080933 kubelet[2361]: I0416 02:36:25.080877 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:25.088670 kubelet[2361]: I0416 02:36:25.088648 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:25.091692 kubelet[2361]: I0416 02:36:25.091673 2361 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 02:36:25.336798 systemd[1]: Reload requested from client PID 2651 ('systemctl') (unit session-7.scope)... Apr 16 02:36:25.336832 systemd[1]: Reloading... Apr 16 02:36:25.392171 zram_generator::config[2691]: No configuration found. Apr 16 02:36:25.542100 systemd[1]: Reloading finished in 205 ms. Apr 16 02:36:25.565115 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:36:25.590897 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 02:36:25.591114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:36:25.591180 systemd[1]: kubelet.service: Consumed 731ms CPU time, 125.1M memory peak. Apr 16 02:36:25.592386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 02:36:25.712440 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 02:36:25.719388 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 02:36:25.752767 kubelet[2739]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 02:36:25.752767 kubelet[2739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 16 02:36:25.752767 kubelet[2739]: I0416 02:36:25.752770 2739 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 02:36:25.757150 kubelet[2739]: I0416 02:36:25.757096 2739 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 16 02:36:25.757150 kubelet[2739]: I0416 02:36:25.757119 2739 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 02:36:25.757218 kubelet[2739]: I0416 02:36:25.757155 2739 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 02:36:25.757218 kubelet[2739]: I0416 02:36:25.757159 2739 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 02:36:25.757295 kubelet[2739]: I0416 02:36:25.757281 2739 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 02:36:25.758092 kubelet[2739]: I0416 02:36:25.758061 2739 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 02:36:25.760810 kubelet[2739]: I0416 02:36:25.760783 2739 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 02:36:25.766159 kubelet[2739]: I0416 02:36:25.765224 2739 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 02:36:25.767572 kubelet[2739]: I0416 02:36:25.767553 2739 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 02:36:25.767866 kubelet[2739]: I0416 02:36:25.767803 2739 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 02:36:25.768018 kubelet[2739]: I0416 02:36:25.767868 2739 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 02:36:25.768095 kubelet[2739]: I0416 02:36:25.768038 2739 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 02:36:25.768095 
kubelet[2739]: I0416 02:36:25.768044 2739 container_manager_linux.go:306] "Creating device plugin manager" Apr 16 02:36:25.768095 kubelet[2739]: I0416 02:36:25.768065 2739 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 02:36:25.768253 kubelet[2739]: I0416 02:36:25.768240 2739 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:36:25.768362 kubelet[2739]: I0416 02:36:25.768351 2739 kubelet.go:475] "Attempting to sync node with API server" Apr 16 02:36:25.768381 kubelet[2739]: I0416 02:36:25.768367 2739 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 02:36:25.768397 kubelet[2739]: I0416 02:36:25.768381 2739 kubelet.go:387] "Adding apiserver pod source" Apr 16 02:36:25.768397 kubelet[2739]: I0416 02:36:25.768391 2739 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 02:36:25.769559 kubelet[2739]: I0416 02:36:25.769543 2739 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 02:36:25.770093 kubelet[2739]: I0416 02:36:25.770080 2739 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 02:36:25.770256 kubelet[2739]: I0416 02:36:25.770177 2739 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 02:36:25.773273 kubelet[2739]: I0416 02:36:25.772974 2739 server.go:1262] "Started kubelet" Apr 16 02:36:25.773889 kubelet[2739]: I0416 02:36:25.773862 2739 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 02:36:25.773938 kubelet[2739]: I0416 02:36:25.773899 2739 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 02:36:25.774094 kubelet[2739]: I0416 02:36:25.774073 2739 server.go:249] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 02:36:25.774203 kubelet[2739]: I0416 02:36:25.774154 2739 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 02:36:25.774839 kubelet[2739]: I0416 02:36:25.774782 2739 server.go:310] "Adding debug handlers to kubelet server" Apr 16 02:36:25.777389 kubelet[2739]: E0416 02:36:25.777367 2739 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 02:36:25.780509 kubelet[2739]: I0416 02:36:25.780484 2739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 02:36:25.781282 kubelet[2739]: I0416 02:36:25.781200 2739 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 16 02:36:25.781318 kubelet[2739]: I0416 02:36:25.781315 2739 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 02:36:25.781413 kubelet[2739]: I0416 02:36:25.781387 2739 reconciler.go:29] "Reconciler: start to sync state" Apr 16 02:36:25.781964 kubelet[2739]: I0416 02:36:25.781798 2739 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 02:36:25.782701 kubelet[2739]: I0416 02:36:25.782659 2739 factory.go:223] Registration of the systemd container factory successfully Apr 16 02:36:25.782753 kubelet[2739]: I0416 02:36:25.782726 2739 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 02:36:25.784506 kubelet[2739]: I0416 02:36:25.784478 2739 factory.go:223] Registration of the containerd container factory successfully Apr 16 02:36:25.787645 kubelet[2739]: I0416 02:36:25.787049 2739 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 02:36:25.793913 kubelet[2739]: I0416 02:36:25.792838 2739 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 02:36:25.793913 kubelet[2739]: I0416 02:36:25.793911 2739 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 16 02:36:25.793986 kubelet[2739]: I0416 02:36:25.793930 2739 kubelet.go:2428] "Starting kubelet main sync loop" Apr 16 02:36:25.793986 kubelet[2739]: E0416 02:36:25.793961 2739 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 02:36:25.808890 kubelet[2739]: I0416 02:36:25.808847 2739 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 02:36:25.808890 kubelet[2739]: I0416 02:36:25.808866 2739 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 02:36:25.808890 kubelet[2739]: I0416 02:36:25.808879 2739 state_mem.go:36] "Initialized new in-memory state store" Apr 16 02:36:25.809009 kubelet[2739]: I0416 02:36:25.808953 2739 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 02:36:25.809009 kubelet[2739]: I0416 02:36:25.808959 2739 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 02:36:25.809009 kubelet[2739]: I0416 02:36:25.808970 2739 policy_none.go:49] "None policy: Start" Apr 16 02:36:25.809009 kubelet[2739]: I0416 02:36:25.808976 2739 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 02:36:25.809009 kubelet[2739]: I0416 02:36:25.808982 2739 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 02:36:25.809083 kubelet[2739]: I0416 02:36:25.809037 2739 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 02:36:25.809083 kubelet[2739]: I0416 02:36:25.809041 2739 policy_none.go:47] "Start" Apr 16 02:36:25.814299 kubelet[2739]: E0416 02:36:25.814269 2739 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 02:36:25.814404 kubelet[2739]: I0416 02:36:25.814393 2739 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 16 02:36:25.814578 kubelet[2739]: I0416 02:36:25.814407 2739 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 02:36:25.814578 kubelet[2739]: I0416 02:36:25.814539 2739 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 16 02:36:25.816664 kubelet[2739]: E0416 02:36:25.816634 2739 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 02:36:25.895261 kubelet[2739]: I0416 02:36:25.895165 2739 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:25.895261 kubelet[2739]: I0416 02:36:25.895221 2739 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:25.895384 kubelet[2739]: I0416 02:36:25.895307 2739 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 02:36:25.901817 kubelet[2739]: E0416 02:36:25.901709 2739 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:25.901817 kubelet[2739]: E0416 02:36:25.901792 2739 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 16 02:36:25.901974 kubelet[2739]: E0416 02:36:25.901870 2739 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:25.920944 kubelet[2739]: I0416 02:36:25.920908 2739 kubelet_node_status.go:75] "Attempting to register 
node" node="localhost" Apr 16 02:36:25.926368 kubelet[2739]: I0416 02:36:25.926349 2739 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 16 02:36:25.926453 kubelet[2739]: I0416 02:36:25.926403 2739 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 16 02:36:25.983392 kubelet[2739]: I0416 02:36:25.983327 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 16 02:36:25.983392 kubelet[2739]: I0416 02:36:25.983353 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42ae3a1e9a61a348c94b728e90bcebdb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"42ae3a1e9a61a348c94b728e90bcebdb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:25.983392 kubelet[2739]: I0416 02:36:25.983368 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42ae3a1e9a61a348c94b728e90bcebdb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"42ae3a1e9a61a348c94b728e90bcebdb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:25.983392 kubelet[2739]: I0416 02:36:25.983380 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:25.983392 kubelet[2739]: I0416 02:36:25.983391 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:25.983591 kubelet[2739]: I0416 02:36:25.983403 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:25.983591 kubelet[2739]: I0416 02:36:25.983414 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42ae3a1e9a61a348c94b728e90bcebdb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"42ae3a1e9a61a348c94b728e90bcebdb\") " pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:25.983591 kubelet[2739]: I0416 02:36:25.983426 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:25.983591 kubelet[2739]: I0416 02:36:25.983437 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 02:36:26.202248 kubelet[2739]: E0416 02:36:26.202088 2739 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:26.202248 kubelet[2739]: E0416 02:36:26.202088 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:26.202248 kubelet[2739]: E0416 02:36:26.202107 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:26.769042 kubelet[2739]: I0416 02:36:26.768961 2739 apiserver.go:52] "Watching apiserver" Apr 16 02:36:26.782472 kubelet[2739]: I0416 02:36:26.782432 2739 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 02:36:26.802953 kubelet[2739]: E0416 02:36:26.802909 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:26.803356 kubelet[2739]: I0416 02:36:26.803339 2739 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 02:36:26.803629 kubelet[2739]: I0416 02:36:26.803592 2739 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:26.809908 kubelet[2739]: E0416 02:36:26.809808 2739 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 16 02:36:26.810066 kubelet[2739]: E0416 02:36:26.809972 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:26.811283 kubelet[2739]: E0416 02:36:26.811241 2739 kubelet.go:3222] "Failed creating a mirror 
pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 16 02:36:26.811338 kubelet[2739]: E0416 02:36:26.811324 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:26.819623 kubelet[2739]: I0416 02:36:26.819584 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.819117213 podStartE2EDuration="1.819117213s" podCreationTimestamp="2026-04-16 02:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:36:26.819016448 +0000 UTC m=+1.096910355" watchObservedRunningTime="2026-04-16 02:36:26.819117213 +0000 UTC m=+1.097011112" Apr 16 02:36:26.829653 kubelet[2739]: I0416 02:36:26.829592 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.829354677 podStartE2EDuration="1.829354677s" podCreationTimestamp="2026-04-16 02:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:36:26.823822679 +0000 UTC m=+1.101716586" watchObservedRunningTime="2026-04-16 02:36:26.829354677 +0000 UTC m=+1.107248587" Apr 16 02:36:26.835257 kubelet[2739]: I0416 02:36:26.835200 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.835187707 podStartE2EDuration="1.835187707s" podCreationTimestamp="2026-04-16 02:36:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:36:26.829672947 +0000 UTC m=+1.107566846" watchObservedRunningTime="2026-04-16 02:36:26.835187707 
+0000 UTC m=+1.113081623" Apr 16 02:36:27.804941 kubelet[2739]: E0416 02:36:27.804898 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:27.804941 kubelet[2739]: E0416 02:36:27.804936 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:28.805659 kubelet[2739]: E0416 02:36:28.805609 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:31.348947 kubelet[2739]: I0416 02:36:31.348903 2739 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 02:36:31.349300 containerd[1578]: time="2026-04-16T02:36:31.349196901Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 02:36:31.349431 kubelet[2739]: I0416 02:36:31.349365 2739 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 02:36:32.414279 systemd[1]: Created slice kubepods-besteffort-pod7d1e3987_544e_4c68_9695_ab2c0c88e43b.slice - libcontainer container kubepods-besteffort-pod7d1e3987_544e_4c68_9695_ab2c0c88e43b.slice. 
Apr 16 02:36:32.423786 kubelet[2739]: I0416 02:36:32.423725 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4rqv\" (UniqueName: \"kubernetes.io/projected/7d1e3987-544e-4c68-9695-ab2c0c88e43b-kube-api-access-t4rqv\") pod \"kube-proxy-9j2hm\" (UID: \"7d1e3987-544e-4c68-9695-ab2c0c88e43b\") " pod="kube-system/kube-proxy-9j2hm" Apr 16 02:36:32.423786 kubelet[2739]: I0416 02:36:32.423758 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d1e3987-544e-4c68-9695-ab2c0c88e43b-xtables-lock\") pod \"kube-proxy-9j2hm\" (UID: \"7d1e3987-544e-4c68-9695-ab2c0c88e43b\") " pod="kube-system/kube-proxy-9j2hm" Apr 16 02:36:32.423786 kubelet[2739]: I0416 02:36:32.423771 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d1e3987-544e-4c68-9695-ab2c0c88e43b-lib-modules\") pod \"kube-proxy-9j2hm\" (UID: \"7d1e3987-544e-4c68-9695-ab2c0c88e43b\") " pod="kube-system/kube-proxy-9j2hm" Apr 16 02:36:32.423786 kubelet[2739]: I0416 02:36:32.423783 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d1e3987-544e-4c68-9695-ab2c0c88e43b-kube-proxy\") pod \"kube-proxy-9j2hm\" (UID: \"7d1e3987-544e-4c68-9695-ab2c0c88e43b\") " pod="kube-system/kube-proxy-9j2hm" Apr 16 02:36:32.522182 systemd[1]: Created slice kubepods-besteffort-pod8edb6aba_3b7d_48e6_ba13_331af771732d.slice - libcontainer container kubepods-besteffort-pod8edb6aba_3b7d_48e6_ba13_331af771732d.slice. 
Apr 16 02:36:32.524274 kubelet[2739]: I0416 02:36:32.524251 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxm4\" (UniqueName: \"kubernetes.io/projected/8edb6aba-3b7d-48e6-ba13-331af771732d-kube-api-access-6bxm4\") pod \"tigera-operator-5588576f44-4jd2v\" (UID: \"8edb6aba-3b7d-48e6-ba13-331af771732d\") " pod="tigera-operator/tigera-operator-5588576f44-4jd2v" Apr 16 02:36:32.524350 kubelet[2739]: I0416 02:36:32.524284 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8edb6aba-3b7d-48e6-ba13-331af771732d-var-lib-calico\") pod \"tigera-operator-5588576f44-4jd2v\" (UID: \"8edb6aba-3b7d-48e6-ba13-331af771732d\") " pod="tigera-operator/tigera-operator-5588576f44-4jd2v" Apr 16 02:36:32.535474 kubelet[2739]: E0416 02:36:32.535440 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:32.722828 kubelet[2739]: E0416 02:36:32.722709 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:32.723634 containerd[1578]: time="2026-04-16T02:36:32.723250626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9j2hm,Uid:7d1e3987-544e-4c68-9695-ab2c0c88e43b,Namespace:kube-system,Attempt:0,}" Apr 16 02:36:32.737862 containerd[1578]: time="2026-04-16T02:36:32.737827151Z" level=info msg="connecting to shim 5b3d6da824876544ac5732849cc6d31be8c9955740849bc12d3ea2601e5c2dd3" address="unix:///run/containerd/s/d18ed2ffba40388bb01e951f5317d4871c58d41174a887cc25e68d5d3c112875" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:36:32.757336 systemd[1]: Started 
cri-containerd-5b3d6da824876544ac5732849cc6d31be8c9955740849bc12d3ea2601e5c2dd3.scope - libcontainer container 5b3d6da824876544ac5732849cc6d31be8c9955740849bc12d3ea2601e5c2dd3. Apr 16 02:36:32.775964 containerd[1578]: time="2026-04-16T02:36:32.775922931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9j2hm,Uid:7d1e3987-544e-4c68-9695-ab2c0c88e43b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b3d6da824876544ac5732849cc6d31be8c9955740849bc12d3ea2601e5c2dd3\"" Apr 16 02:36:32.776676 kubelet[2739]: E0416 02:36:32.776657 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:32.780670 containerd[1578]: time="2026-04-16T02:36:32.780425744Z" level=info msg="CreateContainer within sandbox \"5b3d6da824876544ac5732849cc6d31be8c9955740849bc12d3ea2601e5c2dd3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 02:36:32.787500 containerd[1578]: time="2026-04-16T02:36:32.787476169Z" level=info msg="Container 62cacf0cc013523a4c462372864029ae99fe6b6f5993fedfc5143a0f51767f0f: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:32.792927 containerd[1578]: time="2026-04-16T02:36:32.792892702Z" level=info msg="CreateContainer within sandbox \"5b3d6da824876544ac5732849cc6d31be8c9955740849bc12d3ea2601e5c2dd3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62cacf0cc013523a4c462372864029ae99fe6b6f5993fedfc5143a0f51767f0f\"" Apr 16 02:36:32.793332 containerd[1578]: time="2026-04-16T02:36:32.793311011Z" level=info msg="StartContainer for \"62cacf0cc013523a4c462372864029ae99fe6b6f5993fedfc5143a0f51767f0f\"" Apr 16 02:36:32.794261 containerd[1578]: time="2026-04-16T02:36:32.794226236Z" level=info msg="connecting to shim 62cacf0cc013523a4c462372864029ae99fe6b6f5993fedfc5143a0f51767f0f" 
address="unix:///run/containerd/s/d18ed2ffba40388bb01e951f5317d4871c58d41174a887cc25e68d5d3c112875" protocol=ttrpc version=3 Apr 16 02:36:32.813035 kubelet[2739]: E0416 02:36:32.813012 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:32.813259 systemd[1]: Started cri-containerd-62cacf0cc013523a4c462372864029ae99fe6b6f5993fedfc5143a0f51767f0f.scope - libcontainer container 62cacf0cc013523a4c462372864029ae99fe6b6f5993fedfc5143a0f51767f0f. Apr 16 02:36:32.827191 containerd[1578]: time="2026-04-16T02:36:32.827159357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-4jd2v,Uid:8edb6aba-3b7d-48e6-ba13-331af771732d,Namespace:tigera-operator,Attempt:0,}" Apr 16 02:36:32.839403 containerd[1578]: time="2026-04-16T02:36:32.839351273Z" level=info msg="connecting to shim 0baaa8d21ea3900c49bf1eea5fa2e59c9e7d5f1c866f0967c24313eec9870fc4" address="unix:///run/containerd/s/fd1623897e3a26a8e2b1d609591a152f821315d64f7b22cb698d52316365dc7f" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:36:32.859738 containerd[1578]: time="2026-04-16T02:36:32.859073183Z" level=info msg="StartContainer for \"62cacf0cc013523a4c462372864029ae99fe6b6f5993fedfc5143a0f51767f0f\" returns successfully" Apr 16 02:36:32.863435 systemd[1]: Started cri-containerd-0baaa8d21ea3900c49bf1eea5fa2e59c9e7d5f1c866f0967c24313eec9870fc4.scope - libcontainer container 0baaa8d21ea3900c49bf1eea5fa2e59c9e7d5f1c866f0967c24313eec9870fc4. 
Apr 16 02:36:32.898974 containerd[1578]: time="2026-04-16T02:36:32.898897824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-4jd2v,Uid:8edb6aba-3b7d-48e6-ba13-331af771732d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0baaa8d21ea3900c49bf1eea5fa2e59c9e7d5f1c866f0967c24313eec9870fc4\"" Apr 16 02:36:32.902591 containerd[1578]: time="2026-04-16T02:36:32.902555660Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 16 02:36:33.815535 kubelet[2739]: E0416 02:36:33.815505 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:33.822750 kubelet[2739]: I0416 02:36:33.822699 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9j2hm" podStartSLOduration=1.8226810869999999 podStartE2EDuration="1.822681087s" podCreationTimestamp="2026-04-16 02:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:36:33.822502917 +0000 UTC m=+8.100396826" watchObservedRunningTime="2026-04-16 02:36:33.822681087 +0000 UTC m=+8.100574997" Apr 16 02:36:34.059031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061193204.mount: Deactivated successfully. 
Apr 16 02:36:34.509188 containerd[1578]: time="2026-04-16T02:36:34.509098808Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:34.509767 containerd[1578]: time="2026-04-16T02:36:34.509727109Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 16 02:36:34.510349 containerd[1578]: time="2026-04-16T02:36:34.510298688Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:34.511751 containerd[1578]: time="2026-04-16T02:36:34.511719832Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:34.512073 containerd[1578]: time="2026-04-16T02:36:34.512039355Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.609461541s" Apr 16 02:36:34.512073 containerd[1578]: time="2026-04-16T02:36:34.512069213Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 16 02:36:34.516797 containerd[1578]: time="2026-04-16T02:36:34.516754196Z" level=info msg="CreateContainer within sandbox \"0baaa8d21ea3900c49bf1eea5fa2e59c9e7d5f1c866f0967c24313eec9870fc4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 16 02:36:34.522246 containerd[1578]: time="2026-04-16T02:36:34.522219967Z" level=info msg="Container 
925dc03c16a53c2ad3403a1f0f809bf099453e0deb61c7339d218e45765e7f91: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:34.528364 containerd[1578]: time="2026-04-16T02:36:34.528338443Z" level=info msg="CreateContainer within sandbox \"0baaa8d21ea3900c49bf1eea5fa2e59c9e7d5f1c866f0967c24313eec9870fc4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"925dc03c16a53c2ad3403a1f0f809bf099453e0deb61c7339d218e45765e7f91\"" Apr 16 02:36:34.528779 containerd[1578]: time="2026-04-16T02:36:34.528754743Z" level=info msg="StartContainer for \"925dc03c16a53c2ad3403a1f0f809bf099453e0deb61c7339d218e45765e7f91\"" Apr 16 02:36:34.529316 containerd[1578]: time="2026-04-16T02:36:34.529298602Z" level=info msg="connecting to shim 925dc03c16a53c2ad3403a1f0f809bf099453e0deb61c7339d218e45765e7f91" address="unix:///run/containerd/s/fd1623897e3a26a8e2b1d609591a152f821315d64f7b22cb698d52316365dc7f" protocol=ttrpc version=3 Apr 16 02:36:34.544331 systemd[1]: Started cri-containerd-925dc03c16a53c2ad3403a1f0f809bf099453e0deb61c7339d218e45765e7f91.scope - libcontainer container 925dc03c16a53c2ad3403a1f0f809bf099453e0deb61c7339d218e45765e7f91. 
Apr 16 02:36:34.567553 containerd[1578]: time="2026-04-16T02:36:34.567502230Z" level=info msg="StartContainer for \"925dc03c16a53c2ad3403a1f0f809bf099453e0deb61c7339d218e45765e7f91\" returns successfully" Apr 16 02:36:34.820031 kubelet[2739]: E0416 02:36:34.819977 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:34.842178 kubelet[2739]: E0416 02:36:34.842107 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:34.853561 kubelet[2739]: I0416 02:36:34.853444 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-4jd2v" podStartSLOduration=1.241064624 podStartE2EDuration="2.853420763s" podCreationTimestamp="2026-04-16 02:36:32 +0000 UTC" firstStartedPulling="2026-04-16 02:36:32.900363254 +0000 UTC m=+7.178257161" lastFinishedPulling="2026-04-16 02:36:34.512719401 +0000 UTC m=+8.790613300" observedRunningTime="2026-04-16 02:36:34.828965282 +0000 UTC m=+9.106859192" watchObservedRunningTime="2026-04-16 02:36:34.853420763 +0000 UTC m=+9.131314673" Apr 16 02:36:35.823586 kubelet[2739]: E0416 02:36:35.823530 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:36.178663 kubelet[2739]: E0416 02:36:36.178558 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:36.826886 kubelet[2739]: E0416 02:36:36.826831 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 16 02:36:37.827995 kubelet[2739]: E0416 02:36:37.827944 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:39.284889 sudo[1790]: pam_unix(sudo:session): session closed for user root Apr 16 02:36:39.287003 sshd[1789]: Connection closed by 10.0.0.1 port 48308 Apr 16 02:36:39.288428 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Apr 16 02:36:39.291770 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:48308.service: Deactivated successfully. Apr 16 02:36:39.293358 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 02:36:39.293583 systemd[1]: session-7.scope: Consumed 4.533s CPU time, 226.2M memory peak. Apr 16 02:36:39.294508 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. Apr 16 02:36:39.296974 systemd-logind[1564]: Removed session 7. Apr 16 02:36:40.634218 systemd[1]: Created slice kubepods-besteffort-podda247a41_eccb_4af1_a534_4a00f07feabe.slice - libcontainer container kubepods-besteffort-podda247a41_eccb_4af1_a534_4a00f07feabe.slice. 
Apr 16 02:36:40.674221 kubelet[2739]: I0416 02:36:40.674062 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfr7r\" (UniqueName: \"kubernetes.io/projected/da247a41-eccb-4af1-a534-4a00f07feabe-kube-api-access-hfr7r\") pod \"calico-typha-6bfcff4f56-6chg8\" (UID: \"da247a41-eccb-4af1-a534-4a00f07feabe\") " pod="calico-system/calico-typha-6bfcff4f56-6chg8" Apr 16 02:36:40.674909 kubelet[2739]: I0416 02:36:40.674838 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/da247a41-eccb-4af1-a534-4a00f07feabe-typha-certs\") pod \"calico-typha-6bfcff4f56-6chg8\" (UID: \"da247a41-eccb-4af1-a534-4a00f07feabe\") " pod="calico-system/calico-typha-6bfcff4f56-6chg8" Apr 16 02:36:40.674963 kubelet[2739]: I0416 02:36:40.674869 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da247a41-eccb-4af1-a534-4a00f07feabe-tigera-ca-bundle\") pod \"calico-typha-6bfcff4f56-6chg8\" (UID: \"da247a41-eccb-4af1-a534-4a00f07feabe\") " pod="calico-system/calico-typha-6bfcff4f56-6chg8" Apr 16 02:36:40.676085 systemd[1]: Created slice kubepods-besteffort-pod54d5f42b_4ab4_47c8_a1f0_380e8806f584.slice - libcontainer container kubepods-besteffort-pod54d5f42b_4ab4_47c8_a1f0_380e8806f584.slice. 
Apr 16 02:36:40.775842 kubelet[2739]: I0416 02:36:40.775789 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-policysync\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.775842 kubelet[2739]: I0416 02:36:40.775829 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54d5f42b-4ab4-47c8-a1f0-380e8806f584-tigera-ca-bundle\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.775842 kubelet[2739]: I0416 02:36:40.775855 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-cni-log-dir\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776027 kubelet[2739]: I0416 02:36:40.775866 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-cni-net-dir\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776027 kubelet[2739]: I0416 02:36:40.775879 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vs4\" (UniqueName: \"kubernetes.io/projected/54d5f42b-4ab4-47c8-a1f0-380e8806f584-kube-api-access-z9vs4\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776027 kubelet[2739]: I0416 02:36:40.775931 2739 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-sys-fs\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776027 kubelet[2739]: I0416 02:36:40.775943 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-bpffs\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776027 kubelet[2739]: I0416 02:36:40.775955 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-flexvol-driver-host\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776117 kubelet[2739]: I0416 02:36:40.775977 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-cni-bin-dir\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776117 kubelet[2739]: I0416 02:36:40.775989 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54d5f42b-4ab4-47c8-a1f0-380e8806f584-node-certs\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776117 kubelet[2739]: I0416 02:36:40.776003 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nodeproc\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-nodeproc\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776117 kubelet[2739]: I0416 02:36:40.776015 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-var-run-calico\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776117 kubelet[2739]: I0416 02:36:40.776035 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-lib-modules\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776229 kubelet[2739]: I0416 02:36:40.776047 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-xtables-lock\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.776229 kubelet[2739]: I0416 02:36:40.776061 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54d5f42b-4ab4-47c8-a1f0-380e8806f584-var-lib-calico\") pod \"calico-node-d8dc2\" (UID: \"54d5f42b-4ab4-47c8-a1f0-380e8806f584\") " pod="calico-system/calico-node-d8dc2" Apr 16 02:36:40.782154 kubelet[2739]: E0416 02:36:40.781326 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m5mwv" podUID="3833f640-bfff-4575-abe3-06fdc906d199" Apr 16 02:36:40.876724 kubelet[2739]: I0416 02:36:40.876679 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3833f640-bfff-4575-abe3-06fdc906d199-registration-dir\") pod \"csi-node-driver-m5mwv\" (UID: \"3833f640-bfff-4575-abe3-06fdc906d199\") " pod="calico-system/csi-node-driver-m5mwv" Apr 16 02:36:40.877667 kubelet[2739]: I0416 02:36:40.877183 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3833f640-bfff-4575-abe3-06fdc906d199-varrun\") pod \"csi-node-driver-m5mwv\" (UID: \"3833f640-bfff-4575-abe3-06fdc906d199\") " pod="calico-system/csi-node-driver-m5mwv" Apr 16 02:36:40.877667 kubelet[2739]: I0416 02:36:40.877209 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3833f640-bfff-4575-abe3-06fdc906d199-kubelet-dir\") pod \"csi-node-driver-m5mwv\" (UID: \"3833f640-bfff-4575-abe3-06fdc906d199\") " pod="calico-system/csi-node-driver-m5mwv" Apr 16 02:36:40.877667 kubelet[2739]: I0416 02:36:40.877224 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbbtr\" (UniqueName: \"kubernetes.io/projected/3833f640-bfff-4575-abe3-06fdc906d199-kube-api-access-bbbtr\") pod \"csi-node-driver-m5mwv\" (UID: \"3833f640-bfff-4575-abe3-06fdc906d199\") " pod="calico-system/csi-node-driver-m5mwv" Apr 16 02:36:40.877667 kubelet[2739]: I0416 02:36:40.877344 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3833f640-bfff-4575-abe3-06fdc906d199-socket-dir\") pod 
\"csi-node-driver-m5mwv\" (UID: \"3833f640-bfff-4575-abe3-06fdc906d199\") " pod="calico-system/csi-node-driver-m5mwv" Apr 16 02:36:40.880825 kubelet[2739]: E0416 02:36:40.880150 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.880825 kubelet[2739]: W0416 02:36:40.880165 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.880825 kubelet[2739]: E0416 02:36:40.880179 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.884090 kubelet[2739]: E0416 02:36:40.884073 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.884165 kubelet[2739]: W0416 02:36:40.884110 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.884165 kubelet[2739]: E0416 02:36:40.884147 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.941391 kubelet[2739]: E0416 02:36:40.941313 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:40.942121 containerd[1578]: time="2026-04-16T02:36:40.942086378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bfcff4f56-6chg8,Uid:da247a41-eccb-4af1-a534-4a00f07feabe,Namespace:calico-system,Attempt:0,}" Apr 16 02:36:40.970030 containerd[1578]: time="2026-04-16T02:36:40.969535173Z" level=info msg="connecting to shim c7c329ff3f6589e9d4b74be75da3e8d198f08a706471de5a76c1380ba4959a5b" address="unix:///run/containerd/s/2648cd9f96d4570008e4fd5df83f5330437b062130d7252b8d926d8d791efe2a" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:36:40.978080 kubelet[2739]: E0416 02:36:40.978058 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.978080 kubelet[2739]: W0416 02:36:40.978076 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.978080 kubelet[2739]: E0416 02:36:40.978092 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.978414 kubelet[2739]: E0416 02:36:40.978400 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.978414 kubelet[2739]: W0416 02:36:40.978414 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.978453 kubelet[2739]: E0416 02:36:40.978422 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.978589 kubelet[2739]: E0416 02:36:40.978551 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.978589 kubelet[2739]: W0416 02:36:40.978561 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.978589 kubelet[2739]: E0416 02:36:40.978568 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.978707 kubelet[2739]: E0416 02:36:40.978692 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.978770 kubelet[2739]: W0416 02:36:40.978707 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.978770 kubelet[2739]: E0416 02:36:40.978713 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.978928 kubelet[2739]: E0416 02:36:40.978850 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.978928 kubelet[2739]: W0416 02:36:40.978865 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.978928 kubelet[2739]: E0416 02:36:40.978872 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.979090 kubelet[2739]: E0416 02:36:40.979048 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.979090 kubelet[2739]: W0416 02:36:40.979061 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.979090 kubelet[2739]: E0416 02:36:40.979067 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.979321 kubelet[2739]: E0416 02:36:40.979228 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.979321 kubelet[2739]: W0416 02:36:40.979233 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.979321 kubelet[2739]: E0416 02:36:40.979239 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.979409 kubelet[2739]: E0416 02:36:40.979370 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.979409 kubelet[2739]: W0416 02:36:40.979374 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.979409 kubelet[2739]: E0416 02:36:40.979379 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.979515 kubelet[2739]: E0416 02:36:40.979496 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.979515 kubelet[2739]: W0416 02:36:40.979501 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.979515 kubelet[2739]: E0416 02:36:40.979506 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.979687 kubelet[2739]: E0416 02:36:40.979614 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.979687 kubelet[2739]: W0416 02:36:40.979619 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.979687 kubelet[2739]: E0416 02:36:40.979625 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.980081 kubelet[2739]: E0416 02:36:40.980012 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.980081 kubelet[2739]: W0416 02:36:40.980022 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.980081 kubelet[2739]: E0416 02:36:40.980033 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.980208 kubelet[2739]: E0416 02:36:40.980190 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.980208 kubelet[2739]: W0416 02:36:40.980200 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.980208 kubelet[2739]: E0416 02:36:40.980205 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.980532 kubelet[2739]: E0416 02:36:40.980519 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.980532 kubelet[2739]: W0416 02:36:40.980531 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.980613 kubelet[2739]: E0416 02:36:40.980537 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.980728 kubelet[2739]: E0416 02:36:40.980717 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.980728 kubelet[2739]: W0416 02:36:40.980727 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.980781 kubelet[2739]: E0416 02:36:40.980733 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.980907 kubelet[2739]: E0416 02:36:40.980871 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.980907 kubelet[2739]: W0416 02:36:40.980883 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.980907 kubelet[2739]: E0416 02:36:40.980903 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.981064 kubelet[2739]: E0416 02:36:40.981048 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.981119 kubelet[2739]: W0416 02:36:40.981064 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.981119 kubelet[2739]: E0416 02:36:40.981077 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.981296 kubelet[2739]: E0416 02:36:40.981287 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.981296 kubelet[2739]: W0416 02:36:40.981294 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.981335 kubelet[2739]: E0416 02:36:40.981300 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.981453 kubelet[2739]: E0416 02:36:40.981442 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.981453 kubelet[2739]: W0416 02:36:40.981453 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.981489 kubelet[2739]: E0416 02:36:40.981461 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.981621 kubelet[2739]: E0416 02:36:40.981585 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.981621 kubelet[2739]: W0416 02:36:40.981594 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.981621 kubelet[2739]: E0416 02:36:40.981599 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.981752 kubelet[2739]: E0416 02:36:40.981741 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.981752 kubelet[2739]: W0416 02:36:40.981751 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.981819 kubelet[2739]: E0416 02:36:40.981757 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.981909 kubelet[2739]: E0416 02:36:40.981884 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.981909 kubelet[2739]: W0416 02:36:40.981907 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.981970 kubelet[2739]: E0416 02:36:40.981913 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.982082 kubelet[2739]: E0416 02:36:40.982052 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.982082 kubelet[2739]: W0416 02:36:40.982062 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.982082 kubelet[2739]: E0416 02:36:40.982067 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.982275 kubelet[2739]: E0416 02:36:40.982263 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.982275 kubelet[2739]: W0416 02:36:40.982274 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.982333 kubelet[2739]: E0416 02:36:40.982279 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.982403 kubelet[2739]: E0416 02:36:40.982393 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.982403 kubelet[2739]: W0416 02:36:40.982402 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.982466 kubelet[2739]: E0416 02:36:40.982409 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:40.982541 kubelet[2739]: E0416 02:36:40.982531 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.982541 kubelet[2739]: W0416 02:36:40.982540 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.982610 kubelet[2739]: E0416 02:36:40.982545 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 02:36:40.983497 containerd[1578]: time="2026-04-16T02:36:40.983379933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d8dc2,Uid:54d5f42b-4ab4-47c8-a1f0-380e8806f584,Namespace:calico-system,Attempt:0,}" Apr 16 02:36:40.989674 kubelet[2739]: E0416 02:36:40.989652 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 02:36:40.989674 kubelet[2739]: W0416 02:36:40.989669 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 02:36:40.989849 kubelet[2739]: E0416 02:36:40.989683 2739 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 02:36:41.001070 containerd[1578]: time="2026-04-16T02:36:41.000957833Z" level=info msg="connecting to shim 33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8" address="unix:///run/containerd/s/edde5925ef096ba6e88b473ea4a6c473ae56592346f5fe213b9b85c6fedf24bf" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:36:41.005459 systemd[1]: Started cri-containerd-c7c329ff3f6589e9d4b74be75da3e8d198f08a706471de5a76c1380ba4959a5b.scope - libcontainer container c7c329ff3f6589e9d4b74be75da3e8d198f08a706471de5a76c1380ba4959a5b. Apr 16 02:36:41.024284 systemd[1]: Started cri-containerd-33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8.scope - libcontainer container 33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8. 
Apr 16 02:36:41.046918 containerd[1578]: time="2026-04-16T02:36:41.046866662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d8dc2,Uid:54d5f42b-4ab4-47c8-a1f0-380e8806f584,Namespace:calico-system,Attempt:0,} returns sandbox id \"33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8\"" Apr 16 02:36:41.048881 containerd[1578]: time="2026-04-16T02:36:41.048812325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 02:36:41.049628 containerd[1578]: time="2026-04-16T02:36:41.049605332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bfcff4f56-6chg8,Uid:da247a41-eccb-4af1-a534-4a00f07feabe,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7c329ff3f6589e9d4b74be75da3e8d198f08a706471de5a76c1380ba4959a5b\"" Apr 16 02:36:41.050198 kubelet[2739]: E0416 02:36:41.050176 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:42.323495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369183261.mount: Deactivated successfully. 
Apr 16 02:36:42.378280 containerd[1578]: time="2026-04-16T02:36:42.378243524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:42.378836 containerd[1578]: time="2026-04-16T02:36:42.378815647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Apr 16 02:36:42.379571 containerd[1578]: time="2026-04-16T02:36:42.379483784Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:42.381083 containerd[1578]: time="2026-04-16T02:36:42.381056269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:42.381493 containerd[1578]: time="2026-04-16T02:36:42.381470770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.332595025s" Apr 16 02:36:42.381532 containerd[1578]: time="2026-04-16T02:36:42.381497583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 16 02:36:42.382239 containerd[1578]: time="2026-04-16T02:36:42.382198649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 02:36:42.384509 containerd[1578]: time="2026-04-16T02:36:42.384484992Z" level=info msg="CreateContainer within 
sandbox \"33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 02:36:42.390062 containerd[1578]: time="2026-04-16T02:36:42.390038672Z" level=info msg="Container 11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:42.395563 containerd[1578]: time="2026-04-16T02:36:42.395537742Z" level=info msg="CreateContainer within sandbox \"33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0\"" Apr 16 02:36:42.395922 containerd[1578]: time="2026-04-16T02:36:42.395854518Z" level=info msg="StartContainer for \"11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0\"" Apr 16 02:36:42.396831 containerd[1578]: time="2026-04-16T02:36:42.396808663Z" level=info msg="connecting to shim 11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0" address="unix:///run/containerd/s/edde5925ef096ba6e88b473ea4a6c473ae56592346f5fe213b9b85c6fedf24bf" protocol=ttrpc version=3 Apr 16 02:36:42.412279 systemd[1]: Started cri-containerd-11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0.scope - libcontainer container 11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0. Apr 16 02:36:42.460882 containerd[1578]: time="2026-04-16T02:36:42.460816350Z" level=info msg="StartContainer for \"11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0\" returns successfully" Apr 16 02:36:42.465755 systemd[1]: cri-containerd-11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0.scope: Deactivated successfully. 
Apr 16 02:36:42.467909 containerd[1578]: time="2026-04-16T02:36:42.467863115Z" level=info msg="received container exit event container_id:\"11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0\" id:\"11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0\" pid:3305 exited_at:{seconds:1776307002 nanos:467486403}" Apr 16 02:36:42.789707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11afc44ce69b425f945479a23b401d10e04bf4801bd2324816202e525eb454d0-rootfs.mount: Deactivated successfully. Apr 16 02:36:42.794683 kubelet[2739]: E0416 02:36:42.794653 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m5mwv" podUID="3833f640-bfff-4575-abe3-06fdc906d199" Apr 16 02:36:44.794982 kubelet[2739]: E0416 02:36:44.794946 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m5mwv" podUID="3833f640-bfff-4575-abe3-06fdc906d199" Apr 16 02:36:45.029278 containerd[1578]: time="2026-04-16T02:36:45.029219850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:45.029836 containerd[1578]: time="2026-04-16T02:36:45.029813599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Apr 16 02:36:45.030742 containerd[1578]: time="2026-04-16T02:36:45.030697260Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:45.032269 containerd[1578]: 
time="2026-04-16T02:36:45.032234174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:45.032717 containerd[1578]: time="2026-04-16T02:36:45.032680463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.650450796s" Apr 16 02:36:45.032753 containerd[1578]: time="2026-04-16T02:36:45.032716473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 16 02:36:45.033590 containerd[1578]: time="2026-04-16T02:36:45.033567817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 02:36:45.040486 containerd[1578]: time="2026-04-16T02:36:45.040461337Z" level=info msg="CreateContainer within sandbox \"c7c329ff3f6589e9d4b74be75da3e8d198f08a706471de5a76c1380ba4959a5b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 02:36:45.046241 containerd[1578]: time="2026-04-16T02:36:45.046162872Z" level=info msg="Container dc6e34f2c0b3abf8d056c4721f3e544035537d732dcfba6980749612759cec1f: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:45.052598 containerd[1578]: time="2026-04-16T02:36:45.052560174Z" level=info msg="CreateContainer within sandbox \"c7c329ff3f6589e9d4b74be75da3e8d198f08a706471de5a76c1380ba4959a5b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dc6e34f2c0b3abf8d056c4721f3e544035537d732dcfba6980749612759cec1f\"" Apr 16 02:36:45.052936 containerd[1578]: time="2026-04-16T02:36:45.052914259Z" level=info 
msg="StartContainer for \"dc6e34f2c0b3abf8d056c4721f3e544035537d732dcfba6980749612759cec1f\"" Apr 16 02:36:45.053717 containerd[1578]: time="2026-04-16T02:36:45.053693944Z" level=info msg="connecting to shim dc6e34f2c0b3abf8d056c4721f3e544035537d732dcfba6980749612759cec1f" address="unix:///run/containerd/s/2648cd9f96d4570008e4fd5df83f5330437b062130d7252b8d926d8d791efe2a" protocol=ttrpc version=3 Apr 16 02:36:45.068304 systemd[1]: Started cri-containerd-dc6e34f2c0b3abf8d056c4721f3e544035537d732dcfba6980749612759cec1f.scope - libcontainer container dc6e34f2c0b3abf8d056c4721f3e544035537d732dcfba6980749612759cec1f. Apr 16 02:36:45.110694 containerd[1578]: time="2026-04-16T02:36:45.110668663Z" level=info msg="StartContainer for \"dc6e34f2c0b3abf8d056c4721f3e544035537d732dcfba6980749612759cec1f\" returns successfully" Apr 16 02:36:45.847586 kubelet[2739]: E0416 02:36:45.847558 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:45.859661 kubelet[2739]: I0416 02:36:45.859493 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bfcff4f56-6chg8" podStartSLOduration=1.8766072569999999 podStartE2EDuration="5.859478271s" podCreationTimestamp="2026-04-16 02:36:40 +0000 UTC" firstStartedPulling="2026-04-16 02:36:41.050561517 +0000 UTC m=+15.328455416" lastFinishedPulling="2026-04-16 02:36:45.03343253 +0000 UTC m=+19.311326430" observedRunningTime="2026-04-16 02:36:45.859064427 +0000 UTC m=+20.136958330" watchObservedRunningTime="2026-04-16 02:36:45.859478271 +0000 UTC m=+20.137372185" Apr 16 02:36:46.795142 kubelet[2739]: E0416 02:36:46.795049 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-m5mwv" podUID="3833f640-bfff-4575-abe3-06fdc906d199" Apr 16 02:36:46.849361 kubelet[2739]: I0416 02:36:46.849293 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:36:46.849804 kubelet[2739]: E0416 02:36:46.849653 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:48.795018 kubelet[2739]: E0416 02:36:48.794952 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m5mwv" podUID="3833f640-bfff-4575-abe3-06fdc906d199" Apr 16 02:36:49.062335 update_engine[1566]: I20260416 02:36:49.062192 1566 update_attempter.cc:509] Updating boot flags... Apr 16 02:36:50.360044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116212416.mount: Deactivated successfully. 
Apr 16 02:36:50.576744 containerd[1578]: time="2026-04-16T02:36:50.576686008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:50.577282 containerd[1578]: time="2026-04-16T02:36:50.577254058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 16 02:36:50.577943 containerd[1578]: time="2026-04-16T02:36:50.577878112Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:50.579612 containerd[1578]: time="2026-04-16T02:36:50.579566768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:50.580000 containerd[1578]: time="2026-04-16T02:36:50.579958274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 5.546363484s" Apr 16 02:36:50.580000 containerd[1578]: time="2026-04-16T02:36:50.579991898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 16 02:36:50.583119 containerd[1578]: time="2026-04-16T02:36:50.583091746Z" level=info msg="CreateContainer within sandbox \"33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 02:36:50.590316 containerd[1578]: time="2026-04-16T02:36:50.590278378Z" level=info msg="Container 
b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:50.623773 containerd[1578]: time="2026-04-16T02:36:50.623669843Z" level=info msg="CreateContainer within sandbox \"33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c\"" Apr 16 02:36:50.624238 containerd[1578]: time="2026-04-16T02:36:50.624211857Z" level=info msg="StartContainer for \"b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c\"" Apr 16 02:36:50.625561 containerd[1578]: time="2026-04-16T02:36:50.625513397Z" level=info msg="connecting to shim b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c" address="unix:///run/containerd/s/edde5925ef096ba6e88b473ea4a6c473ae56592346f5fe213b9b85c6fedf24bf" protocol=ttrpc version=3 Apr 16 02:36:50.644358 systemd[1]: Started cri-containerd-b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c.scope - libcontainer container b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c. Apr 16 02:36:50.704414 containerd[1578]: time="2026-04-16T02:36:50.704256937Z" level=info msg="StartContainer for \"b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c\" returns successfully" Apr 16 02:36:50.741838 systemd[1]: cri-containerd-b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c.scope: Deactivated successfully. 
Apr 16 02:36:50.749315 containerd[1578]: time="2026-04-16T02:36:50.749245125Z" level=info msg="received container exit event container_id:\"b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c\" id:\"b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c\" pid:3424 exited_at:{seconds:1776307010 nanos:742488004}" Apr 16 02:36:50.794705 kubelet[2739]: E0416 02:36:50.794604 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m5mwv" podUID="3833f640-bfff-4575-abe3-06fdc906d199" Apr 16 02:36:50.860196 containerd[1578]: time="2026-04-16T02:36:50.859995678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 02:36:51.359918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8210db689e97094c1486019774c91fc4833ec11a60fb0fc880e9b093e6b2e8c-rootfs.mount: Deactivated successfully. 
Apr 16 02:36:52.795032 kubelet[2739]: E0416 02:36:52.794947 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m5mwv" podUID="3833f640-bfff-4575-abe3-06fdc906d199" Apr 16 02:36:53.916615 containerd[1578]: time="2026-04-16T02:36:53.916569440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:53.917091 containerd[1578]: time="2026-04-16T02:36:53.917069174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 16 02:36:53.917836 containerd[1578]: time="2026-04-16T02:36:53.917810661Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:53.919654 containerd[1578]: time="2026-04-16T02:36:53.919606387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:53.920190 containerd[1578]: time="2026-04-16T02:36:53.920162925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.060139615s" Apr 16 02:36:53.920190 containerd[1578]: time="2026-04-16T02:36:53.920190761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 16 02:36:53.927466 containerd[1578]: time="2026-04-16T02:36:53.927440649Z" level=info msg="CreateContainer within sandbox \"33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 02:36:53.933141 containerd[1578]: time="2026-04-16T02:36:53.933090094Z" level=info msg="Container 513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:53.940200 containerd[1578]: time="2026-04-16T02:36:53.940158653Z" level=info msg="CreateContainer within sandbox \"33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d\"" Apr 16 02:36:53.940623 containerd[1578]: time="2026-04-16T02:36:53.940608424Z" level=info msg="StartContainer for \"513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d\"" Apr 16 02:36:53.941696 containerd[1578]: time="2026-04-16T02:36:53.941671233Z" level=info msg="connecting to shim 513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d" address="unix:///run/containerd/s/edde5925ef096ba6e88b473ea4a6c473ae56592346f5fe213b9b85c6fedf24bf" protocol=ttrpc version=3 Apr 16 02:36:53.956282 systemd[1]: Started cri-containerd-513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d.scope - libcontainer container 513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d. Apr 16 02:36:54.012142 containerd[1578]: time="2026-04-16T02:36:54.012069983Z" level=info msg="StartContainer for \"513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d\" returns successfully" Apr 16 02:36:54.398889 systemd[1]: cri-containerd-513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d.scope: Deactivated successfully. 
Apr 16 02:36:54.399396 systemd[1]: cri-containerd-513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d.scope: Consumed 413ms CPU time, 179.4M memory peak, 3.3M read from disk, 177M written to disk. Apr 16 02:36:54.400824 containerd[1578]: time="2026-04-16T02:36:54.400791438Z" level=info msg="received container exit event container_id:\"513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d\" id:\"513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d\" pid:3481 exited_at:{seconds:1776307014 nanos:400575507}" Apr 16 02:36:54.429212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-513a92ec7f672f51610b8d81d9438d7ce5872e3b811474cb2daec69da401046d-rootfs.mount: Deactivated successfully. Apr 16 02:36:54.482967 kubelet[2739]: I0416 02:36:54.482897 2739 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 16 02:36:54.521102 systemd[1]: Created slice kubepods-besteffort-pod8f7b7bfb_5c78_492d_a062_8ba6efa8d0a7.slice - libcontainer container kubepods-besteffort-pod8f7b7bfb_5c78_492d_a062_8ba6efa8d0a7.slice. Apr 16 02:36:54.530046 systemd[1]: Created slice kubepods-burstable-podab52d165_5aee_45ff_9e0b_6d96530041af.slice - libcontainer container kubepods-burstable-podab52d165_5aee_45ff_9e0b_6d96530041af.slice. Apr 16 02:36:54.536075 systemd[1]: Created slice kubepods-besteffort-pod39d14cd0_33da_4d8f_9341_376848f3ccb4.slice - libcontainer container kubepods-besteffort-pod39d14cd0_33da_4d8f_9341_376848f3ccb4.slice. Apr 16 02:36:54.544811 systemd[1]: Created slice kubepods-burstable-pod911acf9e_0354_4df7_acfe_04bcad66aad5.slice - libcontainer container kubepods-burstable-pod911acf9e_0354_4df7_acfe_04bcad66aad5.slice. Apr 16 02:36:54.549491 systemd[1]: Created slice kubepods-besteffort-pod35a1f297_9e71_48ed_a8c9_4d9ebba86e8f.slice - libcontainer container kubepods-besteffort-pod35a1f297_9e71_48ed_a8c9_4d9ebba86e8f.slice. 
Apr 16 02:36:54.553647 systemd[1]: Created slice kubepods-besteffort-pod7752b9e6_a845_431d_bcfd_61c0f25a7158.slice - libcontainer container kubepods-besteffort-pod7752b9e6_a845_431d_bcfd_61c0f25a7158.slice. Apr 16 02:36:54.558829 systemd[1]: Created slice kubepods-besteffort-pod338dbfb9_c6e4_4a9b_830f_eac73644e324.slice - libcontainer container kubepods-besteffort-pod338dbfb9_c6e4_4a9b_830f_eac73644e324.slice. Apr 16 02:36:54.578094 kubelet[2739]: I0416 02:36:54.578061 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/39d14cd0-33da-4d8f-9341-376848f3ccb4-calico-apiserver-certs\") pod \"calico-apiserver-5d54b44d94-vh4sj\" (UID: \"39d14cd0-33da-4d8f-9341-376848f3ccb4\") " pod="calico-system/calico-apiserver-5d54b44d94-vh4sj" Apr 16 02:36:54.578094 kubelet[2739]: I0416 02:36:54.578093 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhlzw\" (UniqueName: \"kubernetes.io/projected/ab52d165-5aee-45ff-9e0b-6d96530041af-kube-api-access-zhlzw\") pod \"coredns-66bc5c9577-gq4gt\" (UID: \"ab52d165-5aee-45ff-9e0b-6d96530041af\") " pod="kube-system/coredns-66bc5c9577-gq4gt" Apr 16 02:36:54.578094 kubelet[2739]: I0416 02:36:54.578109 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7752b9e6-a845-431d-bcfd-61c0f25a7158-whisker-ca-bundle\") pod \"whisker-68654f5884-znctg\" (UID: \"7752b9e6-a845-431d-bcfd-61c0f25a7158\") " pod="calico-system/whisker-68654f5884-znctg" Apr 16 02:36:54.578344 kubelet[2739]: I0416 02:36:54.578141 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz4k8\" (UniqueName: \"kubernetes.io/projected/35a1f297-9e71-48ed-a8c9-4d9ebba86e8f-kube-api-access-fz4k8\") pod 
\"calico-kube-controllers-5c6c55d9c-kstss\" (UID: \"35a1f297-9e71-48ed-a8c9-4d9ebba86e8f\") " pod="calico-system/calico-kube-controllers-5c6c55d9c-kstss" Apr 16 02:36:54.578344 kubelet[2739]: I0416 02:36:54.578204 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7-calico-apiserver-certs\") pod \"calico-apiserver-5d54b44d94-4wngf\" (UID: \"8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7\") " pod="calico-system/calico-apiserver-5d54b44d94-4wngf" Apr 16 02:36:54.578344 kubelet[2739]: I0416 02:36:54.578285 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sr7j\" (UniqueName: \"kubernetes.io/projected/39d14cd0-33da-4d8f-9341-376848f3ccb4-kube-api-access-4sr7j\") pod \"calico-apiserver-5d54b44d94-vh4sj\" (UID: \"39d14cd0-33da-4d8f-9341-376848f3ccb4\") " pod="calico-system/calico-apiserver-5d54b44d94-vh4sj" Apr 16 02:36:54.578344 kubelet[2739]: I0416 02:36:54.578304 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7752b9e6-a845-431d-bcfd-61c0f25a7158-nginx-config\") pod \"whisker-68654f5884-znctg\" (UID: \"7752b9e6-a845-431d-bcfd-61c0f25a7158\") " pod="calico-system/whisker-68654f5884-znctg" Apr 16 02:36:54.578468 kubelet[2739]: I0416 02:36:54.578377 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab52d165-5aee-45ff-9e0b-6d96530041af-config-volume\") pod \"coredns-66bc5c9577-gq4gt\" (UID: \"ab52d165-5aee-45ff-9e0b-6d96530041af\") " pod="kube-system/coredns-66bc5c9577-gq4gt" Apr 16 02:36:54.578468 kubelet[2739]: I0416 02:36:54.578395 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/338dbfb9-c6e4-4a9b-830f-eac73644e324-config\") pod \"goldmane-cccfbd5cf-h7vtx\" (UID: \"338dbfb9-c6e4-4a9b-830f-eac73644e324\") " pod="calico-system/goldmane-cccfbd5cf-h7vtx" Apr 16 02:36:54.578468 kubelet[2739]: I0416 02:36:54.578407 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn889\" (UniqueName: \"kubernetes.io/projected/338dbfb9-c6e4-4a9b-830f-eac73644e324-kube-api-access-bn889\") pod \"goldmane-cccfbd5cf-h7vtx\" (UID: \"338dbfb9-c6e4-4a9b-830f-eac73644e324\") " pod="calico-system/goldmane-cccfbd5cf-h7vtx" Apr 16 02:36:54.578468 kubelet[2739]: I0416 02:36:54.578424 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngpgs\" (UniqueName: \"kubernetes.io/projected/911acf9e-0354-4df7-acfe-04bcad66aad5-kube-api-access-ngpgs\") pod \"coredns-66bc5c9577-tln7v\" (UID: \"911acf9e-0354-4df7-acfe-04bcad66aad5\") " pod="kube-system/coredns-66bc5c9577-tln7v" Apr 16 02:36:54.578468 kubelet[2739]: I0416 02:36:54.578446 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7752b9e6-a845-431d-bcfd-61c0f25a7158-whisker-backend-key-pair\") pod \"whisker-68654f5884-znctg\" (UID: \"7752b9e6-a845-431d-bcfd-61c0f25a7158\") " pod="calico-system/whisker-68654f5884-znctg" Apr 16 02:36:54.578615 kubelet[2739]: I0416 02:36:54.578458 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpxvc\" (UniqueName: \"kubernetes.io/projected/7752b9e6-a845-431d-bcfd-61c0f25a7158-kube-api-access-dpxvc\") pod \"whisker-68654f5884-znctg\" (UID: \"7752b9e6-a845-431d-bcfd-61c0f25a7158\") " pod="calico-system/whisker-68654f5884-znctg" Apr 16 02:36:54.578615 kubelet[2739]: I0416 02:36:54.578469 2739 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/338dbfb9-c6e4-4a9b-830f-eac73644e324-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-h7vtx\" (UID: \"338dbfb9-c6e4-4a9b-830f-eac73644e324\") " pod="calico-system/goldmane-cccfbd5cf-h7vtx" Apr 16 02:36:54.578615 kubelet[2739]: I0416 02:36:54.578483 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sldnq\" (UniqueName: \"kubernetes.io/projected/8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7-kube-api-access-sldnq\") pod \"calico-apiserver-5d54b44d94-4wngf\" (UID: \"8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7\") " pod="calico-system/calico-apiserver-5d54b44d94-4wngf" Apr 16 02:36:54.578615 kubelet[2739]: I0416 02:36:54.578493 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/911acf9e-0354-4df7-acfe-04bcad66aad5-config-volume\") pod \"coredns-66bc5c9577-tln7v\" (UID: \"911acf9e-0354-4df7-acfe-04bcad66aad5\") " pod="kube-system/coredns-66bc5c9577-tln7v" Apr 16 02:36:54.578615 kubelet[2739]: I0416 02:36:54.578525 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/338dbfb9-c6e4-4a9b-830f-eac73644e324-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-h7vtx\" (UID: \"338dbfb9-c6e4-4a9b-830f-eac73644e324\") " pod="calico-system/goldmane-cccfbd5cf-h7vtx" Apr 16 02:36:54.578763 kubelet[2739]: I0416 02:36:54.578548 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35a1f297-9e71-48ed-a8c9-4d9ebba86e8f-tigera-ca-bundle\") pod \"calico-kube-controllers-5c6c55d9c-kstss\" (UID: \"35a1f297-9e71-48ed-a8c9-4d9ebba86e8f\") " pod="calico-system/calico-kube-controllers-5c6c55d9c-kstss" Apr 16 
02:36:54.799201 systemd[1]: Created slice kubepods-besteffort-pod3833f640_bfff_4575_abe3_06fdc906d199.slice - libcontainer container kubepods-besteffort-pod3833f640_bfff_4575_abe3_06fdc906d199.slice. Apr 16 02:36:54.804157 containerd[1578]: time="2026-04-16T02:36:54.804103620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m5mwv,Uid:3833f640-bfff-4575-abe3-06fdc906d199,Namespace:calico-system,Attempt:0,}" Apr 16 02:36:54.827998 containerd[1578]: time="2026-04-16T02:36:54.827934151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54b44d94-4wngf,Uid:8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7,Namespace:calico-system,Attempt:0,}" Apr 16 02:36:54.836960 kubelet[2739]: E0416 02:36:54.836878 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:54.838710 containerd[1578]: time="2026-04-16T02:36:54.838670899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gq4gt,Uid:ab52d165-5aee-45ff-9e0b-6d96530041af,Namespace:kube-system,Attempt:0,}" Apr 16 02:36:54.844849 containerd[1578]: time="2026-04-16T02:36:54.844770158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54b44d94-vh4sj,Uid:39d14cd0-33da-4d8f-9341-376848f3ccb4,Namespace:calico-system,Attempt:0,}" Apr 16 02:36:54.849642 kubelet[2739]: E0416 02:36:54.849493 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:36:54.850358 containerd[1578]: time="2026-04-16T02:36:54.850309831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tln7v,Uid:911acf9e-0354-4df7-acfe-04bcad66aad5,Namespace:kube-system,Attempt:0,}" Apr 16 02:36:54.856776 containerd[1578]: time="2026-04-16T02:36:54.856670824Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-5c6c55d9c-kstss,Uid:35a1f297-9e71-48ed-a8c9-4d9ebba86e8f,Namespace:calico-system,Attempt:0,}" Apr 16 02:36:54.858614 containerd[1578]: time="2026-04-16T02:36:54.858579016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68654f5884-znctg,Uid:7752b9e6-a845-431d-bcfd-61c0f25a7158,Namespace:calico-system,Attempt:0,}" Apr 16 02:36:54.866118 containerd[1578]: time="2026-04-16T02:36:54.866073559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-h7vtx,Uid:338dbfb9-c6e4-4a9b-830f-eac73644e324,Namespace:calico-system,Attempt:0,}" Apr 16 02:36:54.895722 containerd[1578]: time="2026-04-16T02:36:54.894537868Z" level=info msg="CreateContainer within sandbox \"33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 02:36:54.965631 containerd[1578]: time="2026-04-16T02:36:54.965568122Z" level=info msg="Container 52519602fef4beebcf33380c4bf065402d07a1b45807a63e78f22c00b3ef3a6f: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:54.965822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665208916.mount: Deactivated successfully. 
Apr 16 02:36:54.978494 containerd[1578]: time="2026-04-16T02:36:54.978342275Z" level=error msg="Failed to destroy network for sandbox \"8025bbb7aaba41ae0857d8e018cf1675721c472abe5bff00ca9d42754ea5927d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:54.978794 containerd[1578]: time="2026-04-16T02:36:54.978647651Z" level=error msg="Failed to destroy network for sandbox \"477d761d913c625014737944fe138cc65f812772a3416c1a12caad7c76878428\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:54.980480 systemd[1]: run-netns-cni\x2d7034347d\x2d7e0e\x2d3f8a\x2d1c4e\x2d541df0401f89.mount: Deactivated successfully. Apr 16 02:36:54.980593 systemd[1]: run-netns-cni\x2dff7c4bc9\x2df9ce\x2dc523\x2d22f5\x2dbcf3dc982fc6.mount: Deactivated successfully. 
Apr 16 02:36:54.987790 containerd[1578]: time="2026-04-16T02:36:54.987707734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54b44d94-4wngf,Uid:8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8025bbb7aaba41ae0857d8e018cf1675721c472abe5bff00ca9d42754ea5927d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:54.993262 containerd[1578]: time="2026-04-16T02:36:54.992754536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gq4gt,Uid:ab52d165-5aee-45ff-9e0b-6d96530041af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"477d761d913c625014737944fe138cc65f812772a3416c1a12caad7c76878428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:54.994868 containerd[1578]: time="2026-04-16T02:36:54.994819355Z" level=info msg="CreateContainer within sandbox \"33a7309628e02d58bfcf864491bfd14a9c3678b850cfbd274e597a3108f4d6d8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"52519602fef4beebcf33380c4bf065402d07a1b45807a63e78f22c00b3ef3a6f\"" Apr 16 02:36:54.996620 containerd[1578]: time="2026-04-16T02:36:54.996597456Z" level=error msg="Failed to destroy network for sandbox \"f114fc98f70f7978661a27d26cdaf43e440c036b56bde0670c96fbfe8ca6e6a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:54.997902 kubelet[2739]: E0416 02:36:54.997539 2739 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477d761d913c625014737944fe138cc65f812772a3416c1a12caad7c76878428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:54.997902 kubelet[2739]: E0416 02:36:54.997598 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8025bbb7aaba41ae0857d8e018cf1675721c472abe5bff00ca9d42754ea5927d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:54.997902 kubelet[2739]: E0416 02:36:54.997698 2739 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477d761d913c625014737944fe138cc65f812772a3416c1a12caad7c76878428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gq4gt" Apr 16 02:36:54.997902 kubelet[2739]: E0416 02:36:54.997722 2739 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"477d761d913c625014737944fe138cc65f812772a3416c1a12caad7c76878428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gq4gt" Apr 16 02:36:54.998248 containerd[1578]: time="2026-04-16T02:36:54.997727366Z" level=info msg="StartContainer for \"52519602fef4beebcf33380c4bf065402d07a1b45807a63e78f22c00b3ef3a6f\"" Apr 16 02:36:54.998277 kubelet[2739]: E0416 
02:36:54.997756 2739 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8025bbb7aaba41ae0857d8e018cf1675721c472abe5bff00ca9d42754ea5927d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d54b44d94-4wngf" Apr 16 02:36:54.998277 kubelet[2739]: E0416 02:36:54.997765 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gq4gt_kube-system(ab52d165-5aee-45ff-9e0b-6d96530041af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gq4gt_kube-system(ab52d165-5aee-45ff-9e0b-6d96530041af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"477d761d913c625014737944fe138cc65f812772a3416c1a12caad7c76878428\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gq4gt" podUID="ab52d165-5aee-45ff-9e0b-6d96530041af" Apr 16 02:36:54.998277 kubelet[2739]: E0416 02:36:54.997776 2739 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8025bbb7aaba41ae0857d8e018cf1675721c472abe5bff00ca9d42754ea5927d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d54b44d94-4wngf" Apr 16 02:36:54.998418 kubelet[2739]: E0416 02:36:54.997807 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5d54b44d94-4wngf_calico-system(8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d54b44d94-4wngf_calico-system(8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8025bbb7aaba41ae0857d8e018cf1675721c472abe5bff00ca9d42754ea5927d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5d54b44d94-4wngf" podUID="8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7" Apr 16 02:36:54.999198 containerd[1578]: time="2026-04-16T02:36:54.999063705Z" level=info msg="connecting to shim 52519602fef4beebcf33380c4bf065402d07a1b45807a63e78f22c00b3ef3a6f" address="unix:///run/containerd/s/edde5925ef096ba6e88b473ea4a6c473ae56592346f5fe213b9b85c6fedf24bf" protocol=ttrpc version=3 Apr 16 02:36:55.004032 containerd[1578]: time="2026-04-16T02:36:55.003764278Z" level=error msg="Failed to destroy network for sandbox \"266f4f7391dd3be365586fdf7dc36c845646630d406d9de81bbc5317fe73c129\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.020218 containerd[1578]: time="2026-04-16T02:36:55.020180013Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m5mwv,Uid:3833f640-bfff-4575-abe3-06fdc906d199,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f114fc98f70f7978661a27d26cdaf43e440c036b56bde0670c96fbfe8ca6e6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.020971 containerd[1578]: 
time="2026-04-16T02:36:55.020932293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6c55d9c-kstss,Uid:35a1f297-9e71-48ed-a8c9-4d9ebba86e8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"266f4f7391dd3be365586fdf7dc36c845646630d406d9de81bbc5317fe73c129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.021156 kubelet[2739]: E0416 02:36:55.021101 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266f4f7391dd3be365586fdf7dc36c845646630d406d9de81bbc5317fe73c129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.021592 kubelet[2739]: E0416 02:36:55.021554 2739 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266f4f7391dd3be365586fdf7dc36c845646630d406d9de81bbc5317fe73c129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c6c55d9c-kstss" Apr 16 02:36:55.021626 kubelet[2739]: E0416 02:36:55.021590 2739 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266f4f7391dd3be365586fdf7dc36c845646630d406d9de81bbc5317fe73c129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5c6c55d9c-kstss" Apr 16 02:36:55.021661 kubelet[2739]: E0416 02:36:55.021633 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c6c55d9c-kstss_calico-system(35a1f297-9e71-48ed-a8c9-4d9ebba86e8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c6c55d9c-kstss_calico-system(35a1f297-9e71-48ed-a8c9-4d9ebba86e8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"266f4f7391dd3be365586fdf7dc36c845646630d406d9de81bbc5317fe73c129\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c6c55d9c-kstss" podUID="35a1f297-9e71-48ed-a8c9-4d9ebba86e8f" Apr 16 02:36:55.022337 kubelet[2739]: E0416 02:36:55.022314 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f114fc98f70f7978661a27d26cdaf43e440c036b56bde0670c96fbfe8ca6e6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.022390 kubelet[2739]: E0416 02:36:55.022350 2739 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f114fc98f70f7978661a27d26cdaf43e440c036b56bde0670c96fbfe8ca6e6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m5mwv" Apr 16 02:36:55.022390 kubelet[2739]: E0416 02:36:55.022366 2739 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"f114fc98f70f7978661a27d26cdaf43e440c036b56bde0670c96fbfe8ca6e6a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m5mwv" Apr 16 02:36:55.022508 kubelet[2739]: E0416 02:36:55.022415 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m5mwv_calico-system(3833f640-bfff-4575-abe3-06fdc906d199)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m5mwv_calico-system(3833f640-bfff-4575-abe3-06fdc906d199)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f114fc98f70f7978661a27d26cdaf43e440c036b56bde0670c96fbfe8ca6e6a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m5mwv" podUID="3833f640-bfff-4575-abe3-06fdc906d199" Apr 16 02:36:55.023617 containerd[1578]: time="2026-04-16T02:36:55.023592265Z" level=error msg="Failed to destroy network for sandbox \"b7039f695d02bebc21a4ef85e74c896fdb1fc5c4459e3d8ce1a7530d599e44b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.025525 containerd[1578]: time="2026-04-16T02:36:55.025288514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-h7vtx,Uid:338dbfb9-c6e4-4a9b-830f-eac73644e324,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7039f695d02bebc21a4ef85e74c896fdb1fc5c4459e3d8ce1a7530d599e44b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.025620 kubelet[2739]: E0416 02:36:55.025452 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7039f695d02bebc21a4ef85e74c896fdb1fc5c4459e3d8ce1a7530d599e44b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.025620 kubelet[2739]: E0416 02:36:55.025504 2739 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7039f695d02bebc21a4ef85e74c896fdb1fc5c4459e3d8ce1a7530d599e44b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-h7vtx" Apr 16 02:36:55.025620 kubelet[2739]: E0416 02:36:55.025523 2739 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7039f695d02bebc21a4ef85e74c896fdb1fc5c4459e3d8ce1a7530d599e44b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-h7vtx" Apr 16 02:36:55.025358 systemd[1]: Started cri-containerd-52519602fef4beebcf33380c4bf065402d07a1b45807a63e78f22c00b3ef3a6f.scope - libcontainer container 52519602fef4beebcf33380c4bf065402d07a1b45807a63e78f22c00b3ef3a6f. 
Apr 16 02:36:55.025776 kubelet[2739]: E0416 02:36:55.025557 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-h7vtx_calico-system(338dbfb9-c6e4-4a9b-830f-eac73644e324)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-h7vtx_calico-system(338dbfb9-c6e4-4a9b-830f-eac73644e324)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7039f695d02bebc21a4ef85e74c896fdb1fc5c4459e3d8ce1a7530d599e44b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-h7vtx" podUID="338dbfb9-c6e4-4a9b-830f-eac73644e324" Apr 16 02:36:55.028409 containerd[1578]: time="2026-04-16T02:36:55.028361360Z" level=error msg="Failed to destroy network for sandbox \"f61a2858433ef28cd79a8fe9af6e53468c176321ac2e84e958ba35a3229ce9d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.031666 containerd[1578]: time="2026-04-16T02:36:55.031597802Z" level=error msg="Failed to destroy network for sandbox \"67987dbaffd9dcf9d91db379966679d0b4576e401e6f5e047649cca182f132ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.046673 containerd[1578]: time="2026-04-16T02:36:55.031884047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68654f5884-znctg,Uid:7752b9e6-a845-431d-bcfd-61c0f25a7158,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f61a2858433ef28cd79a8fe9af6e53468c176321ac2e84e958ba35a3229ce9d5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.046855 containerd[1578]: time="2026-04-16T02:36:55.033588694Z" level=error msg="Failed to destroy network for sandbox \"9eaa525c31657cabe3e116ab430599ec118856b892c1ff20de93d9ee1af428ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.046855 containerd[1578]: time="2026-04-16T02:36:55.033819093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tln7v,Uid:911acf9e-0354-4df7-acfe-04bcad66aad5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67987dbaffd9dcf9d91db379966679d0b4576e401e6f5e047649cca182f132ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.047230 kubelet[2739]: E0416 02:36:55.047179 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f61a2858433ef28cd79a8fe9af6e53468c176321ac2e84e958ba35a3229ce9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.047314 kubelet[2739]: E0416 02:36:55.047258 2739 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f61a2858433ef28cd79a8fe9af6e53468c176321ac2e84e958ba35a3229ce9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/whisker-68654f5884-znctg" Apr 16 02:36:55.047314 kubelet[2739]: E0416 02:36:55.047274 2739 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f61a2858433ef28cd79a8fe9af6e53468c176321ac2e84e958ba35a3229ce9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68654f5884-znctg" Apr 16 02:36:55.047375 kubelet[2739]: E0416 02:36:55.047312 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67987dbaffd9dcf9d91db379966679d0b4576e401e6f5e047649cca182f132ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.047375 kubelet[2739]: E0416 02:36:55.047333 2739 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67987dbaffd9dcf9d91db379966679d0b4576e401e6f5e047649cca182f132ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tln7v" Apr 16 02:36:55.047375 kubelet[2739]: E0416 02:36:55.047344 2739 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67987dbaffd9dcf9d91db379966679d0b4576e401e6f5e047649cca182f132ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tln7v" Apr 16 
02:36:55.047442 kubelet[2739]: E0416 02:36:55.047383 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tln7v_kube-system(911acf9e-0354-4df7-acfe-04bcad66aad5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tln7v_kube-system(911acf9e-0354-4df7-acfe-04bcad66aad5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67987dbaffd9dcf9d91db379966679d0b4576e401e6f5e047649cca182f132ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tln7v" podUID="911acf9e-0354-4df7-acfe-04bcad66aad5" Apr 16 02:36:55.047537 kubelet[2739]: E0416 02:36:55.047510 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68654f5884-znctg_calico-system(7752b9e6-a845-431d-bcfd-61c0f25a7158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68654f5884-znctg_calico-system(7752b9e6-a845-431d-bcfd-61c0f25a7158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f61a2858433ef28cd79a8fe9af6e53468c176321ac2e84e958ba35a3229ce9d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68654f5884-znctg" podUID="7752b9e6-a845-431d-bcfd-61c0f25a7158" Apr 16 02:36:55.048160 containerd[1578]: time="2026-04-16T02:36:55.048097123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54b44d94-vh4sj,Uid:39d14cd0-33da-4d8f-9341-376848f3ccb4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9eaa525c31657cabe3e116ab430599ec118856b892c1ff20de93d9ee1af428ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.048339 kubelet[2739]: E0416 02:36:55.048312 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eaa525c31657cabe3e116ab430599ec118856b892c1ff20de93d9ee1af428ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 02:36:55.048416 kubelet[2739]: E0416 02:36:55.048349 2739 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eaa525c31657cabe3e116ab430599ec118856b892c1ff20de93d9ee1af428ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d54b44d94-vh4sj" Apr 16 02:36:55.048416 kubelet[2739]: E0416 02:36:55.048362 2739 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eaa525c31657cabe3e116ab430599ec118856b892c1ff20de93d9ee1af428ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5d54b44d94-vh4sj" Apr 16 02:36:55.048416 kubelet[2739]: E0416 02:36:55.048404 2739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d54b44d94-vh4sj_calico-system(39d14cd0-33da-4d8f-9341-376848f3ccb4)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-5d54b44d94-vh4sj_calico-system(39d14cd0-33da-4d8f-9341-376848f3ccb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9eaa525c31657cabe3e116ab430599ec118856b892c1ff20de93d9ee1af428ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5d54b44d94-vh4sj" podUID="39d14cd0-33da-4d8f-9341-376848f3ccb4" Apr 16 02:36:55.087192 containerd[1578]: time="2026-04-16T02:36:55.086856063Z" level=info msg="StartContainer for \"52519602fef4beebcf33380c4bf065402d07a1b45807a63e78f22c00b3ef3a6f\" returns successfully" Apr 16 02:36:55.913634 kubelet[2739]: I0416 02:36:55.913449 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d8dc2" podStartSLOduration=3.041134992 podStartE2EDuration="15.913434921s" podCreationTimestamp="2026-04-16 02:36:40 +0000 UTC" firstStartedPulling="2026-04-16 02:36:41.048527044 +0000 UTC m=+15.326420944" lastFinishedPulling="2026-04-16 02:36:53.920826974 +0000 UTC m=+28.198720873" observedRunningTime="2026-04-16 02:36:55.911883235 +0000 UTC m=+30.189777133" watchObservedRunningTime="2026-04-16 02:36:55.913434921 +0000 UTC m=+30.191328831" Apr 16 02:36:55.935402 systemd[1]: run-netns-cni\x2d80db36de\x2d6e1f\x2df5ec\x2df702\x2d0be30445afca.mount: Deactivated successfully. Apr 16 02:36:55.935480 systemd[1]: run-netns-cni\x2d4b62f598\x2dd07b\x2dafd2\x2dd2a6\x2d32e45e795622.mount: Deactivated successfully. Apr 16 02:36:55.935519 systemd[1]: run-netns-cni\x2de79d5623\x2d8f78\x2dcfd2\x2d2933\x2d90c5e823a1fb.mount: Deactivated successfully. Apr 16 02:36:55.935556 systemd[1]: run-netns-cni\x2db2de53a9\x2de7e8\x2d5f52\x2d1f43\x2dfe0250aede8d.mount: Deactivated successfully. 
Apr 16 02:36:55.935589 systemd[1]: run-netns-cni\x2d5def8dc8\x2d48eb\x2d2f7c\x2d636d\x2d48249d943e1a.mount: Deactivated successfully. Apr 16 02:36:55.935623 systemd[1]: run-netns-cni\x2d919200ec\x2da20b\x2d6e71\x2d17f2\x2d392e701128e3.mount: Deactivated successfully. Apr 16 02:36:55.989248 kubelet[2739]: I0416 02:36:55.989173 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7752b9e6-a845-431d-bcfd-61c0f25a7158-whisker-ca-bundle\") pod \"7752b9e6-a845-431d-bcfd-61c0f25a7158\" (UID: \"7752b9e6-a845-431d-bcfd-61c0f25a7158\") " Apr 16 02:36:55.989248 kubelet[2739]: I0416 02:36:55.989221 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7752b9e6-a845-431d-bcfd-61c0f25a7158-whisker-backend-key-pair\") pod \"7752b9e6-a845-431d-bcfd-61c0f25a7158\" (UID: \"7752b9e6-a845-431d-bcfd-61c0f25a7158\") " Apr 16 02:36:55.989248 kubelet[2739]: I0416 02:36:55.989241 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpxvc\" (UniqueName: \"kubernetes.io/projected/7752b9e6-a845-431d-bcfd-61c0f25a7158-kube-api-access-dpxvc\") pod \"7752b9e6-a845-431d-bcfd-61c0f25a7158\" (UID: \"7752b9e6-a845-431d-bcfd-61c0f25a7158\") " Apr 16 02:36:55.989248 kubelet[2739]: I0416 02:36:55.989263 2739 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7752b9e6-a845-431d-bcfd-61c0f25a7158-nginx-config\") pod \"7752b9e6-a845-431d-bcfd-61c0f25a7158\" (UID: \"7752b9e6-a845-431d-bcfd-61c0f25a7158\") " Apr 16 02:36:55.989665 kubelet[2739]: I0416 02:36:55.989606 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7752b9e6-a845-431d-bcfd-61c0f25a7158-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"7752b9e6-a845-431d-bcfd-61c0f25a7158" (UID: "7752b9e6-a845-431d-bcfd-61c0f25a7158"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 02:36:55.989744 kubelet[2739]: I0416 02:36:55.989718 2739 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7752b9e6-a845-431d-bcfd-61c0f25a7158-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 16 02:36:55.990192 kubelet[2739]: I0416 02:36:55.990036 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7752b9e6-a845-431d-bcfd-61c0f25a7158-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "7752b9e6-a845-431d-bcfd-61c0f25a7158" (UID: "7752b9e6-a845-431d-bcfd-61c0f25a7158"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 02:36:55.993648 systemd[1]: var-lib-kubelet-pods-7752b9e6\x2da845\x2d431d\x2dbcfd\x2d61c0f25a7158-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddpxvc.mount: Deactivated successfully. Apr 16 02:36:55.993739 systemd[1]: var-lib-kubelet-pods-7752b9e6\x2da845\x2d431d\x2dbcfd\x2d61c0f25a7158-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 02:36:55.994012 kubelet[2739]: I0416 02:36:55.993996 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7752b9e6-a845-431d-bcfd-61c0f25a7158-kube-api-access-dpxvc" (OuterVolumeSpecName: "kube-api-access-dpxvc") pod "7752b9e6-a845-431d-bcfd-61c0f25a7158" (UID: "7752b9e6-a845-431d-bcfd-61c0f25a7158"). InnerVolumeSpecName "kube-api-access-dpxvc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 02:36:55.994084 kubelet[2739]: I0416 02:36:55.994005 2739 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7752b9e6-a845-431d-bcfd-61c0f25a7158-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7752b9e6-a845-431d-bcfd-61c0f25a7158" (UID: "7752b9e6-a845-431d-bcfd-61c0f25a7158"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 02:36:56.090645 kubelet[2739]: I0416 02:36:56.090566 2739 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dpxvc\" (UniqueName: \"kubernetes.io/projected/7752b9e6-a845-431d-bcfd-61c0f25a7158-kube-api-access-dpxvc\") on node \"localhost\" DevicePath \"\"" Apr 16 02:36:56.090645 kubelet[2739]: I0416 02:36:56.090611 2739 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7752b9e6-a845-431d-bcfd-61c0f25a7158-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 16 02:36:56.090645 kubelet[2739]: I0416 02:36:56.090624 2739 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7752b9e6-a845-431d-bcfd-61c0f25a7158-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 16 02:36:56.895415 systemd[1]: Removed slice kubepods-besteffort-pod7752b9e6_a845_431d_bcfd_61c0f25a7158.slice - libcontainer container kubepods-besteffort-pod7752b9e6_a845_431d_bcfd_61c0f25a7158.slice. Apr 16 02:36:56.953018 systemd[1]: Created slice kubepods-besteffort-podbd2c2517_fd63_4388_8d3d_7d12851f0202.slice - libcontainer container kubepods-besteffort-podbd2c2517_fd63_4388_8d3d_7d12851f0202.slice. 
Apr 16 02:36:56.995323 kubelet[2739]: I0416 02:36:56.995215 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd2c2517-fd63-4388-8d3d-7d12851f0202-nginx-config\") pod \"whisker-59468fd8d9-gzdg6\" (UID: \"bd2c2517-fd63-4388-8d3d-7d12851f0202\") " pod="calico-system/whisker-59468fd8d9-gzdg6" Apr 16 02:36:56.995323 kubelet[2739]: I0416 02:36:56.995290 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd2c2517-fd63-4388-8d3d-7d12851f0202-whisker-backend-key-pair\") pod \"whisker-59468fd8d9-gzdg6\" (UID: \"bd2c2517-fd63-4388-8d3d-7d12851f0202\") " pod="calico-system/whisker-59468fd8d9-gzdg6" Apr 16 02:36:56.995323 kubelet[2739]: I0416 02:36:56.995327 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vxhh\" (UniqueName: \"kubernetes.io/projected/bd2c2517-fd63-4388-8d3d-7d12851f0202-kube-api-access-6vxhh\") pod \"whisker-59468fd8d9-gzdg6\" (UID: \"bd2c2517-fd63-4388-8d3d-7d12851f0202\") " pod="calico-system/whisker-59468fd8d9-gzdg6" Apr 16 02:36:56.995756 kubelet[2739]: I0416 02:36:56.995456 2739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd2c2517-fd63-4388-8d3d-7d12851f0202-whisker-ca-bundle\") pod \"whisker-59468fd8d9-gzdg6\" (UID: \"bd2c2517-fd63-4388-8d3d-7d12851f0202\") " pod="calico-system/whisker-59468fd8d9-gzdg6" Apr 16 02:36:57.259079 containerd[1578]: time="2026-04-16T02:36:57.258901267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59468fd8d9-gzdg6,Uid:bd2c2517-fd63-4388-8d3d-7d12851f0202,Namespace:calico-system,Attempt:0,}" Apr 16 02:36:57.352631 systemd-networkd[1492]: cali0619fdc8b15: Link UP Apr 16 02:36:57.352966 systemd-networkd[1492]: 
cali0619fdc8b15: Gained carrier Apr 16 02:36:57.365508 containerd[1578]: 2026-04-16 02:36:57.278 [ERROR][4009] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 02:36:57.365508 containerd[1578]: 2026-04-16 02:36:57.295 [INFO][4009] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59468fd8d9--gzdg6-eth0 whisker-59468fd8d9- calico-system bd2c2517-fd63-4388-8d3d-7d12851f0202 889 0 2026-04-16 02:36:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59468fd8d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59468fd8d9-gzdg6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0619fdc8b15 [] [] }} ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Namespace="calico-system" Pod="whisker-59468fd8d9-gzdg6" WorkloadEndpoint="localhost-k8s-whisker--59468fd8d9--gzdg6-" Apr 16 02:36:57.365508 containerd[1578]: 2026-04-16 02:36:57.295 [INFO][4009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Namespace="calico-system" Pod="whisker-59468fd8d9-gzdg6" WorkloadEndpoint="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" Apr 16 02:36:57.365508 containerd[1578]: 2026-04-16 02:36:57.317 [INFO][4024] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" HandleID="k8s-pod-network.9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Workload="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.322 [INFO][4024] ipam/ipam_plugin.go 301: Auto 
assigning IP ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" HandleID="k8s-pod-network.9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Workload="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59468fd8d9-gzdg6", "timestamp":"2026-04-16 02:36:57.317094246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fc420)} Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.322 [INFO][4024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.322 [INFO][4024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.322 [INFO][4024] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.325 [INFO][4024] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" host="localhost" Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.328 [INFO][4024] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.331 [INFO][4024] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.332 [INFO][4024] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.334 [INFO][4024] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:36:57.365698 containerd[1578]: 2026-04-16 02:36:57.334 [INFO][4024] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" host="localhost" Apr 16 02:36:57.365871 containerd[1578]: 2026-04-16 02:36:57.335 [INFO][4024] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3 Apr 16 02:36:57.365871 containerd[1578]: 2026-04-16 02:36:57.339 [INFO][4024] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" host="localhost" Apr 16 02:36:57.365871 containerd[1578]: 2026-04-16 02:36:57.343 [INFO][4024] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" host="localhost" Apr 16 02:36:57.365871 containerd[1578]: 2026-04-16 02:36:57.343 [INFO][4024] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" host="localhost" Apr 16 02:36:57.365871 containerd[1578]: 2026-04-16 02:36:57.343 [INFO][4024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:36:57.365871 containerd[1578]: 2026-04-16 02:36:57.343 [INFO][4024] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" HandleID="k8s-pod-network.9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Workload="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" Apr 16 02:36:57.365978 containerd[1578]: 2026-04-16 02:36:57.345 [INFO][4009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Namespace="calico-system" Pod="whisker-59468fd8d9-gzdg6" WorkloadEndpoint="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59468fd8d9--gzdg6-eth0", GenerateName:"whisker-59468fd8d9-", Namespace:"calico-system", SelfLink:"", UID:"bd2c2517-fd63-4388-8d3d-7d12851f0202", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59468fd8d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59468fd8d9-gzdg6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0619fdc8b15", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:36:57.365978 containerd[1578]: 2026-04-16 02:36:57.345 [INFO][4009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Namespace="calico-system" Pod="whisker-59468fd8d9-gzdg6" WorkloadEndpoint="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" Apr 16 02:36:57.366042 containerd[1578]: 2026-04-16 02:36:57.345 [INFO][4009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0619fdc8b15 ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Namespace="calico-system" Pod="whisker-59468fd8d9-gzdg6" WorkloadEndpoint="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" Apr 16 02:36:57.366042 containerd[1578]: 2026-04-16 02:36:57.353 [INFO][4009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Namespace="calico-system" Pod="whisker-59468fd8d9-gzdg6" WorkloadEndpoint="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" Apr 16 02:36:57.366072 containerd[1578]: 2026-04-16 02:36:57.354 [INFO][4009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Namespace="calico-system" Pod="whisker-59468fd8d9-gzdg6" 
WorkloadEndpoint="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59468fd8d9--gzdg6-eth0", GenerateName:"whisker-59468fd8d9-", Namespace:"calico-system", SelfLink:"", UID:"bd2c2517-fd63-4388-8d3d-7d12851f0202", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59468fd8d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3", Pod:"whisker-59468fd8d9-gzdg6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0619fdc8b15", MAC:"d2:33:4a:35:f0:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:36:57.366118 containerd[1578]: 2026-04-16 02:36:57.362 [INFO][4009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" Namespace="calico-system" Pod="whisker-59468fd8d9-gzdg6" WorkloadEndpoint="localhost-k8s-whisker--59468fd8d9--gzdg6-eth0" Apr 16 02:36:57.381082 containerd[1578]: time="2026-04-16T02:36:57.381022927Z" level=info msg="connecting to shim 
9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3" address="unix:///run/containerd/s/263e1444f2f4add47f8e88fd52cc82c72224b874a661e38f926c018f92c810ae" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:36:57.400291 systemd[1]: Started cri-containerd-9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3.scope - libcontainer container 9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3. Apr 16 02:36:57.410525 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:36:57.440239 containerd[1578]: time="2026-04-16T02:36:57.440198659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59468fd8d9-gzdg6,Uid:bd2c2517-fd63-4388-8d3d-7d12851f0202,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3\"" Apr 16 02:36:57.441646 containerd[1578]: time="2026-04-16T02:36:57.441602267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 02:36:57.802603 kubelet[2739]: I0416 02:36:57.802514 2739 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7752b9e6-a845-431d-bcfd-61c0f25a7158" path="/var/lib/kubelet/pods/7752b9e6-a845-431d-bcfd-61c0f25a7158/volumes" Apr 16 02:36:58.768274 containerd[1578]: time="2026-04-16T02:36:58.768171528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:58.768637 containerd[1578]: time="2026-04-16T02:36:58.768552963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 16 02:36:58.769581 containerd[1578]: time="2026-04-16T02:36:58.769541872Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:58.771174 
containerd[1578]: time="2026-04-16T02:36:58.771108094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:36:58.771672 containerd[1578]: time="2026-04-16T02:36:58.771614094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.329981404s" Apr 16 02:36:58.771672 containerd[1578]: time="2026-04-16T02:36:58.771654516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 16 02:36:58.774998 containerd[1578]: time="2026-04-16T02:36:58.774970360Z" level=info msg="CreateContainer within sandbox \"9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 02:36:58.783526 containerd[1578]: time="2026-04-16T02:36:58.783469478Z" level=info msg="Container 96807de99c36b093c4d40ff9eeee5252d5d5dedfda3f02feb4efa5a3710f1391: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:36:58.789471 containerd[1578]: time="2026-04-16T02:36:58.789424350Z" level=info msg="CreateContainer within sandbox \"9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"96807de99c36b093c4d40ff9eeee5252d5d5dedfda3f02feb4efa5a3710f1391\"" Apr 16 02:36:58.789897 containerd[1578]: time="2026-04-16T02:36:58.789874953Z" level=info msg="StartContainer for \"96807de99c36b093c4d40ff9eeee5252d5d5dedfda3f02feb4efa5a3710f1391\"" Apr 16 02:36:58.790728 containerd[1578]: 
time="2026-04-16T02:36:58.790653050Z" level=info msg="connecting to shim 96807de99c36b093c4d40ff9eeee5252d5d5dedfda3f02feb4efa5a3710f1391" address="unix:///run/containerd/s/263e1444f2f4add47f8e88fd52cc82c72224b874a661e38f926c018f92c810ae" protocol=ttrpc version=3 Apr 16 02:36:58.806350 systemd[1]: Started cri-containerd-96807de99c36b093c4d40ff9eeee5252d5d5dedfda3f02feb4efa5a3710f1391.scope - libcontainer container 96807de99c36b093c4d40ff9eeee5252d5d5dedfda3f02feb4efa5a3710f1391. Apr 16 02:36:58.877649 containerd[1578]: time="2026-04-16T02:36:58.877576464Z" level=info msg="StartContainer for \"96807de99c36b093c4d40ff9eeee5252d5d5dedfda3f02feb4efa5a3710f1391\" returns successfully" Apr 16 02:36:58.879040 containerd[1578]: time="2026-04-16T02:36:58.879002463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 02:36:58.931417 systemd-networkd[1492]: cali0619fdc8b15: Gained IPv6LL Apr 16 02:37:00.683096 kubelet[2739]: I0416 02:37:00.683041 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:37:00.683542 kubelet[2739]: E0416 02:37:00.683515 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:00.898738 kubelet[2739]: E0416 02:37:00.898714 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:01.027334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486030019.mount: Deactivated successfully. 
Apr 16 02:37:01.042239 containerd[1578]: time="2026-04-16T02:37:01.042200549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:01.042693 containerd[1578]: time="2026-04-16T02:37:01.042643076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 16 02:37:01.043451 containerd[1578]: time="2026-04-16T02:37:01.043418081Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:01.045210 containerd[1578]: time="2026-04-16T02:37:01.045175835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:01.045564 containerd[1578]: time="2026-04-16T02:37:01.045524484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.166489971s" Apr 16 02:37:01.045564 containerd[1578]: time="2026-04-16T02:37:01.045560231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 16 02:37:01.049025 containerd[1578]: time="2026-04-16T02:37:01.048979327Z" level=info msg="CreateContainer within sandbox \"9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 02:37:01.054464 
containerd[1578]: time="2026-04-16T02:37:01.054434131Z" level=info msg="Container db25982796e5863d9dd1132d19a1ecad6eec3cddf223465bce8086bd5dceb7e4: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:37:01.061157 containerd[1578]: time="2026-04-16T02:37:01.061080628Z" level=info msg="CreateContainer within sandbox \"9f9deddd26eaaa5f474898919dcd35177ad8a0a38549716055c1489b9c7fcea3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"db25982796e5863d9dd1132d19a1ecad6eec3cddf223465bce8086bd5dceb7e4\"" Apr 16 02:37:01.061543 containerd[1578]: time="2026-04-16T02:37:01.061486546Z" level=info msg="StartContainer for \"db25982796e5863d9dd1132d19a1ecad6eec3cddf223465bce8086bd5dceb7e4\"" Apr 16 02:37:01.062263 containerd[1578]: time="2026-04-16T02:37:01.062233674Z" level=info msg="connecting to shim db25982796e5863d9dd1132d19a1ecad6eec3cddf223465bce8086bd5dceb7e4" address="unix:///run/containerd/s/263e1444f2f4add47f8e88fd52cc82c72224b874a661e38f926c018f92c810ae" protocol=ttrpc version=3 Apr 16 02:37:01.079290 systemd[1]: Started cri-containerd-db25982796e5863d9dd1132d19a1ecad6eec3cddf223465bce8086bd5dceb7e4.scope - libcontainer container db25982796e5863d9dd1132d19a1ecad6eec3cddf223465bce8086bd5dceb7e4. Apr 16 02:37:01.120171 containerd[1578]: time="2026-04-16T02:37:01.120082836Z" level=info msg="StartContainer for \"db25982796e5863d9dd1132d19a1ecad6eec3cddf223465bce8086bd5dceb7e4\" returns successfully" Apr 16 02:37:01.769651 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:43902.service - OpenSSH per-connection server daemon (10.0.0.1:43902). Apr 16 02:37:01.838913 sshd[4342]: Accepted publickey for core from 10.0.0.1 port 43902 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:37:01.840221 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:01.848317 systemd-logind[1564]: New session 8 of user core. 
Apr 16 02:37:01.854276 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 02:37:01.990447 sshd[4356]: Connection closed by 10.0.0.1 port 43902 Apr 16 02:37:01.991327 sshd-session[4342]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:01.994194 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:43902.service: Deactivated successfully. Apr 16 02:37:01.995525 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 02:37:01.996103 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. Apr 16 02:37:01.997046 systemd-logind[1564]: Removed session 8. Apr 16 02:37:02.125491 systemd-networkd[1492]: vxlan.calico: Link UP Apr 16 02:37:02.125497 systemd-networkd[1492]: vxlan.calico: Gained carrier Apr 16 02:37:03.155316 systemd-networkd[1492]: vxlan.calico: Gained IPv6LL Apr 16 02:37:06.797413 kubelet[2739]: E0416 02:37:06.797361 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:06.798055 containerd[1578]: time="2026-04-16T02:37:06.797894014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tln7v,Uid:911acf9e-0354-4df7-acfe-04bcad66aad5,Namespace:kube-system,Attempt:0,}" Apr 16 02:37:06.885941 systemd-networkd[1492]: cali17470c16f76: Link UP Apr 16 02:37:06.887295 systemd-networkd[1492]: cali17470c16f76: Gained carrier Apr 16 02:37:06.896161 kubelet[2739]: I0416 02:37:06.896003 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-59468fd8d9-gzdg6" podStartSLOduration=7.29105715 podStartE2EDuration="10.895959473s" podCreationTimestamp="2026-04-16 02:36:56 +0000 UTC" firstStartedPulling="2026-04-16 02:36:57.441344623 +0000 UTC m=+31.719238527" lastFinishedPulling="2026-04-16 02:37:01.046246951 +0000 UTC m=+35.324140850" observedRunningTime="2026-04-16 02:37:01.938299946 +0000 UTC m=+36.216193855" 
watchObservedRunningTime="2026-04-16 02:37:06.895959473 +0000 UTC m=+41.173853371" Apr 16 02:37:06.897841 containerd[1578]: 2026-04-16 02:37:06.834 [INFO][4472] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--tln7v-eth0 coredns-66bc5c9577- kube-system 911acf9e-0354-4df7-acfe-04bcad66aad5 828 0 2026-04-16 02:36:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-tln7v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali17470c16f76 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Namespace="kube-system" Pod="coredns-66bc5c9577-tln7v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tln7v-" Apr 16 02:37:06.897841 containerd[1578]: 2026-04-16 02:37:06.835 [INFO][4472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Namespace="kube-system" Pod="coredns-66bc5c9577-tln7v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" Apr 16 02:37:06.897841 containerd[1578]: 2026-04-16 02:37:06.855 [INFO][4487] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" HandleID="k8s-pod-network.873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Workload="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" Apr 16 02:37:06.898062 containerd[1578]: 2026-04-16 02:37:06.860 [INFO][4487] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" 
HandleID="k8s-pod-network.873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Workload="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00059c980), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-tln7v", "timestamp":"2026-04-16 02:37:06.855559349 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003d0420)} Apr 16 02:37:06.898062 containerd[1578]: 2026-04-16 02:37:06.860 [INFO][4487] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:37:06.898062 containerd[1578]: 2026-04-16 02:37:06.860 [INFO][4487] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 02:37:06.898062 containerd[1578]: 2026-04-16 02:37:06.860 [INFO][4487] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:37:06.898062 containerd[1578]: 2026-04-16 02:37:06.862 [INFO][4487] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" host="localhost" Apr 16 02:37:06.898062 containerd[1578]: 2026-04-16 02:37:06.865 [INFO][4487] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:37:06.898062 containerd[1578]: 2026-04-16 02:37:06.869 [INFO][4487] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:37:06.898062 containerd[1578]: 2026-04-16 02:37:06.870 [INFO][4487] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:06.898062 containerd[1578]: 2026-04-16 02:37:06.872 [INFO][4487] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:06.898062 
containerd[1578]: 2026-04-16 02:37:06.872 [INFO][4487] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" host="localhost" Apr 16 02:37:06.898416 containerd[1578]: 2026-04-16 02:37:06.873 [INFO][4487] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7 Apr 16 02:37:06.898416 containerd[1578]: 2026-04-16 02:37:06.876 [INFO][4487] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" host="localhost" Apr 16 02:37:06.898416 containerd[1578]: 2026-04-16 02:37:06.882 [INFO][4487] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" host="localhost" Apr 16 02:37:06.898416 containerd[1578]: 2026-04-16 02:37:06.882 [INFO][4487] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" host="localhost" Apr 16 02:37:06.898416 containerd[1578]: 2026-04-16 02:37:06.882 [INFO][4487] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 02:37:06.898416 containerd[1578]: 2026-04-16 02:37:06.882 [INFO][4487] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" HandleID="k8s-pod-network.873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Workload="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" Apr 16 02:37:06.898636 containerd[1578]: 2026-04-16 02:37:06.883 [INFO][4472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Namespace="kube-system" Pod="coredns-66bc5c9577-tln7v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tln7v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"911acf9e-0354-4df7-acfe-04bcad66aad5", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-tln7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17470c16f76", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:06.898636 containerd[1578]: 2026-04-16 02:37:06.884 [INFO][4472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Namespace="kube-system" Pod="coredns-66bc5c9577-tln7v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" Apr 16 02:37:06.898636 containerd[1578]: 2026-04-16 02:37:06.884 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17470c16f76 ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Namespace="kube-system" Pod="coredns-66bc5c9577-tln7v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" Apr 16 02:37:06.898636 containerd[1578]: 2026-04-16 02:37:06.888 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Namespace="kube-system" Pod="coredns-66bc5c9577-tln7v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" Apr 16 02:37:06.898636 containerd[1578]: 2026-04-16 02:37:06.888 [INFO][4472] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Namespace="kube-system" Pod="coredns-66bc5c9577-tln7v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tln7v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"911acf9e-0354-4df7-acfe-04bcad66aad5", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7", Pod:"coredns-66bc5c9577-tln7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17470c16f76", MAC:"1e:88:46:00:67:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:06.898636 containerd[1578]: 2026-04-16 02:37:06.895 [INFO][4472] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" Namespace="kube-system" Pod="coredns-66bc5c9577-tln7v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tln7v-eth0" Apr 16 02:37:06.917599 containerd[1578]: time="2026-04-16T02:37:06.917566066Z" level=info msg="connecting to shim 873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7" address="unix:///run/containerd/s/d45ed99115fe0b3272f801b9253c716c2e2387428346f2fc2d44274d460dcabc" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:37:06.940324 systemd[1]: Started cri-containerd-873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7.scope - libcontainer container 873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7. 
Apr 16 02:37:06.949512 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:37:06.975805 containerd[1578]: time="2026-04-16T02:37:06.975777369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tln7v,Uid:911acf9e-0354-4df7-acfe-04bcad66aad5,Namespace:kube-system,Attempt:0,} returns sandbox id \"873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7\"" Apr 16 02:37:06.976417 kubelet[2739]: E0416 02:37:06.976386 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:06.979444 containerd[1578]: time="2026-04-16T02:37:06.979416824Z" level=info msg="CreateContainer within sandbox \"873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 02:37:06.998168 containerd[1578]: time="2026-04-16T02:37:06.997285663Z" level=info msg="Container 7e803b159024ba37929bb9bb33309e520797817f6761e1ef418a00da1829fc98: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:37:07.002629 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:43750.service - OpenSSH per-connection server daemon (10.0.0.1:43750). 
Apr 16 02:37:07.007159 containerd[1578]: time="2026-04-16T02:37:07.006824908Z" level=info msg="CreateContainer within sandbox \"873a52703b76fd73e9b8e8d2b2f6614a601749c4cc2471fe82442eba18c5e3b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e803b159024ba37929bb9bb33309e520797817f6761e1ef418a00da1829fc98\"" Apr 16 02:37:07.009301 containerd[1578]: time="2026-04-16T02:37:07.009256240Z" level=info msg="StartContainer for \"7e803b159024ba37929bb9bb33309e520797817f6761e1ef418a00da1829fc98\"" Apr 16 02:37:07.010752 containerd[1578]: time="2026-04-16T02:37:07.010708122Z" level=info msg="connecting to shim 7e803b159024ba37929bb9bb33309e520797817f6761e1ef418a00da1829fc98" address="unix:///run/containerd/s/d45ed99115fe0b3272f801b9253c716c2e2387428346f2fc2d44274d460dcabc" protocol=ttrpc version=3 Apr 16 02:37:07.035302 systemd[1]: Started cri-containerd-7e803b159024ba37929bb9bb33309e520797817f6761e1ef418a00da1829fc98.scope - libcontainer container 7e803b159024ba37929bb9bb33309e520797817f6761e1ef418a00da1829fc98. Apr 16 02:37:07.046665 sshd[4562]: Accepted publickey for core from 10.0.0.1 port 43750 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:37:07.047789 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:07.052705 systemd-logind[1564]: New session 9 of user core. Apr 16 02:37:07.057279 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 02:37:07.065393 containerd[1578]: time="2026-04-16T02:37:07.065290426Z" level=info msg="StartContainer for \"7e803b159024ba37929bb9bb33309e520797817f6761e1ef418a00da1829fc98\" returns successfully" Apr 16 02:37:07.121702 sshd[4594]: Connection closed by 10.0.0.1 port 43750 Apr 16 02:37:07.122171 sshd-session[4562]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:07.125213 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:43750.service: Deactivated successfully. 
Apr 16 02:37:07.126682 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 02:37:07.127488 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. Apr 16 02:37:07.128418 systemd-logind[1564]: Removed session 9. Apr 16 02:37:07.797414 containerd[1578]: time="2026-04-16T02:37:07.797364488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54b44d94-4wngf,Uid:8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7,Namespace:calico-system,Attempt:0,}" Apr 16 02:37:07.799178 containerd[1578]: time="2026-04-16T02:37:07.799096017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54b44d94-vh4sj,Uid:39d14cd0-33da-4d8f-9341-376848f3ccb4,Namespace:calico-system,Attempt:0,}" Apr 16 02:37:07.800168 containerd[1578]: time="2026-04-16T02:37:07.800142984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m5mwv,Uid:3833f640-bfff-4575-abe3-06fdc906d199,Namespace:calico-system,Attempt:0,}" Apr 16 02:37:07.804320 containerd[1578]: time="2026-04-16T02:37:07.804179357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-h7vtx,Uid:338dbfb9-c6e4-4a9b-830f-eac73644e324,Namespace:calico-system,Attempt:0,}" Apr 16 02:37:07.907539 systemd-networkd[1492]: cali915f825c452: Link UP Apr 16 02:37:07.907821 systemd-networkd[1492]: cali915f825c452: Gained carrier Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.839 [INFO][4639] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0 calico-apiserver-5d54b44d94- calico-system 39d14cd0-33da-4d8f-9341-376848f3ccb4 829 0 2026-04-16 02:36:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d54b44d94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-5d54b44d94-vh4sj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali915f825c452 [] [] }} ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-vh4sj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.839 [INFO][4639] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-vh4sj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.869 [INFO][4686] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" HandleID="k8s-pod-network.7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Workload="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.875 [INFO][4686] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" HandleID="k8s-pod-network.7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Workload="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000276b50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5d54b44d94-vh4sj", "timestamp":"2026-04-16 02:37:07.86999411 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a9080)} Apr 16 02:37:07.916353 
containerd[1578]: 2026-04-16 02:37:07.875 [INFO][4686] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.875 [INFO][4686] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.875 [INFO][4686] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.876 [INFO][4686] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" host="localhost" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.881 [INFO][4686] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.885 [INFO][4686] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.887 [INFO][4686] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.891 [INFO][4686] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.891 [INFO][4686] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" host="localhost" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.893 [INFO][4686] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.897 [INFO][4686] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" host="localhost" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.902 [INFO][4686] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" host="localhost" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.902 [INFO][4686] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" host="localhost" Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.902 [INFO][4686] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:37:07.916353 containerd[1578]: 2026-04-16 02:37:07.902 [INFO][4686] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" HandleID="k8s-pod-network.7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Workload="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" Apr 16 02:37:07.916819 containerd[1578]: 2026-04-16 02:37:07.904 [INFO][4639] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-vh4sj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0", GenerateName:"calico-apiserver-5d54b44d94-", Namespace:"calico-system", SelfLink:"", UID:"39d14cd0-33da-4d8f-9341-376848f3ccb4", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54b44d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d54b44d94-vh4sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali915f825c452", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:07.916819 containerd[1578]: 2026-04-16 02:37:07.904 [INFO][4639] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-vh4sj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" Apr 16 02:37:07.916819 containerd[1578]: 2026-04-16 02:37:07.904 [INFO][4639] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali915f825c452 ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-vh4sj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" Apr 16 02:37:07.916819 containerd[1578]: 2026-04-16 02:37:07.907 [INFO][4639] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" 
Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-vh4sj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" Apr 16 02:37:07.916819 containerd[1578]: 2026-04-16 02:37:07.908 [INFO][4639] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-vh4sj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0", GenerateName:"calico-apiserver-5d54b44d94-", Namespace:"calico-system", SelfLink:"", UID:"39d14cd0-33da-4d8f-9341-376848f3ccb4", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54b44d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd", Pod:"calico-apiserver-5d54b44d94-vh4sj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali915f825c452", MAC:"ae:1d:13:f6:a8:9f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:07.916819 containerd[1578]: 2026-04-16 02:37:07.914 [INFO][4639] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-vh4sj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--vh4sj-eth0" Apr 16 02:37:07.924702 kubelet[2739]: E0416 02:37:07.924645 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:07.938173 kubelet[2739]: I0416 02:37:07.937427 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tln7v" podStartSLOduration=35.937410843 podStartE2EDuration="35.937410843s" podCreationTimestamp="2026-04-16 02:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:37:07.936887407 +0000 UTC m=+42.214781309" watchObservedRunningTime="2026-04-16 02:37:07.937410843 +0000 UTC m=+42.215304749" Apr 16 02:37:07.944931 containerd[1578]: time="2026-04-16T02:37:07.944863417Z" level=info msg="connecting to shim 7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd" address="unix:///run/containerd/s/7056186062b795fe9366d68398e17e657712d26500e34fcb9538b6f0cdf18d44" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:37:07.973397 systemd[1]: Started cri-containerd-7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd.scope - libcontainer container 7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd. 
Apr 16 02:37:07.985019 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:37:08.002015 systemd-networkd[1492]: cali9a07e7303c8: Link UP Apr 16 02:37:08.003085 systemd-networkd[1492]: cali9a07e7303c8: Gained carrier Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.833 [INFO][4627] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0 calico-apiserver-5d54b44d94- calico-system 8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7 821 0 2026-04-16 02:36:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d54b44d94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d54b44d94-4wngf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9a07e7303c8 [] [] }} ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-4wngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.834 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-4wngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.867 [INFO][4681] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" HandleID="k8s-pod-network.7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" 
Workload="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.879 [INFO][4681] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" HandleID="k8s-pod-network.7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Workload="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002777b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5d54b44d94-4wngf", "timestamp":"2026-04-16 02:37:07.867748987 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000246000)} Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.879 [INFO][4681] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.902 [INFO][4681] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.902 [INFO][4681] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.978 [INFO][4681] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" host="localhost" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.982 [INFO][4681] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.986 [INFO][4681] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.988 [INFO][4681] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.989 [INFO][4681] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.989 [INFO][4681] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" host="localhost" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.990 [INFO][4681] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4 Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.993 [INFO][4681] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" host="localhost" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.998 [INFO][4681] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" host="localhost" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.998 [INFO][4681] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" host="localhost" Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.998 [INFO][4681] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:37:08.013164 containerd[1578]: 2026-04-16 02:37:07.998 [INFO][4681] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" HandleID="k8s-pod-network.7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Workload="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" Apr 16 02:37:08.013594 containerd[1578]: 2026-04-16 02:37:08.000 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-4wngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0", GenerateName:"calico-apiserver-5d54b44d94-", Namespace:"calico-system", SelfLink:"", UID:"8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54b44d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d54b44d94-4wngf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a07e7303c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:08.013594 containerd[1578]: 2026-04-16 02:37:08.000 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-4wngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" Apr 16 02:37:08.013594 containerd[1578]: 2026-04-16 02:37:08.000 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a07e7303c8 ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-4wngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" Apr 16 02:37:08.013594 containerd[1578]: 2026-04-16 02:37:08.002 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-4wngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" Apr 16 02:37:08.013594 containerd[1578]: 2026-04-16 02:37:08.002 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-4wngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0", GenerateName:"calico-apiserver-5d54b44d94-", Namespace:"calico-system", SelfLink:"", UID:"8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d54b44d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4", Pod:"calico-apiserver-5d54b44d94-4wngf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9a07e7303c8", MAC:"be:fa:e8:29:c9:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:08.013594 containerd[1578]: 2026-04-16 02:37:08.010 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" Namespace="calico-system" Pod="calico-apiserver-5d54b44d94-4wngf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d54b44d94--4wngf-eth0" Apr 16 02:37:08.027226 containerd[1578]: time="2026-04-16T02:37:08.027176098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54b44d94-vh4sj,Uid:39d14cd0-33da-4d8f-9341-376848f3ccb4,Namespace:calico-system,Attempt:0,} returns sandbox id \"7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd\"" Apr 16 02:37:08.029751 containerd[1578]: time="2026-04-16T02:37:08.029571636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 02:37:08.034816 containerd[1578]: time="2026-04-16T02:37:08.034763273Z" level=info msg="connecting to shim 7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4" address="unix:///run/containerd/s/269c965057782ed177d9e8472459bad6699eb037287a7280e1f80b5066c82d2f" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:37:08.056295 systemd[1]: Started cri-containerd-7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4.scope - libcontainer container 7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4. 
Apr 16 02:37:08.065147 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:37:08.096717 containerd[1578]: time="2026-04-16T02:37:08.096485782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d54b44d94-4wngf,Uid:8f7b7bfb-5c78-492d-a062-8ba6efa8d0a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4\"" Apr 16 02:37:08.108189 systemd-networkd[1492]: calicc949e0e29e: Link UP Apr 16 02:37:08.109005 systemd-networkd[1492]: calicc949e0e29e: Gained carrier Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:07.851 [INFO][4651] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0 goldmane-cccfbd5cf- calico-system 338dbfb9-c6e4-4a9b-830f-eac73644e324 830 0 2026-04-16 02:36:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-h7vtx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicc949e0e29e [] [] }} ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h7vtx" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h7vtx-" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:07.851 [INFO][4651] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h7vtx" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:07.889 [INFO][4698] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" HandleID="k8s-pod-network.3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Workload="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:07.894 [INFO][4698] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" HandleID="k8s-pod-network.3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Workload="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005c9e60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-h7vtx", "timestamp":"2026-04-16 02:37:07.889648036 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a2580)} Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:07.894 [INFO][4698] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:07.998 [INFO][4698] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:07.998 [INFO][4698] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.078 [INFO][4698] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" host="localhost" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.085 [INFO][4698] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.089 [INFO][4698] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.092 [INFO][4698] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.094 [INFO][4698] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.094 [INFO][4698] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" host="localhost" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.096 [INFO][4698] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080 Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.099 [INFO][4698] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" host="localhost" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.104 [INFO][4698] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" host="localhost" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.105 [INFO][4698] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" host="localhost" Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.105 [INFO][4698] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:37:08.121121 containerd[1578]: 2026-04-16 02:37:08.105 [INFO][4698] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" HandleID="k8s-pod-network.3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Workload="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" Apr 16 02:37:08.121556 containerd[1578]: 2026-04-16 02:37:08.106 [INFO][4651] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h7vtx" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"338dbfb9-c6e4-4a9b-830f-eac73644e324", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-h7vtx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicc949e0e29e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:08.121556 containerd[1578]: 2026-04-16 02:37:08.106 [INFO][4651] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h7vtx" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" Apr 16 02:37:08.121556 containerd[1578]: 2026-04-16 02:37:08.106 [INFO][4651] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc949e0e29e ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h7vtx" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" Apr 16 02:37:08.121556 containerd[1578]: 2026-04-16 02:37:08.109 [INFO][4651] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h7vtx" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" Apr 16 02:37:08.121556 containerd[1578]: 2026-04-16 02:37:08.109 [INFO][4651] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h7vtx" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"338dbfb9-c6e4-4a9b-830f-eac73644e324", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080", Pod:"goldmane-cccfbd5cf-h7vtx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicc949e0e29e", MAC:"9e:7d:ff:9a:2d:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:08.121556 containerd[1578]: 2026-04-16 02:37:08.117 [INFO][4651] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" Namespace="calico-system" Pod="goldmane-cccfbd5cf-h7vtx" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--h7vtx-eth0" Apr 16 02:37:08.139526 containerd[1578]: time="2026-04-16T02:37:08.139479829Z" level=info msg="connecting to shim 
3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080" address="unix:///run/containerd/s/45bfbbcabf2796332a77a5a5fcb513155f7e426e0e098a9bdaff6e2408d9b1f8" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:37:08.157435 systemd[1]: Started cri-containerd-3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080.scope - libcontainer container 3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080. Apr 16 02:37:08.166290 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:37:08.199729 containerd[1578]: time="2026-04-16T02:37:08.199694473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-h7vtx,Uid:338dbfb9-c6e4-4a9b-830f-eac73644e324,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080\"" Apr 16 02:37:08.209372 systemd-networkd[1492]: cali31eaed5238b: Link UP Apr 16 02:37:08.210304 systemd-networkd[1492]: cali31eaed5238b: Gained carrier Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:07.865 [INFO][4657] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--m5mwv-eth0 csi-node-driver- calico-system 3833f640-bfff-4575-abe3-06fdc906d199 699 0 2026-04-16 02:36:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-m5mwv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali31eaed5238b [] [] }} ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Namespace="calico-system" Pod="csi-node-driver-m5mwv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--m5mwv-" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:07.865 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Namespace="calico-system" Pod="csi-node-driver-m5mwv" WorkloadEndpoint="localhost-k8s-csi--node--driver--m5mwv-eth0" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:07.893 [INFO][4706] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" HandleID="k8s-pod-network.1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Workload="localhost-k8s-csi--node--driver--m5mwv-eth0" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:07.898 [INFO][4706] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" HandleID="k8s-pod-network.1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Workload="localhost-k8s-csi--node--driver--m5mwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b9ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-m5mwv", "timestamp":"2026-04-16 02:37:07.893653785 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00040fce0)} Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:07.899 [INFO][4706] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.105 [INFO][4706] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.105 [INFO][4706] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.179 [INFO][4706] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" host="localhost" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.185 [INFO][4706] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.190 [INFO][4706] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.191 [INFO][4706] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.194 [INFO][4706] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.194 [INFO][4706] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" host="localhost" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.196 [INFO][4706] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.200 [INFO][4706] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" host="localhost" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.205 [INFO][4706] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" host="localhost" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.205 [INFO][4706] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" host="localhost" Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.205 [INFO][4706] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:37:08.226751 containerd[1578]: 2026-04-16 02:37:08.205 [INFO][4706] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" HandleID="k8s-pod-network.1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Workload="localhost-k8s-csi--node--driver--m5mwv-eth0" Apr 16 02:37:08.227357 containerd[1578]: 2026-04-16 02:37:08.207 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Namespace="calico-system" Pod="csi-node-driver-m5mwv" WorkloadEndpoint="localhost-k8s-csi--node--driver--m5mwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m5mwv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3833f640-bfff-4575-abe3-06fdc906d199", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-m5mwv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31eaed5238b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:08.227357 containerd[1578]: 2026-04-16 02:37:08.207 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Namespace="calico-system" Pod="csi-node-driver-m5mwv" WorkloadEndpoint="localhost-k8s-csi--node--driver--m5mwv-eth0" Apr 16 02:37:08.227357 containerd[1578]: 2026-04-16 02:37:08.207 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31eaed5238b ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Namespace="calico-system" Pod="csi-node-driver-m5mwv" WorkloadEndpoint="localhost-k8s-csi--node--driver--m5mwv-eth0" Apr 16 02:37:08.227357 containerd[1578]: 2026-04-16 02:37:08.211 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Namespace="calico-system" Pod="csi-node-driver-m5mwv" WorkloadEndpoint="localhost-k8s-csi--node--driver--m5mwv-eth0" Apr 16 02:37:08.227357 containerd[1578]: 2026-04-16 02:37:08.214 [INFO][4657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" 
Namespace="calico-system" Pod="csi-node-driver-m5mwv" WorkloadEndpoint="localhost-k8s-csi--node--driver--m5mwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m5mwv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3833f640-bfff-4575-abe3-06fdc906d199", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e", Pod:"csi-node-driver-m5mwv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31eaed5238b", MAC:"2a:f3:45:56:eb:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:08.227357 containerd[1578]: 2026-04-16 02:37:08.224 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" Namespace="calico-system" Pod="csi-node-driver-m5mwv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--m5mwv-eth0" Apr 16 02:37:08.243727 containerd[1578]: time="2026-04-16T02:37:08.243484341Z" level=info msg="connecting to shim 1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e" address="unix:///run/containerd/s/b404bddbd8ba614609a2863188d6465706804f42c4ee92d6f547f71f74dd1e82" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:37:08.264354 systemd[1]: Started cri-containerd-1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e.scope - libcontainer container 1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e. Apr 16 02:37:08.272702 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:37:08.282790 containerd[1578]: time="2026-04-16T02:37:08.282745452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m5mwv,Uid:3833f640-bfff-4575-abe3-06fdc906d199,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e\"" Apr 16 02:37:08.915445 systemd-networkd[1492]: cali17470c16f76: Gained IPv6LL Apr 16 02:37:08.929465 kubelet[2739]: E0416 02:37:08.929425 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:09.364935 systemd-networkd[1492]: cali31eaed5238b: Gained IPv6LL Apr 16 02:37:09.427388 systemd-networkd[1492]: cali915f825c452: Gained IPv6LL Apr 16 02:37:09.620816 systemd-networkd[1492]: cali9a07e7303c8: Gained IPv6LL Apr 16 02:37:09.796649 kubelet[2739]: E0416 02:37:09.796609 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:09.797393 containerd[1578]: time="2026-04-16T02:37:09.797027426Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-gq4gt,Uid:ab52d165-5aee-45ff-9e0b-6d96530041af,Namespace:kube-system,Attempt:0,}" Apr 16 02:37:09.799121 containerd[1578]: time="2026-04-16T02:37:09.798864154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6c55d9c-kstss,Uid:35a1f297-9e71-48ed-a8c9-4d9ebba86e8f,Namespace:calico-system,Attempt:0,}" Apr 16 02:37:09.907346 systemd-networkd[1492]: cali007f88d6622: Link UP Apr 16 02:37:09.907559 systemd-networkd[1492]: cali007f88d6622: Gained carrier Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.842 [INFO][5002] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0 calico-kube-controllers-5c6c55d9c- calico-system 35a1f297-9e71-48ed-a8c9-4d9ebba86e8f 832 0 2026-04-16 02:36:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c6c55d9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c6c55d9c-kstss eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali007f88d6622 [] [] }} ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Namespace="calico-system" Pod="calico-kube-controllers-5c6c55d9c-kstss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.842 [INFO][5002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Namespace="calico-system" Pod="calico-kube-controllers-5c6c55d9c-kstss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 
02:37:09.868 [INFO][5031] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" HandleID="k8s-pod-network.c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Workload="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.876 [INFO][5031] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" HandleID="k8s-pod-network.c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Workload="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000436ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c6c55d9c-kstss", "timestamp":"2026-04-16 02:37:09.868954229 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000398420)} Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.876 [INFO][5031] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.876 [INFO][5031] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.876 [INFO][5031] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.878 [INFO][5031] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" host="localhost" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.882 [INFO][5031] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.886 [INFO][5031] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.887 [INFO][5031] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.890 [INFO][5031] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.890 [INFO][5031] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" host="localhost" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.891 [INFO][5031] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3 Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.895 [INFO][5031] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" host="localhost" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.900 [INFO][5031] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" host="localhost" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.900 [INFO][5031] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" host="localhost" Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.900 [INFO][5031] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:37:09.927931 containerd[1578]: 2026-04-16 02:37:09.900 [INFO][5031] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" HandleID="k8s-pod-network.c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Workload="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" Apr 16 02:37:09.928542 containerd[1578]: 2026-04-16 02:37:09.902 [INFO][5002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Namespace="calico-system" Pod="calico-kube-controllers-5c6c55d9c-kstss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0", GenerateName:"calico-kube-controllers-5c6c55d9c-", Namespace:"calico-system", SelfLink:"", UID:"35a1f297-9e71-48ed-a8c9-4d9ebba86e8f", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c6c55d9c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c6c55d9c-kstss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali007f88d6622", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:09.928542 containerd[1578]: 2026-04-16 02:37:09.902 [INFO][5002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Namespace="calico-system" Pod="calico-kube-controllers-5c6c55d9c-kstss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" Apr 16 02:37:09.928542 containerd[1578]: 2026-04-16 02:37:09.902 [INFO][5002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali007f88d6622 ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Namespace="calico-system" Pod="calico-kube-controllers-5c6c55d9c-kstss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" Apr 16 02:37:09.928542 containerd[1578]: 2026-04-16 02:37:09.908 [INFO][5002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Namespace="calico-system" Pod="calico-kube-controllers-5c6c55d9c-kstss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" Apr 16 02:37:09.928542 containerd[1578]: 2026-04-16 
02:37:09.909 [INFO][5002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Namespace="calico-system" Pod="calico-kube-controllers-5c6c55d9c-kstss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0", GenerateName:"calico-kube-controllers-5c6c55d9c-", Namespace:"calico-system", SelfLink:"", UID:"35a1f297-9e71-48ed-a8c9-4d9ebba86e8f", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c6c55d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3", Pod:"calico-kube-controllers-5c6c55d9c-kstss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali007f88d6622", MAC:"3a:01:d9:62:4f:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:09.928542 containerd[1578]: 2026-04-16 
02:37:09.926 [INFO][5002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" Namespace="calico-system" Pod="calico-kube-controllers-5c6c55d9c-kstss" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6c55d9c--kstss-eth0" Apr 16 02:37:09.953335 containerd[1578]: time="2026-04-16T02:37:09.953282863Z" level=info msg="connecting to shim c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3" address="unix:///run/containerd/s/105f34d1a1cfa361f15010332b74093d228dca89be03934d420ad431a538b6bc" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:37:09.992420 systemd[1]: Started cri-containerd-c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3.scope - libcontainer container c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3. Apr 16 02:37:10.003335 systemd-networkd[1492]: calicc949e0e29e: Gained IPv6LL Apr 16 02:37:10.006898 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:37:10.022847 systemd-networkd[1492]: cali0864cde9a1d: Link UP Apr 16 02:37:10.023025 systemd-networkd[1492]: cali0864cde9a1d: Gained carrier Apr 16 02:37:10.163490 containerd[1578]: time="2026-04-16T02:37:10.163171195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6c55d9c-kstss,Uid:35a1f297-9e71-48ed-a8c9-4d9ebba86e8f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3\"" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.843 [INFO][5001] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--gq4gt-eth0 coredns-66bc5c9577- kube-system ab52d165-5aee-45ff-9e0b-6d96530041af 826 0 2026-04-16 02:36:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-gq4gt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0864cde9a1d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Namespace="kube-system" Pod="coredns-66bc5c9577-gq4gt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gq4gt-" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.843 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Namespace="kube-system" Pod="coredns-66bc5c9577-gq4gt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.881 [INFO][5032] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" HandleID="k8s-pod-network.3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Workload="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.887 [INFO][5032] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" HandleID="k8s-pod-network.3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Workload="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135e90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-gq4gt", "timestamp":"2026-04-16 02:37:09.881692916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017a2c0)} Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.887 [INFO][5032] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.900 [INFO][5032] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.900 [INFO][5032] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.981 [INFO][5032] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" host="localhost" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.987 [INFO][5032] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.992 [INFO][5032] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.994 [INFO][5032] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.997 [INFO][5032] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.997 [INFO][5032] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" host="localhost" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:09.999 [INFO][5032] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5 Apr 16 02:37:10.173343 containerd[1578]: 
2026-04-16 02:37:10.006 [INFO][5032] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" host="localhost" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:10.014 [INFO][5032] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" host="localhost" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:10.014 [INFO][5032] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" host="localhost" Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:10.014 [INFO][5032] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 02:37:10.173343 containerd[1578]: 2026-04-16 02:37:10.014 [INFO][5032] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" HandleID="k8s-pod-network.3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Workload="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" Apr 16 02:37:10.173792 containerd[1578]: 2026-04-16 02:37:10.018 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Namespace="kube-system" Pod="coredns-66bc5c9577-gq4gt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--gq4gt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ab52d165-5aee-45ff-9e0b-6d96530041af", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 
2, 36, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-gq4gt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0864cde9a1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:10.173792 containerd[1578]: 2026-04-16 02:37:10.018 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Namespace="kube-system" Pod="coredns-66bc5c9577-gq4gt" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" Apr 16 02:37:10.173792 containerd[1578]: 2026-04-16 02:37:10.019 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0864cde9a1d ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Namespace="kube-system" Pod="coredns-66bc5c9577-gq4gt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" Apr 16 02:37:10.173792 containerd[1578]: 2026-04-16 02:37:10.023 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Namespace="kube-system" Pod="coredns-66bc5c9577-gq4gt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" Apr 16 02:37:10.173792 containerd[1578]: 2026-04-16 02:37:10.023 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Namespace="kube-system" Pod="coredns-66bc5c9577-gq4gt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--gq4gt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ab52d165-5aee-45ff-9e0b-6d96530041af", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 2, 36, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5", Pod:"coredns-66bc5c9577-gq4gt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0864cde9a1d", MAC:"5e:59:cc:e1:a1:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 02:37:10.173792 containerd[1578]: 2026-04-16 02:37:10.167 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" Namespace="kube-system" Pod="coredns-66bc5c9577-gq4gt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--gq4gt-eth0" Apr 16 02:37:10.203546 containerd[1578]: time="2026-04-16T02:37:10.202860608Z" level=info msg="connecting to shim 3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5" address="unix:///run/containerd/s/ff7355bed6d9cedb7e8064033a55e1ae31c74589ec67bac1e4f93bab7437b979" namespace=k8s.io protocol=ttrpc version=3 Apr 16 02:37:10.225284 systemd[1]: Started 
cri-containerd-3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5.scope - libcontainer container 3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5. Apr 16 02:37:10.234532 systemd-resolved[1494]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 02:37:10.268831 containerd[1578]: time="2026-04-16T02:37:10.268769769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gq4gt,Uid:ab52d165-5aee-45ff-9e0b-6d96530041af,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5\"" Apr 16 02:37:10.269913 kubelet[2739]: E0416 02:37:10.269887 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:10.275748 containerd[1578]: time="2026-04-16T02:37:10.275708536Z" level=info msg="CreateContainer within sandbox \"3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 02:37:10.283704 containerd[1578]: time="2026-04-16T02:37:10.283616080Z" level=info msg="Container 6c12bf69878f6274a247025fe1e2870d22eff0fee62f50530e10dc54604a333c: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:37:10.288865 containerd[1578]: time="2026-04-16T02:37:10.288839825Z" level=info msg="CreateContainer within sandbox \"3cbd1f4457648517192179ff8f9c8f000921c99b5d984e396fd743939930bda5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c12bf69878f6274a247025fe1e2870d22eff0fee62f50530e10dc54604a333c\"" Apr 16 02:37:10.289329 containerd[1578]: time="2026-04-16T02:37:10.289274680Z" level=info msg="StartContainer for \"6c12bf69878f6274a247025fe1e2870d22eff0fee62f50530e10dc54604a333c\"" Apr 16 02:37:10.290316 containerd[1578]: time="2026-04-16T02:37:10.290285304Z" level=info msg="connecting to shim 
6c12bf69878f6274a247025fe1e2870d22eff0fee62f50530e10dc54604a333c" address="unix:///run/containerd/s/ff7355bed6d9cedb7e8064033a55e1ae31c74589ec67bac1e4f93bab7437b979" protocol=ttrpc version=3 Apr 16 02:37:10.311438 systemd[1]: Started cri-containerd-6c12bf69878f6274a247025fe1e2870d22eff0fee62f50530e10dc54604a333c.scope - libcontainer container 6c12bf69878f6274a247025fe1e2870d22eff0fee62f50530e10dc54604a333c. Apr 16 02:37:10.340618 containerd[1578]: time="2026-04-16T02:37:10.340589245Z" level=info msg="StartContainer for \"6c12bf69878f6274a247025fe1e2870d22eff0fee62f50530e10dc54604a333c\" returns successfully" Apr 16 02:37:10.500761 containerd[1578]: time="2026-04-16T02:37:10.500663063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 16 02:37:10.503947 containerd[1578]: time="2026-04-16T02:37:10.503908610Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.473831858s" Apr 16 02:37:10.503947 containerd[1578]: time="2026-04-16T02:37:10.503939306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 02:37:10.504783 containerd[1578]: time="2026-04-16T02:37:10.504746330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:10.504886 containerd[1578]: time="2026-04-16T02:37:10.504813334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 02:37:10.505289 containerd[1578]: time="2026-04-16T02:37:10.505265565Z" level=info msg="ImageCreate 
event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:10.505732 containerd[1578]: time="2026-04-16T02:37:10.505679435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:10.507195 containerd[1578]: time="2026-04-16T02:37:10.507172857Z" level=info msg="CreateContainer within sandbox \"7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 02:37:10.512169 containerd[1578]: time="2026-04-16T02:37:10.512116233Z" level=info msg="Container b8ed759afcd7a7cd9063f5a4e3efa31b5deb1b56a05b24e8e60cb66e11c899ee: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:37:10.517724 containerd[1578]: time="2026-04-16T02:37:10.517685453Z" level=info msg="CreateContainer within sandbox \"7fb890a8cd2f5ed9b2235b28e30dfc135dce1d201cf635f3df27755f938aadbd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b8ed759afcd7a7cd9063f5a4e3efa31b5deb1b56a05b24e8e60cb66e11c899ee\"" Apr 16 02:37:10.518109 containerd[1578]: time="2026-04-16T02:37:10.518065726Z" level=info msg="StartContainer for \"b8ed759afcd7a7cd9063f5a4e3efa31b5deb1b56a05b24e8e60cb66e11c899ee\"" Apr 16 02:37:10.518915 containerd[1578]: time="2026-04-16T02:37:10.518893956Z" level=info msg="connecting to shim b8ed759afcd7a7cd9063f5a4e3efa31b5deb1b56a05b24e8e60cb66e11c899ee" address="unix:///run/containerd/s/7056186062b795fe9366d68398e17e657712d26500e34fcb9538b6f0cdf18d44" protocol=ttrpc version=3 Apr 16 02:37:10.534427 systemd[1]: Started cri-containerd-b8ed759afcd7a7cd9063f5a4e3efa31b5deb1b56a05b24e8e60cb66e11c899ee.scope - libcontainer container b8ed759afcd7a7cd9063f5a4e3efa31b5deb1b56a05b24e8e60cb66e11c899ee. 
Apr 16 02:37:10.572923 containerd[1578]: time="2026-04-16T02:37:10.572883627Z" level=info msg="StartContainer for \"b8ed759afcd7a7cd9063f5a4e3efa31b5deb1b56a05b24e8e60cb66e11c899ee\" returns successfully" Apr 16 02:37:10.925747 containerd[1578]: time="2026-04-16T02:37:10.925707282Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:10.926460 containerd[1578]: time="2026-04-16T02:37:10.926436361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 02:37:10.927636 containerd[1578]: time="2026-04-16T02:37:10.927597597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 422.714337ms" Apr 16 02:37:10.927677 containerd[1578]: time="2026-04-16T02:37:10.927635345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 16 02:37:10.928652 containerd[1578]: time="2026-04-16T02:37:10.928514063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 02:37:10.931400 containerd[1578]: time="2026-04-16T02:37:10.931383772Z" level=info msg="CreateContainer within sandbox \"7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 02:37:10.933825 kubelet[2739]: E0416 02:37:10.933800 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:10.940806 
containerd[1578]: time="2026-04-16T02:37:10.940274484Z" level=info msg="Container 6c25e5547d5e0b9c461f3439c976fe00a60b289822728e268e2c590fa1d411c6: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:37:10.944683 kubelet[2739]: I0416 02:37:10.944631 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gq4gt" podStartSLOduration=38.944598524 podStartE2EDuration="38.944598524s" podCreationTimestamp="2026-04-16 02:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 02:37:10.943231794 +0000 UTC m=+45.221125705" watchObservedRunningTime="2026-04-16 02:37:10.944598524 +0000 UTC m=+45.222492423" Apr 16 02:37:10.950614 containerd[1578]: time="2026-04-16T02:37:10.950578932Z" level=info msg="CreateContainer within sandbox \"7d6ca83492adaa0b993e9a1b9e9495f6565b410b1e42dfe31b56c0d37e3199b4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6c25e5547d5e0b9c461f3439c976fe00a60b289822728e268e2c590fa1d411c6\"" Apr 16 02:37:10.951658 containerd[1578]: time="2026-04-16T02:37:10.951510256Z" level=info msg="StartContainer for \"6c25e5547d5e0b9c461f3439c976fe00a60b289822728e268e2c590fa1d411c6\"" Apr 16 02:37:10.954281 containerd[1578]: time="2026-04-16T02:37:10.953096540Z" level=info msg="connecting to shim 6c25e5547d5e0b9c461f3439c976fe00a60b289822728e268e2c590fa1d411c6" address="unix:///run/containerd/s/269c965057782ed177d9e8472459bad6699eb037287a7280e1f80b5066c82d2f" protocol=ttrpc version=3 Apr 16 02:37:10.982342 systemd[1]: Started cri-containerd-6c25e5547d5e0b9c461f3439c976fe00a60b289822728e268e2c590fa1d411c6.scope - libcontainer container 6c25e5547d5e0b9c461f3439c976fe00a60b289822728e268e2c590fa1d411c6. 
Apr 16 02:37:11.022767 containerd[1578]: time="2026-04-16T02:37:11.022730642Z" level=info msg="StartContainer for \"6c25e5547d5e0b9c461f3439c976fe00a60b289822728e268e2c590fa1d411c6\" returns successfully" Apr 16 02:37:11.731350 systemd-networkd[1492]: cali0864cde9a1d: Gained IPv6LL Apr 16 02:37:11.923436 systemd-networkd[1492]: cali007f88d6622: Gained IPv6LL Apr 16 02:37:11.941899 kubelet[2739]: I0416 02:37:11.941857 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:37:11.942363 kubelet[2739]: E0416 02:37:11.942328 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:11.951166 kubelet[2739]: I0416 02:37:11.950642 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5d54b44d94-vh4sj" podStartSLOduration=29.475087026 podStartE2EDuration="31.950627575s" podCreationTimestamp="2026-04-16 02:36:40 +0000 UTC" firstStartedPulling="2026-04-16 02:37:08.029064358 +0000 UTC m=+42.306958259" lastFinishedPulling="2026-04-16 02:37:10.504604909 +0000 UTC m=+44.782498808" observedRunningTime="2026-04-16 02:37:10.970881226 +0000 UTC m=+45.248775136" watchObservedRunningTime="2026-04-16 02:37:11.950627575 +0000 UTC m=+46.228521474" Apr 16 02:37:12.138815 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:43764.service - OpenSSH per-connection server daemon (10.0.0.1:43764). Apr 16 02:37:12.197898 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 43764 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:37:12.199465 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:12.203845 systemd-logind[1564]: New session 10 of user core. Apr 16 02:37:12.211404 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 16 02:37:12.294706 sshd[5308]: Connection closed by 10.0.0.1 port 43764 Apr 16 02:37:12.295055 sshd-session[5305]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:12.298301 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:43764.service: Deactivated successfully. Apr 16 02:37:12.299674 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 02:37:12.300264 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. Apr 16 02:37:12.301152 systemd-logind[1564]: Removed session 10. Apr 16 02:37:12.944303 kubelet[2739]: I0416 02:37:12.944249 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:37:12.944699 kubelet[2739]: E0416 02:37:12.944560 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 02:37:13.342299 kubelet[2739]: I0416 02:37:13.342261 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:37:13.362404 kubelet[2739]: I0416 02:37:13.362350 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5d54b44d94-4wngf" podStartSLOduration=30.531980726 podStartE2EDuration="33.362332594s" podCreationTimestamp="2026-04-16 02:36:40 +0000 UTC" firstStartedPulling="2026-04-16 02:37:08.097918038 +0000 UTC m=+42.375811937" lastFinishedPulling="2026-04-16 02:37:10.928269906 +0000 UTC m=+45.206163805" observedRunningTime="2026-04-16 02:37:11.950866686 +0000 UTC m=+46.228760596" watchObservedRunningTime="2026-04-16 02:37:13.362332594 +0000 UTC m=+47.640226504" Apr 16 02:37:13.370646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087021933.mount: Deactivated successfully. 
Apr 16 02:37:13.651451 containerd[1578]: time="2026-04-16T02:37:13.651336517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:13.651965 containerd[1578]: time="2026-04-16T02:37:13.651919081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 16 02:37:13.652660 containerd[1578]: time="2026-04-16T02:37:13.652626022Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:13.654208 containerd[1578]: time="2026-04-16T02:37:13.654181479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:13.654750 containerd[1578]: time="2026-04-16T02:37:13.654719321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.726182142s" Apr 16 02:37:13.654750 containerd[1578]: time="2026-04-16T02:37:13.654749642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 16 02:37:13.655559 containerd[1578]: time="2026-04-16T02:37:13.655545018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 02:37:13.657777 containerd[1578]: time="2026-04-16T02:37:13.657758068Z" level=info msg="CreateContainer within sandbox \"3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 02:37:13.664500 containerd[1578]: time="2026-04-16T02:37:13.664464996Z" level=info msg="Container 83e21e696309f8320db3256c33df957cebd98fd223344ef95c01360db6b4dc28: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:37:13.670067 containerd[1578]: time="2026-04-16T02:37:13.670032439Z" level=info msg="CreateContainer within sandbox \"3b01b8418e322c3e986e3191bea1d95f4058356cc802da606835244f7f147080\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"83e21e696309f8320db3256c33df957cebd98fd223344ef95c01360db6b4dc28\"" Apr 16 02:37:13.670413 containerd[1578]: time="2026-04-16T02:37:13.670392921Z" level=info msg="StartContainer for \"83e21e696309f8320db3256c33df957cebd98fd223344ef95c01360db6b4dc28\"" Apr 16 02:37:13.671175 containerd[1578]: time="2026-04-16T02:37:13.671154101Z" level=info msg="connecting to shim 83e21e696309f8320db3256c33df957cebd98fd223344ef95c01360db6b4dc28" address="unix:///run/containerd/s/45bfbbcabf2796332a77a5a5fcb513155f7e426e0e098a9bdaff6e2408d9b1f8" protocol=ttrpc version=3 Apr 16 02:37:13.685274 systemd[1]: Started cri-containerd-83e21e696309f8320db3256c33df957cebd98fd223344ef95c01360db6b4dc28.scope - libcontainer container 83e21e696309f8320db3256c33df957cebd98fd223344ef95c01360db6b4dc28. 
Apr 16 02:37:13.748806 containerd[1578]: time="2026-04-16T02:37:13.748743631Z" level=info msg="StartContainer for \"83e21e696309f8320db3256c33df957cebd98fd223344ef95c01360db6b4dc28\" returns successfully" Apr 16 02:37:13.957242 kubelet[2739]: I0416 02:37:13.957072 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-h7vtx" podStartSLOduration=28.502785033 podStartE2EDuration="33.957054594s" podCreationTimestamp="2026-04-16 02:36:40 +0000 UTC" firstStartedPulling="2026-04-16 02:37:08.201070014 +0000 UTC m=+42.478963913" lastFinishedPulling="2026-04-16 02:37:13.655339575 +0000 UTC m=+47.933233474" observedRunningTime="2026-04-16 02:37:13.956704466 +0000 UTC m=+48.234598374" watchObservedRunningTime="2026-04-16 02:37:13.957054594 +0000 UTC m=+48.234948505" Apr 16 02:37:14.950032 kubelet[2739]: I0416 02:37:14.949977 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 02:37:15.554397 containerd[1578]: time="2026-04-16T02:37:15.554345409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:15.554966 containerd[1578]: time="2026-04-16T02:37:15.554859292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 16 02:37:15.555751 containerd[1578]: time="2026-04-16T02:37:15.555706186Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:15.557299 containerd[1578]: time="2026-04-16T02:37:15.557276353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 02:37:15.557720 containerd[1578]: time="2026-04-16T02:37:15.557688244Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.901942101s" Apr 16 02:37:15.557720 containerd[1578]: time="2026-04-16T02:37:15.557715173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 02:37:15.558542 containerd[1578]: time="2026-04-16T02:37:15.558507284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 02:37:15.562249 containerd[1578]: time="2026-04-16T02:37:15.562212075Z" level=info msg="CreateContainer within sandbox \"1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 02:37:15.569121 containerd[1578]: time="2026-04-16T02:37:15.569095585Z" level=info msg="Container 0eb4c103345811fca89e71ee41633b5cf30e1bde5aa2b55e7a5c328b33544373: CDI devices from CRI Config.CDIDevices: []" Apr 16 02:37:15.575188 containerd[1578]: time="2026-04-16T02:37:15.575164304Z" level=info msg="CreateContainer within sandbox \"1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0eb4c103345811fca89e71ee41633b5cf30e1bde5aa2b55e7a5c328b33544373\"" Apr 16 02:37:15.575677 containerd[1578]: time="2026-04-16T02:37:15.575654596Z" level=info msg="StartContainer for \"0eb4c103345811fca89e71ee41633b5cf30e1bde5aa2b55e7a5c328b33544373\"" Apr 16 02:37:15.576697 containerd[1578]: time="2026-04-16T02:37:15.576675512Z" level=info msg="connecting to shim 0eb4c103345811fca89e71ee41633b5cf30e1bde5aa2b55e7a5c328b33544373" 
address="unix:///run/containerd/s/b404bddbd8ba614609a2863188d6465706804f42c4ee92d6f547f71f74dd1e82" protocol=ttrpc version=3 Apr 16 02:37:15.595311 systemd[1]: Started cri-containerd-0eb4c103345811fca89e71ee41633b5cf30e1bde5aa2b55e7a5c328b33544373.scope - libcontainer container 0eb4c103345811fca89e71ee41633b5cf30e1bde5aa2b55e7a5c328b33544373. Apr 16 02:37:15.649238 containerd[1578]: time="2026-04-16T02:37:15.649190440Z" level=info msg="StartContainer for \"0eb4c103345811fca89e71ee41633b5cf30e1bde5aa2b55e7a5c328b33544373\" returns successfully" Apr 16 02:37:17.305051 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:36344.service - OpenSSH per-connection server daemon (10.0.0.1:36344). Apr 16 02:37:17.366796 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 36344 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:37:17.368209 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:17.374287 systemd-logind[1564]: New session 11 of user core. Apr 16 02:37:17.378486 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 16 02:37:17.461470 sshd[5450]: Connection closed by 10.0.0.1 port 36344 Apr 16 02:37:17.461756 sshd-session[5447]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:17.470888 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:36344.service: Deactivated successfully. Apr 16 02:37:17.472321 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 02:37:17.472908 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. Apr 16 02:37:17.474532 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:36358.service - OpenSSH per-connection server daemon (10.0.0.1:36358). Apr 16 02:37:17.475099 systemd-logind[1564]: Removed session 11. 
Apr 16 02:37:17.520242 sshd[5465]: Accepted publickey for core from 10.0.0.1 port 36358 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:37:17.521111 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:17.524761 systemd-logind[1564]: New session 12 of user core. Apr 16 02:37:17.532286 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 02:37:17.618875 sshd[5468]: Connection closed by 10.0.0.1 port 36358 Apr 16 02:37:17.619074 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:17.631647 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:36358.service: Deactivated successfully. Apr 16 02:37:17.634188 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 02:37:17.638224 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. Apr 16 02:37:17.641852 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:36366.service - OpenSSH per-connection server daemon (10.0.0.1:36366). Apr 16 02:37:17.643420 systemd-logind[1564]: Removed session 12. Apr 16 02:37:17.679529 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 36366 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U Apr 16 02:37:17.680693 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 02:37:17.685372 systemd-logind[1564]: New session 13 of user core. Apr 16 02:37:17.692553 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 02:37:17.758705 sshd[5483]: Connection closed by 10.0.0.1 port 36366 Apr 16 02:37:17.759027 sshd-session[5480]: pam_unix(sshd:session): session closed for user core Apr 16 02:37:17.761939 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:36366.service: Deactivated successfully. Apr 16 02:37:17.763357 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 02:37:17.764102 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. 
Apr 16 02:37:17.765019 systemd-logind[1564]: Removed session 13.
Apr 16 02:37:18.694146 containerd[1578]: time="2026-04-16T02:37:18.694051648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:37:18.694606 containerd[1578]: time="2026-04-16T02:37:18.694447913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Apr 16 02:37:18.695344 containerd[1578]: time="2026-04-16T02:37:18.695308861Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:37:18.697468 containerd[1578]: time="2026-04-16T02:37:18.697408682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:37:18.697868 containerd[1578]: time="2026-04-16T02:37:18.697831825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.139297952s"
Apr 16 02:37:18.697938 containerd[1578]: time="2026-04-16T02:37:18.697861611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Apr 16 02:37:18.698839 containerd[1578]: time="2026-04-16T02:37:18.698793403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 16 02:37:18.710388 containerd[1578]: time="2026-04-16T02:37:18.710320404Z" level=info msg="CreateContainer within sandbox \"c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 16 02:37:18.715984 containerd[1578]: time="2026-04-16T02:37:18.715948429Z" level=info msg="Container c8cac2caa2c075af1dc60593ea44c81678434050095609be7513ea22af99b879: CDI devices from CRI Config.CDIDevices: []"
Apr 16 02:37:18.721778 containerd[1578]: time="2026-04-16T02:37:18.721739518Z" level=info msg="CreateContainer within sandbox \"c45b133ef108f3342cd7770133109587c39341e31fc6a912a77eeb64a34106d3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c8cac2caa2c075af1dc60593ea44c81678434050095609be7513ea22af99b879\""
Apr 16 02:37:18.722222 containerd[1578]: time="2026-04-16T02:37:18.722145130Z" level=info msg="StartContainer for \"c8cac2caa2c075af1dc60593ea44c81678434050095609be7513ea22af99b879\""
Apr 16 02:37:18.722873 containerd[1578]: time="2026-04-16T02:37:18.722852709Z" level=info msg="connecting to shim c8cac2caa2c075af1dc60593ea44c81678434050095609be7513ea22af99b879" address="unix:///run/containerd/s/105f34d1a1cfa361f15010332b74093d228dca89be03934d420ad431a538b6bc" protocol=ttrpc version=3
Apr 16 02:37:18.741288 systemd[1]: Started cri-containerd-c8cac2caa2c075af1dc60593ea44c81678434050095609be7513ea22af99b879.scope - libcontainer container c8cac2caa2c075af1dc60593ea44c81678434050095609be7513ea22af99b879.
Apr 16 02:37:18.778559 containerd[1578]: time="2026-04-16T02:37:18.778490284Z" level=info msg="StartContainer for \"c8cac2caa2c075af1dc60593ea44c81678434050095609be7513ea22af99b879\" returns successfully"
Apr 16 02:37:18.976652 kubelet[2739]: I0416 02:37:18.976406 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c6c55d9c-kstss" podStartSLOduration=30.444169244 podStartE2EDuration="38.976395921s" podCreationTimestamp="2026-04-16 02:36:40 +0000 UTC" firstStartedPulling="2026-04-16 02:37:10.166460912 +0000 UTC m=+44.444354811" lastFinishedPulling="2026-04-16 02:37:18.698687589 +0000 UTC m=+52.976581488" observedRunningTime="2026-04-16 02:37:18.976113131 +0000 UTC m=+53.254007042" watchObservedRunningTime="2026-04-16 02:37:18.976395921 +0000 UTC m=+53.254289830"
Apr 16 02:37:20.155417 containerd[1578]: time="2026-04-16T02:37:20.155351371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:37:20.156032 containerd[1578]: time="2026-04-16T02:37:20.156001624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 16 02:37:20.156736 containerd[1578]: time="2026-04-16T02:37:20.156697898Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:37:20.158942 containerd[1578]: time="2026-04-16T02:37:20.158890455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 02:37:20.159455 containerd[1578]: time="2026-04-16T02:37:20.159429788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.460614323s"
Apr 16 02:37:20.159530 containerd[1578]: time="2026-04-16T02:37:20.159456475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 16 02:37:20.163364 containerd[1578]: time="2026-04-16T02:37:20.163328489Z" level=info msg="CreateContainer within sandbox \"1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 16 02:37:20.171396 containerd[1578]: time="2026-04-16T02:37:20.171371174Z" level=info msg="Container f5bb8baa995ff052d5aee1f081b9833e097006e4a84913edc79dcbce08e46598: CDI devices from CRI Config.CDIDevices: []"
Apr 16 02:37:20.178186 containerd[1578]: time="2026-04-16T02:37:20.178116326Z" level=info msg="CreateContainer within sandbox \"1bd7fb9c4548998d44a8a09ed4c2acf6d756a1480cc332cbf19a1b6fcd55329e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f5bb8baa995ff052d5aee1f081b9833e097006e4a84913edc79dcbce08e46598\""
Apr 16 02:37:20.178605 containerd[1578]: time="2026-04-16T02:37:20.178590357Z" level=info msg="StartContainer for \"f5bb8baa995ff052d5aee1f081b9833e097006e4a84913edc79dcbce08e46598\""
Apr 16 02:37:20.179678 containerd[1578]: time="2026-04-16T02:37:20.179661013Z" level=info msg="connecting to shim f5bb8baa995ff052d5aee1f081b9833e097006e4a84913edc79dcbce08e46598" address="unix:///run/containerd/s/b404bddbd8ba614609a2863188d6465706804f42c4ee92d6f547f71f74dd1e82" protocol=ttrpc version=3
Apr 16 02:37:20.198273 systemd[1]: Started cri-containerd-f5bb8baa995ff052d5aee1f081b9833e097006e4a84913edc79dcbce08e46598.scope - libcontainer container f5bb8baa995ff052d5aee1f081b9833e097006e4a84913edc79dcbce08e46598.
Apr 16 02:37:20.254925 containerd[1578]: time="2026-04-16T02:37:20.254876706Z" level=info msg="StartContainer for \"f5bb8baa995ff052d5aee1f081b9833e097006e4a84913edc79dcbce08e46598\" returns successfully"
Apr 16 02:37:20.351671 kubelet[2739]: I0416 02:37:20.351607 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 02:37:20.848254 kubelet[2739]: I0416 02:37:20.848199 2739 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 16 02:37:20.849215 kubelet[2739]: I0416 02:37:20.849198 2739 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 16 02:37:22.772255 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:36370.service - OpenSSH per-connection server daemon (10.0.0.1:36370).
Apr 16 02:37:22.830597 sshd[5676]: Accepted publickey for core from 10.0.0.1 port 36370 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U
Apr 16 02:37:22.832199 sshd-session[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:37:22.835580 systemd-logind[1564]: New session 14 of user core.
Apr 16 02:37:22.843451 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 16 02:37:22.930638 sshd[5680]: Connection closed by 10.0.0.1 port 36370
Apr 16 02:37:22.930951 sshd-session[5676]: pam_unix(sshd:session): session closed for user core
Apr 16 02:37:22.941099 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:36370.service: Deactivated successfully.
Apr 16 02:37:22.942691 systemd[1]: session-14.scope: Deactivated successfully.
Apr 16 02:37:22.943562 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit.
Apr 16 02:37:22.945400 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:36374.service - OpenSSH per-connection server daemon (10.0.0.1:36374).
Apr 16 02:37:22.945886 systemd-logind[1564]: Removed session 14.
Apr 16 02:37:22.997567 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 36374 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U
Apr 16 02:37:22.998859 sshd-session[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:37:23.003954 systemd-logind[1564]: New session 15 of user core.
Apr 16 02:37:23.011318 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 16 02:37:23.153013 sshd[5697]: Connection closed by 10.0.0.1 port 36374
Apr 16 02:37:23.153459 sshd-session[5694]: pam_unix(sshd:session): session closed for user core
Apr 16 02:37:23.162862 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:36374.service: Deactivated successfully.
Apr 16 02:37:23.164220 systemd[1]: session-15.scope: Deactivated successfully.
Apr 16 02:37:23.164791 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit.
Apr 16 02:37:23.166538 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:36380.service - OpenSSH per-connection server daemon (10.0.0.1:36380).
Apr 16 02:37:23.167380 systemd-logind[1564]: Removed session 15.
Apr 16 02:37:23.217909 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 36380 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U
Apr 16 02:37:23.218719 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:37:23.222514 systemd-logind[1564]: New session 16 of user core.
Apr 16 02:37:23.232380 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 16 02:37:23.790102 sshd[5712]: Connection closed by 10.0.0.1 port 36380
Apr 16 02:37:23.791248 sshd-session[5709]: pam_unix(sshd:session): session closed for user core
Apr 16 02:37:23.805546 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:36390.service - OpenSSH per-connection server daemon (10.0.0.1:36390).
Apr 16 02:37:23.805883 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:36380.service: Deactivated successfully.
Apr 16 02:37:23.809344 systemd[1]: session-16.scope: Deactivated successfully.
Apr 16 02:37:23.812609 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit.
Apr 16 02:37:23.814333 systemd-logind[1564]: Removed session 16.
Apr 16 02:37:23.854415 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 36390 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U
Apr 16 02:37:23.855521 sshd-session[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:37:23.859501 systemd-logind[1564]: New session 17 of user core.
Apr 16 02:37:23.869265 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 16 02:37:24.063587 sshd[5740]: Connection closed by 10.0.0.1 port 36390
Apr 16 02:37:24.064914 sshd-session[5734]: pam_unix(sshd:session): session closed for user core
Apr 16 02:37:24.070763 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:36390.service: Deactivated successfully.
Apr 16 02:37:24.072022 systemd[1]: session-17.scope: Deactivated successfully.
Apr 16 02:37:24.073284 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit.
Apr 16 02:37:24.076557 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:36400.service - OpenSSH per-connection server daemon (10.0.0.1:36400).
Apr 16 02:37:24.077704 systemd-logind[1564]: Removed session 17.
Apr 16 02:37:24.121264 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 36400 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U
Apr 16 02:37:24.122286 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:37:24.125738 systemd-logind[1564]: New session 18 of user core.
Apr 16 02:37:24.133292 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 02:37:24.198926 sshd[5754]: Connection closed by 10.0.0.1 port 36400
Apr 16 02:37:24.199233 sshd-session[5751]: pam_unix(sshd:session): session closed for user core
Apr 16 02:37:24.202705 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:36400.service: Deactivated successfully.
Apr 16 02:37:24.205491 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 02:37:24.206299 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit.
Apr 16 02:37:24.207246 systemd-logind[1564]: Removed session 18.
Apr 16 02:37:27.964178 kubelet[2739]: I0416 02:37:27.964086 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m5mwv" podStartSLOduration=36.08784144 podStartE2EDuration="47.964064326s" podCreationTimestamp="2026-04-16 02:36:40 +0000 UTC" firstStartedPulling="2026-04-16 02:37:08.283997446 +0000 UTC m=+42.561891345" lastFinishedPulling="2026-04-16 02:37:20.160220332 +0000 UTC m=+54.438114231" observedRunningTime="2026-04-16 02:37:20.988371237 +0000 UTC m=+55.266265161" watchObservedRunningTime="2026-04-16 02:37:27.964064326 +0000 UTC m=+62.241958234"
Apr 16 02:37:29.214668 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:38730.service - OpenSSH per-connection server daemon (10.0.0.1:38730).
Apr 16 02:37:29.253768 sshd[5801]: Accepted publickey for core from 10.0.0.1 port 38730 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U
Apr 16 02:37:29.254575 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:37:29.257923 systemd-logind[1564]: New session 19 of user core.
Apr 16 02:37:29.265257 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 02:37:29.329201 sshd[5804]: Connection closed by 10.0.0.1 port 38730
Apr 16 02:37:29.329465 sshd-session[5801]: pam_unix(sshd:session): session closed for user core
Apr 16 02:37:29.332022 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:38730.service: Deactivated successfully.
Apr 16 02:37:29.333298 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 02:37:29.333942 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit.
Apr 16 02:37:29.334644 systemd-logind[1564]: Removed session 19.
Apr 16 02:37:30.542703 kubelet[2739]: I0416 02:37:30.542636 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 02:37:34.341289 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:38746.service - OpenSSH per-connection server daemon (10.0.0.1:38746).
Apr 16 02:37:34.410293 sshd[5833]: Accepted publickey for core from 10.0.0.1 port 38746 ssh2: RSA SHA256:8h4EkZhQ7tIDzYs1kbcibhAFDUjZA8P1b6vE131TW6U
Apr 16 02:37:34.411268 sshd-session[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 02:37:34.414891 systemd-logind[1564]: New session 20 of user core.
Apr 16 02:37:34.424292 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 02:37:34.571923 sshd[5836]: Connection closed by 10.0.0.1 port 38746
Apr 16 02:37:34.572242 sshd-session[5833]: pam_unix(sshd:session): session closed for user core
Apr 16 02:37:34.574991 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:38746.service: Deactivated successfully.
Apr 16 02:37:34.576392 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 02:37:34.576963 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit.
Apr 16 02:37:34.577880 systemd-logind[1564]: Removed session 20.
Apr 16 02:37:34.794817 kubelet[2739]: E0416 02:37:34.794797 2739 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"