Apr 24 01:04:13.591328 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 23 22:08:58 -00 2026
Apr 24 01:04:13.591348 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 01:04:13.591356 kernel: BIOS-provided physical RAM map:
Apr 24 01:04:13.591363 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 24 01:04:13.591369 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 24 01:04:13.591374 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 24 01:04:13.591381 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 24 01:04:13.591387 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 24 01:04:13.591392 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 24 01:04:13.591398 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 24 01:04:13.591404 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Apr 24 01:04:13.591409 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 24 01:04:13.591485 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 24 01:04:13.591492 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 24 01:04:13.591499 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 24 01:04:13.591505 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 24 01:04:13.591511 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 24 01:04:13.591518 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 24 01:04:13.591524 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 24 01:04:13.591529 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 24 01:04:13.591535 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 24 01:04:13.591541 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 24 01:04:13.591547 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 24 01:04:13.591553 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 01:04:13.591559 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 24 01:04:13.591564 kernel: NX (Execute Disable) protection: active
Apr 24 01:04:13.591570 kernel: APIC: Static calls initialized
Apr 24 01:04:13.591576 kernel: e820: update [mem 0x9b31e018-0x9b327c57] usable ==> usable
Apr 24 01:04:13.591583 kernel: e820: update [mem 0x9b2e1018-0x9b31de57] usable ==> usable
Apr 24 01:04:13.591589 kernel: extended physical RAM map:
Apr 24 01:04:13.591595 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 24 01:04:13.591600 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Apr 24 01:04:13.591604 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Apr 24 01:04:13.591609 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Apr 24 01:04:13.591614 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Apr 24 01:04:13.591618 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Apr 24 01:04:13.591623 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Apr 24 01:04:13.591628 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e1017] usable
Apr 24 01:04:13.591632 kernel: reserve setup_data: [mem 0x000000009b2e1018-0x000000009b31de57] usable
Apr 24 01:04:13.591638 kernel: reserve setup_data: [mem 0x000000009b31de58-0x000000009b31e017] usable
Apr 24 01:04:13.591645 kernel: reserve setup_data: [mem 0x000000009b31e018-0x000000009b327c57] usable
Apr 24 01:04:13.591650 kernel: reserve setup_data: [mem 0x000000009b327c58-0x000000009bd3efff] usable
Apr 24 01:04:13.591655 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Apr 24 01:04:13.591660 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Apr 24 01:04:13.591666 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Apr 24 01:04:13.591671 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Apr 24 01:04:13.591676 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Apr 24 01:04:13.591681 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Apr 24 01:04:13.591686 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Apr 24 01:04:13.591691 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Apr 24 01:04:13.591696 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Apr 24 01:04:13.591701 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Apr 24 01:04:13.591706 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Apr 24 01:04:13.591711 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 24 01:04:13.591716 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 01:04:13.591722 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 24 01:04:13.591727 kernel: efi: EFI v2.7 by EDK II
Apr 24 01:04:13.591732 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Apr 24 01:04:13.591737 kernel: random: crng init done
Apr 24 01:04:13.591742 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 24 01:04:13.591747 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 24 01:04:13.591752 kernel: secureboot: Secure boot disabled
Apr 24 01:04:13.591757 kernel: SMBIOS 2.8 present.
Apr 24 01:04:13.591762 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 24 01:04:13.591767 kernel: DMI: Memory slots populated: 1/1
Apr 24 01:04:13.591771 kernel: Hypervisor detected: KVM
Apr 24 01:04:13.591777 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 24 01:04:13.591783 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 01:04:13.591788 kernel: kvm-clock: using sched offset of 6950733239 cycles
Apr 24 01:04:13.591794 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 01:04:13.591799 kernel: tsc: Detected 2793.438 MHz processor
Apr 24 01:04:13.591804 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 01:04:13.591809 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 01:04:13.591814 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x10000000000
Apr 24 01:04:13.591820 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 24 01:04:13.591825 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 01:04:13.591831 kernel: Using GB pages for direct mapping
Apr 24 01:04:13.591836 kernel: ACPI: Early table checksum verification disabled
Apr 24 01:04:13.591842 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Apr 24 01:04:13.591847 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 24 01:04:13.591852 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 01:04:13.591857 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 01:04:13.591862 kernel: ACPI: FACS 0x000000009CBDD000 000040
Apr 24 01:04:13.591867 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 01:04:13.591872 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 01:04:13.591879 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 01:04:13.591884 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 01:04:13.591889 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 24 01:04:13.591894 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Apr 24 01:04:13.591899 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Apr 24 01:04:13.591904 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Apr 24 01:04:13.591909 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Apr 24 01:04:13.591914 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Apr 24 01:04:13.591919 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Apr 24 01:04:13.591926 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Apr 24 01:04:13.591930 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Apr 24 01:04:13.591936 kernel: No NUMA configuration found
Apr 24 01:04:13.591941 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Apr 24 01:04:13.591946 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Apr 24 01:04:13.591951 kernel: Zone ranges:
Apr 24 01:04:13.591956 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 01:04:13.591961 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Apr 24 01:04:13.591966 kernel: Normal empty
Apr 24 01:04:13.591973 kernel: Device empty
Apr 24 01:04:13.591978 kernel: Movable zone start for each node
Apr 24 01:04:13.591983 kernel: Early memory node ranges
Apr 24 01:04:13.591988 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 24 01:04:13.591993 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Apr 24 01:04:13.591998 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Apr 24 01:04:13.592003 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Apr 24 01:04:13.592008 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Apr 24 01:04:13.592013 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Apr 24 01:04:13.592018 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Apr 24 01:04:13.592024 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Apr 24 01:04:13.592029 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Apr 24 01:04:13.592035 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 01:04:13.592040 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 24 01:04:13.592045 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Apr 24 01:04:13.592055 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 01:04:13.592061 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Apr 24 01:04:13.592067 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 24 01:04:13.592072 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 24 01:04:13.592078 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 24 01:04:13.592084 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Apr 24 01:04:13.592090 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 01:04:13.592096 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 01:04:13.592102 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 01:04:13.592107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 01:04:13.592113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 01:04:13.592120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 01:04:13.592126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 01:04:13.592131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 01:04:13.592137 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 01:04:13.592142 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 01:04:13.592148 kernel: TSC deadline timer available
Apr 24 01:04:13.592308 kernel: CPU topo: Max. logical packages: 1
Apr 24 01:04:13.592314 kernel: CPU topo: Max. logical dies: 1
Apr 24 01:04:13.592320 kernel: CPU topo: Max. dies per package: 1
Apr 24 01:04:13.592325 kernel: CPU topo: Max. threads per core: 1
Apr 24 01:04:13.592333 kernel: CPU topo: Num. cores per package: 4
Apr 24 01:04:13.592339 kernel: CPU topo: Num. threads per package: 4
Apr 24 01:04:13.592345 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 24 01:04:13.592350 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 01:04:13.592356 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 01:04:13.592362 kernel: kvm-guest: setup PV sched yield
Apr 24 01:04:13.592367 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 24 01:04:13.592373 kernel: Booting paravirtualized kernel on KVM
Apr 24 01:04:13.592379 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 01:04:13.592386 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 24 01:04:13.592392 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 24 01:04:13.592398 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 24 01:04:13.592404 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 24 01:04:13.592409 kernel: kvm-guest: PV spinlocks enabled
Apr 24 01:04:13.592415 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 01:04:13.592492 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 01:04:13.592498 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 01:04:13.592506 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 01:04:13.592511 kernel: Fallback order for Node 0: 0
Apr 24 01:04:13.592517 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Apr 24 01:04:13.592523 kernel: Policy zone: DMA32
Apr 24 01:04:13.592528 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 01:04:13.592534 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 24 01:04:13.592539 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 24 01:04:13.592545 kernel: ftrace: allocated 157 pages with 5 groups
Apr 24 01:04:13.592551 kernel: Dynamic Preempt: voluntary
Apr 24 01:04:13.592558 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 01:04:13.592564 kernel: rcu: RCU event tracing is enabled.
Apr 24 01:04:13.592570 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 24 01:04:13.592576 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 01:04:13.592581 kernel: Rude variant of Tasks RCU enabled.
Apr 24 01:04:13.592587 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 01:04:13.592593 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 01:04:13.592598 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 24 01:04:13.592604 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 01:04:13.592611 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 01:04:13.592617 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 24 01:04:13.592622 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 24 01:04:13.592628 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 01:04:13.592634 kernel: Console: colour dummy device 80x25
Apr 24 01:04:13.592639 kernel: printk: legacy console [ttyS0] enabled
Apr 24 01:04:13.592645 kernel: ACPI: Core revision 20240827
Apr 24 01:04:13.592651 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 01:04:13.592656 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 01:04:13.592663 kernel: x2apic enabled
Apr 24 01:04:13.592669 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 01:04:13.592675 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 01:04:13.592680 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 01:04:13.592686 kernel: kvm-guest: setup PV IPIs
Apr 24 01:04:13.592691 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 01:04:13.592697 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 01:04:13.592703 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 24 01:04:13.592709 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 01:04:13.592716 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 24 01:04:13.592721 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 24 01:04:13.592727 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 01:04:13.592733 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 01:04:13.592738 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 01:04:13.592744 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 24 01:04:13.592750 kernel: RETBleed: Vulnerable
Apr 24 01:04:13.592756 kernel: Speculative Store Bypass: Vulnerable
Apr 24 01:04:13.592762 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 01:04:13.592769 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 24 01:04:13.592774 kernel: active return thunk: its_return_thunk
Apr 24 01:04:13.592780 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 24 01:04:13.592786 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 01:04:13.592791 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 01:04:13.592797 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 01:04:13.592803 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 01:04:13.592808 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 01:04:13.592814 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 01:04:13.592821 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 01:04:13.592826 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 01:04:13.592832 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 01:04:13.592838 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 01:04:13.592843 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 24 01:04:13.592849 kernel: Freeing SMP alternatives memory: 32K
Apr 24 01:04:13.592855 kernel: pid_max: default: 32768 minimum: 301
Apr 24 01:04:13.592860 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 24 01:04:13.592866 kernel: landlock: Up and running.
Apr 24 01:04:13.592873 kernel: SELinux: Initializing.
Apr 24 01:04:13.592879 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 01:04:13.592884 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 01:04:13.592890 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 24 01:04:13.592896 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 24 01:04:13.592902 kernel: signal: max sigframe size: 3632
Apr 24 01:04:13.592907 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 01:04:13.592913 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 01:04:13.592919 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 24 01:04:13.592925 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 24 01:04:13.592931 kernel: smp: Bringing up secondary CPUs ...
Apr 24 01:04:13.592936 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 01:04:13.592942 kernel: .... node #0, CPUs: #1 #2 #3
Apr 24 01:04:13.592947 kernel: smp: Brought up 1 node, 4 CPUs
Apr 24 01:04:13.592953 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 24 01:04:13.592959 kernel: Memory: 2374700K/2565800K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 185212K reserved, 0K cma-reserved)
Apr 24 01:04:13.592965 kernel: devtmpfs: initialized
Apr 24 01:04:13.592970 kernel: x86/mm: Memory block size: 128MB
Apr 24 01:04:13.592977 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Apr 24 01:04:13.592983 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Apr 24 01:04:13.592988 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Apr 24 01:04:13.592994 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Apr 24 01:04:13.593000 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Apr 24 01:04:13.593005 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Apr 24 01:04:13.593011 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 01:04:13.593017 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 24 01:04:13.593023 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 01:04:13.593029 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 01:04:13.593035 kernel: audit: initializing netlink subsys (disabled)
Apr 24 01:04:13.593040 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 01:04:13.593046 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 01:04:13.593052 kernel: audit: type=2000 audit(1776992646.654:1): state=initialized audit_enabled=0 res=1
Apr 24 01:04:13.593057 kernel: cpuidle: using governor menu
Apr 24 01:04:13.593063 kernel: efi: Freeing EFI boot services memory: 38812K
Apr 24 01:04:13.593068 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 01:04:13.593075 kernel: dca service started, version 1.12.1
Apr 24 01:04:13.593081 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 24 01:04:13.593087 kernel: PCI: Using configuration type 1 for base access
Apr 24 01:04:13.593092 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 01:04:13.593098 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 01:04:13.593104 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 01:04:13.593109 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 01:04:13.593115 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 01:04:13.593120 kernel: ACPI: Added _OSI(Module Device)
Apr 24 01:04:13.593127 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 01:04:13.593133 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 01:04:13.593138 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 01:04:13.593144 kernel: ACPI: Interpreter enabled
Apr 24 01:04:13.593150 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 01:04:13.593275 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 01:04:13.593281 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 01:04:13.593286 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 01:04:13.593292 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 01:04:13.593299 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 01:04:13.593408 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 01:04:13.593541 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 01:04:13.593594 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 01:04:13.593601 kernel: PCI host bridge to bus 0000:00
Apr 24 01:04:13.593655 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 01:04:13.593702 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 01:04:13.593751 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 01:04:13.593796 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 24 01:04:13.593842 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 24 01:04:13.593887 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 24 01:04:13.593933 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 01:04:13.593997 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 24 01:04:13.594058 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 24 01:04:13.594111 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 24 01:04:13.594295 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 24 01:04:13.594350 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 24 01:04:13.594402 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 01:04:13.594527 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 10742 usecs
Apr 24 01:04:13.594586 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 24 01:04:13.594642 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 24 01:04:13.594696 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 24 01:04:13.594748 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 24 01:04:13.594805 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 24 01:04:13.594858 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 24 01:04:13.594910 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 24 01:04:13.594962 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 24 01:04:13.595023 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 24 01:04:13.595075 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 24 01:04:13.595129 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 24 01:04:13.595310 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 24 01:04:13.595365 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 24 01:04:13.595502 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 24 01:04:13.595562 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 01:04:13.595613 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 11718 usecs
Apr 24 01:04:13.595670 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 24 01:04:13.595722 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 24 01:04:13.595774 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 24 01:04:13.595830 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 24 01:04:13.595881 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 24 01:04:13.595891 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 01:04:13.595896 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 01:04:13.595902 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 01:04:13.595908 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 01:04:13.595914 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 01:04:13.595919 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 01:04:13.595925 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 01:04:13.595931 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 01:04:13.595937 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 01:04:13.595944 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 01:04:13.595950 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 01:04:13.595955 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 01:04:13.595961 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 01:04:13.595967 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 01:04:13.595973 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 01:04:13.595978 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 01:04:13.595984 kernel: iommu: Default domain type: Translated
Apr 24 01:04:13.595990 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 01:04:13.595997 kernel: efivars: Registered efivars operations
Apr 24 01:04:13.596002 kernel: PCI: Using ACPI for IRQ routing
Apr 24 01:04:13.596008 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 01:04:13.596014 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Apr 24 01:04:13.596020 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Apr 24 01:04:13.596025 kernel: e820: reserve RAM buffer [mem 0x9b2e1018-0x9bffffff]
Apr 24 01:04:13.596031 kernel: e820: reserve RAM buffer [mem 0x9b31e018-0x9bffffff]
Apr 24 01:04:13.596036 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Apr 24 01:04:13.596042 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Apr 24 01:04:13.596049 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Apr 24 01:04:13.596054 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Apr 24 01:04:13.596104 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 01:04:13.596285 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 01:04:13.596340 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 01:04:13.596348 kernel: vgaarb: loaded
Apr 24 01:04:13.596354 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 01:04:13.596359 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 01:04:13.596367 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 01:04:13.596373 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 01:04:13.596379 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 01:04:13.596384 kernel: pnp: PnP ACPI init
Apr 24 01:04:13.596515 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 24 01:04:13.596537 kernel: pnp: PnP ACPI: found 6 devices
Apr 24 01:04:13.596544 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 01:04:13.596550 kernel: NET: Registered PF_INET protocol family
Apr 24 01:04:13.596556 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 01:04:13.596563 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 01:04:13.596569 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 01:04:13.596575 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 01:04:13.596581 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 01:04:13.596587 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 01:04:13.596592 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 01:04:13.596598 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 01:04:13.596604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 01:04:13.596611 kernel: NET: Registered PF_XDP protocol family
Apr 24 01:04:13.596664 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 24 01:04:13.596719 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 24 01:04:13.596769 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 01:04:13.596817 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 01:04:13.596864 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 01:04:13.596913 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 24 01:04:13.596959 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 24 01:04:13.597007 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 24 01:04:13.597015 kernel: PCI: CLS 0 bytes, default 64
Apr 24 01:04:13.597021 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 24 01:04:13.597028 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 24 01:04:13.597034 kernel: Initialise system trusted keyrings
Apr 24 01:04:13.597041 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 01:04:13.597048 kernel: Key type asymmetric registered
Apr 24 01:04:13.597053 kernel: Asymmetric key parser 'x509' registered
Apr 24 01:04:13.597059 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 24 01:04:13.597065 kernel: io scheduler mq-deadline registered
Apr 24 01:04:13.597071 kernel: io scheduler kyber registered
Apr 24 01:04:13.597077 kernel: io scheduler bfq registered
Apr 24 01:04:13.597083 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 01:04:13.597089 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 01:04:13.597096 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 01:04:13.597102 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 24 01:04:13.597108 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 01:04:13.597114 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 01:04:13.597121 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 01:04:13.597126 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 01:04:13.597132 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 01:04:13.597371 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 24 01:04:13.597380 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 01:04:13.597501 kernel: rtc_cmos 00:04: registered as rtc0
Apr 24 01:04:13.597551 kernel: rtc_cmos 00:04: setting system clock to 2026-04-24T01:04:12 UTC (1776992652)
Apr 24 01:04:13.597598 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 24 01:04:13.597605 kernel: intel_pstate: CPU model not supported
Apr 24 01:04:13.597611 kernel: efifb: probing for efifb
Apr 24 01:04:13.597617 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 24 01:04:13.597625 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 24 01:04:13.597631 kernel: efifb: scrolling: redraw
Apr 24 01:04:13.597638 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 24 01:04:13.597644 kernel: Console: switching to colour frame buffer device 160x50
Apr 24 01:04:13.597650 kernel: fb0: EFI VGA frame buffer device
Apr 24 01:04:13.597656 kernel: pstore: Using crash dump compression: deflate
Apr 24 01:04:13.597662 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 24 01:04:13.597668 kernel: NET: Registered PF_INET6 protocol family
Apr 24 01:04:13.597674 kernel: Segment Routing with IPv6
Apr 24 01:04:13.597680 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 01:04:13.597685 kernel: NET: Registered PF_PACKET protocol family
Apr 24 01:04:13.597691 kernel: Key type dns_resolver registered
Apr 24 01:04:13.597698 kernel: IPI shorthand broadcast: enabled
Apr 24 01:04:13.597704 kernel: sched_clock: Marking stable (5201060672, 1758708711)->(7558893938, -599124555)
Apr 24 01:04:13.597710 kernel: registered taskstats version 1
Apr 24 01:04:13.597716
kernel: Loading compiled-in X.509 certificates Apr 24 01:04:13.597722 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 09f9b319c99eb3f54e68ef799fdb2bce5b238ec0' Apr 24 01:04:13.597728 kernel: Demotion targets for Node 0: null Apr 24 01:04:13.597733 kernel: Key type .fscrypt registered Apr 24 01:04:13.597739 kernel: Key type fscrypt-provisioning registered Apr 24 01:04:13.597745 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 24 01:04:13.597752 kernel: ima: Allocated hash algorithm: sha1 Apr 24 01:04:13.597758 kernel: ima: No architecture policies found Apr 24 01:04:13.597764 kernel: clk: Disabling unused clocks Apr 24 01:04:13.597770 kernel: Warning: unable to open an initial console. Apr 24 01:04:13.597776 kernel: Freeing unused kernel image (initmem) memory: 46224K Apr 24 01:04:13.597782 kernel: Write protecting the kernel read-only data: 40960k Apr 24 01:04:13.597787 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K Apr 24 01:04:13.597793 kernel: Run /init as init process Apr 24 01:04:13.597799 kernel: with arguments: Apr 24 01:04:13.597806 kernel: /init Apr 24 01:04:13.597812 kernel: with environment: Apr 24 01:04:13.597818 kernel: HOME=/ Apr 24 01:04:13.597823 kernel: TERM=linux Apr 24 01:04:13.597830 systemd[1]: Successfully made /usr/ read-only. Apr 24 01:04:13.597839 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 24 01:04:13.597845 systemd[1]: Detected virtualization kvm. Apr 24 01:04:13.597852 systemd[1]: Detected architecture x86-64. Apr 24 01:04:13.597858 systemd[1]: Running in initrd. Apr 24 01:04:13.597865 systemd[1]: No hostname configured, using default hostname. 
Apr 24 01:04:13.597871 systemd[1]: Hostname set to . Apr 24 01:04:13.597877 systemd[1]: Initializing machine ID from VM UUID. Apr 24 01:04:13.597883 systemd[1]: Queued start job for default target initrd.target. Apr 24 01:04:13.597889 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 01:04:13.597895 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 01:04:13.597902 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 24 01:04:13.597910 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 01:04:13.597916 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 24 01:04:13.597923 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 24 01:04:13.597930 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 24 01:04:13.597936 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 24 01:04:13.597942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 01:04:13.597950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 01:04:13.597956 systemd[1]: Reached target paths.target - Path Units. Apr 24 01:04:13.597962 systemd[1]: Reached target slices.target - Slice Units. Apr 24 01:04:13.597968 systemd[1]: Reached target swap.target - Swaps. Apr 24 01:04:13.597975 systemd[1]: Reached target timers.target - Timer Units. Apr 24 01:04:13.597981 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 01:04:13.597987 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Apr 24 01:04:13.597994 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 24 01:04:13.598000 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 24 01:04:13.598008 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 01:04:13.598014 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 01:04:13.598020 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 01:04:13.598026 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 01:04:13.598033 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 24 01:04:13.598039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 01:04:13.598045 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 24 01:04:13.598051 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 24 01:04:13.598059 systemd[1]: Starting systemd-fsck-usr.service... Apr 24 01:04:13.598065 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 01:04:13.598071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 01:04:13.598090 systemd-journald[203]: Collecting audit messages is disabled. Apr 24 01:04:13.598107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 01:04:13.598115 systemd-journald[203]: Journal started Apr 24 01:04:13.598131 systemd-journald[203]: Runtime Journal (/run/log/journal/b32972db32874aa99e81df45910694b0) is 6M, max 48.1M, 42.1M free. Apr 24 01:04:13.616895 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 01:04:13.624739 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Apr 24 01:04:13.630384 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 01:04:13.634131 systemd-modules-load[205]: Inserted module 'overlay' Apr 24 01:04:13.642112 systemd[1]: Finished systemd-fsck-usr.service. Apr 24 01:04:13.660353 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 01:04:13.678336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 01:04:13.737379 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 24 01:04:13.742953 systemd-modules-load[205]: Inserted module 'br_netfilter' Apr 24 01:04:13.749505 kernel: Bridge firewalling registered Apr 24 01:04:13.751771 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 01:04:13.760375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 01:04:13.776742 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 24 01:04:13.797704 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 01:04:13.808523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 01:04:13.837550 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 01:04:13.844296 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 01:04:13.875786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 01:04:13.886630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 01:04:13.897331 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 24 01:04:13.917821 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 01:04:13.929353 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 24 01:04:13.961588 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 01:04:13.979722 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e Apr 24 01:04:14.039102 systemd-resolved[239]: Positive Trust Anchors: Apr 24 01:04:14.039378 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 01:04:14.039402 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 01:04:14.041679 systemd-resolved[239]: Defaulting to hostname 'linux'. Apr 24 01:04:14.042617 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 01:04:14.050401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 01:04:14.250498 kernel: SCSI subsystem initialized Apr 24 01:04:14.264417 kernel: Loading iSCSI transport class v2.0-870. 
Apr 24 01:04:14.285543 kernel: iscsi: registered transport (tcp) Apr 24 01:04:14.317575 kernel: iscsi: registered transport (qla4xxx) Apr 24 01:04:14.317622 kernel: QLogic iSCSI HBA Driver Apr 24 01:04:14.358687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 24 01:04:14.398713 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 01:04:14.420093 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 01:04:14.495518 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 24 01:04:14.510652 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 24 01:04:14.614401 kernel: raid6: avx512x4 gen() 28885 MB/s Apr 24 01:04:14.635382 kernel: raid6: avx512x2 gen() 28746 MB/s Apr 24 01:04:14.656394 kernel: raid6: avx512x1 gen() 32211 MB/s Apr 24 01:04:14.677396 kernel: raid6: avx2x4 gen() 27477 MB/s Apr 24 01:04:14.698338 kernel: raid6: avx2x2 gen() 27303 MB/s Apr 24 01:04:14.724393 kernel: raid6: avx2x1 gen() 15588 MB/s Apr 24 01:04:14.724512 kernel: raid6: using algorithm avx512x1 gen() 32211 MB/s Apr 24 01:04:14.751606 kernel: raid6: .... xor() 17218 MB/s, rmw enabled Apr 24 01:04:14.751680 kernel: raid6: using avx512x2 recovery algorithm Apr 24 01:04:14.785513 kernel: xor: automatically using best checksumming function avx Apr 24 01:04:15.110614 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 24 01:04:15.125633 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 24 01:04:15.130874 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 01:04:15.184419 systemd-udevd[453]: Using default interface naming scheme 'v255'. Apr 24 01:04:15.188750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 01:04:15.198702 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 24 01:04:15.254051 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation Apr 24 01:04:15.336723 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 01:04:15.348697 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 01:04:15.424633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 01:04:15.442920 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 24 01:04:15.519728 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 24 01:04:15.564888 kernel: libata version 3.00 loaded. Apr 24 01:04:15.570725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 01:04:15.576497 kernel: cryptd: max_cpu_qlen set to 1000 Apr 24 01:04:15.570846 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 01:04:15.599140 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 01:04:15.620990 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 24 01:04:15.616588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 01:04:15.626675 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 24 01:04:15.640911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 01:04:15.640980 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 01:04:15.663819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 01:04:15.716980 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 24 01:04:15.717024 kernel: GPT:9289727 != 19775487 Apr 24 01:04:15.717094 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 24 01:04:15.722913 kernel: GPT:9289727 != 19775487 Apr 24 01:04:15.726406 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 24 01:04:15.735798 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 01:04:15.754394 kernel: ahci 0000:00:1f.2: version 3.0 Apr 24 01:04:15.754632 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 24 01:04:15.762562 kernel: AES CTR mode by8 optimization enabled Apr 24 01:04:15.779795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 01:04:15.851032 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 24 01:04:15.851323 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 24 01:04:15.851512 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 24 01:04:15.851587 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 24 01:04:15.851595 kernel: scsi host0: ahci Apr 24 01:04:15.838502 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 24 01:04:15.874329 kernel: scsi host1: ahci Apr 24 01:04:15.881378 kernel: scsi host2: ahci Apr 24 01:04:15.884355 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 24 01:04:15.892682 kernel: scsi host3: ahci Apr 24 01:04:15.892807 kernel: scsi host4: ahci Apr 24 01:04:15.906869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Apr 24 01:04:15.960985 kernel: scsi host5: ahci Apr 24 01:04:15.961340 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Apr 24 01:04:15.961374 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Apr 24 01:04:15.961388 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Apr 24 01:04:15.961400 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Apr 24 01:04:15.961413 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Apr 24 01:04:15.961511 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Apr 24 01:04:15.929297 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 24 01:04:15.983973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 24 01:04:16.003099 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 24 01:04:16.047879 disk-uuid[645]: Primary Header is updated. Apr 24 01:04:16.047879 disk-uuid[645]: Secondary Entries is updated. Apr 24 01:04:16.047879 disk-uuid[645]: Secondary Header is updated. 
Apr 24 01:04:16.073490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 01:04:16.286356 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 24 01:04:16.293377 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 24 01:04:16.301319 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 24 01:04:16.309829 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 24 01:04:16.316579 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 24 01:04:16.325538 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 24 01:04:16.336568 kernel: ata3.00: LPM support broken, forcing max_power Apr 24 01:04:16.336596 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 24 01:04:16.336616 kernel: ata3.00: applying bridge limits Apr 24 01:04:16.341785 kernel: ata3.00: LPM support broken, forcing max_power Apr 24 01:04:16.351123 kernel: ata3.00: configured for UDMA/100 Apr 24 01:04:16.363383 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 24 01:04:16.423816 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 24 01:04:16.424049 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 24 01:04:16.442307 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 24 01:04:16.794984 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 24 01:04:16.804008 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 01:04:16.821306 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 01:04:16.830011 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 01:04:16.838900 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 24 01:04:16.892725 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 24 01:04:17.096564 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 24 01:04:17.097981 disk-uuid[646]: The operation has completed successfully. 
Apr 24 01:04:17.134092 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 24 01:04:17.134737 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 24 01:04:17.175328 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 24 01:04:17.213376 sh[674]: Success Apr 24 01:04:17.256815 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 24 01:04:17.256853 kernel: device-mapper: uevent: version 1.0.3 Apr 24 01:04:17.257390 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 24 01:04:17.293427 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 24 01:04:17.346683 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 24 01:04:17.365058 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 24 01:04:17.402421 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 24 01:04:17.462577 kernel: BTRFS: device fsid b0afcb9a-4dc6-42cc-b61f-b370046a03ca devid 1 transid 32 /dev/mapper/usr (253:0) scanned by mount (686) Apr 24 01:04:17.462603 kernel: BTRFS info (device dm-0): first mount of filesystem b0afcb9a-4dc6-42cc-b61f-b370046a03ca Apr 24 01:04:17.462610 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 24 01:04:17.495354 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 24 01:04:17.495415 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 24 01:04:17.497727 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 24 01:04:17.501021 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 24 01:04:17.516525 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Apr 24 01:04:17.517520 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 24 01:04:17.527892 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 24 01:04:17.616431 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (716) Apr 24 01:04:17.627279 kernel: BTRFS info (device vda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 01:04:17.627306 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 01:04:17.652870 kernel: BTRFS info (device vda6): turning on async discard Apr 24 01:04:17.652898 kernel: BTRFS info (device vda6): enabling free space tree Apr 24 01:04:17.670410 kernel: BTRFS info (device vda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 01:04:17.675808 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 24 01:04:17.693364 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 24 01:04:17.840064 ignition[774]: Ignition 2.22.0 Apr 24 01:04:17.840326 ignition[774]: Stage: fetch-offline Apr 24 01:04:17.840348 ignition[774]: no configs at "/usr/lib/ignition/base.d" Apr 24 01:04:17.840354 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 01:04:17.840426 ignition[774]: parsed url from cmdline: "" Apr 24 01:04:17.840428 ignition[774]: no config URL provided Apr 24 01:04:17.840591 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Apr 24 01:04:17.840600 ignition[774]: no config at "/usr/lib/ignition/user.ign" Apr 24 01:04:17.840627 ignition[774]: op(1): [started] loading QEMU firmware config module Apr 24 01:04:17.840637 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 24 01:04:17.906984 ignition[774]: op(1): [finished] loading QEMU firmware config module Apr 24 01:04:17.922708 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 24 01:04:17.943004 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 01:04:18.016534 systemd-networkd[863]: lo: Link UP Apr 24 01:04:18.016601 systemd-networkd[863]: lo: Gained carrier Apr 24 01:04:18.027833 systemd-networkd[863]: Enumeration completed Apr 24 01:04:18.031432 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 01:04:18.040777 systemd[1]: Reached target network.target - Network. Apr 24 01:04:18.064333 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 01:04:18.064396 systemd-networkd[863]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 01:04:18.090717 systemd-networkd[863]: eth0: Link UP Apr 24 01:04:18.090820 systemd-networkd[863]: eth0: Gained carrier Apr 24 01:04:18.090830 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 01:04:18.134340 systemd-networkd[863]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 24 01:04:18.737555 ignition[774]: parsing config with SHA512: 4f4ac0fdbbd7b8846db72b857fc119c68ec80ff6b29f2a079a8c71f39a961d5c1171c19d1980384a8139b113a32283f43cdf8646232e38427d94e0157d960352 Apr 24 01:04:18.747548 unknown[774]: fetched base config from "system" Apr 24 01:04:18.747557 unknown[774]: fetched user config from "qemu" Apr 24 01:04:18.748972 ignition[774]: fetch-offline: fetch-offline passed Apr 24 01:04:18.749028 ignition[774]: Ignition finished successfully Apr 24 01:04:18.774511 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 24 01:04:18.780650 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Apr 24 01:04:18.781662 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 24 01:04:18.885707 ignition[868]: Ignition 2.22.0 Apr 24 01:04:18.885785 ignition[868]: Stage: kargs Apr 24 01:04:18.886545 ignition[868]: no configs at "/usr/lib/ignition/base.d" Apr 24 01:04:18.886556 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 01:04:18.889601 ignition[868]: kargs: kargs passed Apr 24 01:04:18.889641 ignition[868]: Ignition finished successfully Apr 24 01:04:18.922841 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 24 01:04:18.929843 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 24 01:04:18.994528 ignition[876]: Ignition 2.22.0 Apr 24 01:04:18.994596 ignition[876]: Stage: disks Apr 24 01:04:18.994698 ignition[876]: no configs at "/usr/lib/ignition/base.d" Apr 24 01:04:18.994704 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 01:04:18.995760 ignition[876]: disks: disks passed Apr 24 01:04:18.995792 ignition[876]: Ignition finished successfully Apr 24 01:04:19.030814 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 24 01:04:19.046356 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 24 01:04:19.052534 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 24 01:04:19.074524 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 01:04:19.080353 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 01:04:19.096553 systemd[1]: Reached target basic.target - Basic System. Apr 24 01:04:19.111526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 24 01:04:19.177509 systemd-networkd[863]: eth0: Gained IPv6LL Apr 24 01:04:19.187565 systemd-fsck[885]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 24 01:04:19.198948 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 24 01:04:19.222136 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 24 01:04:19.514668 kernel: EXT4-fs (vda9): mounted filesystem 8c3ace63-1728-4b5e-a7b6-4ef650e41ba1 r/w with ordered data mode. Quota mode: none. Apr 24 01:04:19.515022 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 24 01:04:19.528692 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 24 01:04:19.539520 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 24 01:04:19.567294 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 24 01:04:19.587292 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (893) Apr 24 01:04:19.573856 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 24 01:04:19.623836 kernel: BTRFS info (device vda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 01:04:19.623857 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 01:04:19.573888 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 24 01:04:19.573907 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 24 01:04:19.624067 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 24 01:04:19.677797 kernel: BTRFS info (device vda6): turning on async discard Apr 24 01:04:19.677816 kernel: BTRFS info (device vda6): enabling free space tree Apr 24 01:04:19.636632 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 24 01:04:19.679064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 24 01:04:19.743925 initrd-setup-root[917]: cut: /sysroot/etc/passwd: No such file or directory Apr 24 01:04:19.764974 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Apr 24 01:04:19.783840 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Apr 24 01:04:19.793895 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Apr 24 01:04:20.034533 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 24 01:04:20.045318 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 24 01:04:20.075618 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 24 01:04:20.091063 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 24 01:04:20.107606 kernel: BTRFS info (device vda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995 Apr 24 01:04:20.142924 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 24 01:04:20.234057 ignition[1007]: INFO : Ignition 2.22.0 Apr 24 01:04:20.234057 ignition[1007]: INFO : Stage: mount Apr 24 01:04:20.244938 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 24 01:04:20.244938 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 24 01:04:20.244938 ignition[1007]: INFO : mount: mount passed Apr 24 01:04:20.244938 ignition[1007]: INFO : Ignition finished successfully Apr 24 01:04:20.275888 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 24 01:04:20.282969 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 24 01:04:20.517856 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 24 01:04:20.564795 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019)
Apr 24 01:04:20.581562 kernel: BTRFS info (device vda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 01:04:20.581595 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 01:04:20.603916 kernel: BTRFS info (device vda6): turning on async discard
Apr 24 01:04:20.603973 kernel: BTRFS info (device vda6): enabling free space tree
Apr 24 01:04:20.606765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 01:04:20.737357 ignition[1036]: INFO : Ignition 2.22.0
Apr 24 01:04:20.737357 ignition[1036]: INFO : Stage: files
Apr 24 01:04:20.737357 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 01:04:20.737357 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 01:04:20.764769 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 01:04:20.774239 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 01:04:20.774239 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 01:04:20.801118 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 01:04:20.811634 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 01:04:20.822812 unknown[1036]: wrote ssh authorized keys file for user: core
Apr 24 01:04:20.829955 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 01:04:20.843226 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 01:04:20.843226 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 01:04:20.950020 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 01:04:21.042288 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 01:04:21.055767 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 24 01:04:21.458368 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 01:04:21.782979 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 24 01:04:21.782979 ignition[1036]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 01:04:21.807440 ignition[1036]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 01:04:21.807440 ignition[1036]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 01:04:21.807440 ignition[1036]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 01:04:21.807440 ignition[1036]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 01:04:21.807440 ignition[1036]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 01:04:21.807440 ignition[1036]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 24 01:04:21.807440 ignition[1036]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 01:04:21.807440 ignition[1036]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 24 01:04:21.944655 ignition[1036]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 01:04:21.963399 ignition[1036]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 24 01:04:21.974438 ignition[1036]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 24 01:04:21.974438 ignition[1036]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 01:04:21.974438 ignition[1036]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 01:04:21.974438 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 01:04:21.974438 ignition[1036]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 01:04:21.974438 ignition[1036]: INFO : files: files passed
Apr 24 01:04:21.974438 ignition[1036]: INFO : Ignition finished successfully
Apr 24 01:04:22.043055 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 01:04:22.059582 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 01:04:22.068433 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 01:04:22.111149 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 01:04:22.118577 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 01:04:22.132812 initrd-setup-root-after-ignition[1064]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 24 01:04:22.127874 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 01:04:22.158365 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 01:04:22.158365 initrd-setup-root-after-ignition[1067]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 01:04:22.152819 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 01:04:22.206002 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 01:04:22.164544 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 01:04:22.289431 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 01:04:22.289903 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 01:04:22.313307 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 01:04:22.327853 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 01:04:22.333411 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 01:04:22.352907 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 01:04:22.407683 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 01:04:22.425886 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 01:04:22.471853 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 01:04:22.478712 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 01:04:22.493595 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 01:04:22.509576 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 01:04:22.509673 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 01:04:22.527047 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 01:04:22.536387 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 01:04:22.547766 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 01:04:22.565599 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 01:04:22.578723 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 01:04:22.593601 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 24 01:04:22.608715 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 01:04:22.623412 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 01:04:22.637719 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 01:04:22.652938 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 01:04:22.668524 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 01:04:22.682097 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 01:04:22.682330 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 01:04:22.699052 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 01:04:22.707001 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 01:04:22.720784 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 01:04:22.723126 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 01:04:22.735544 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 01:04:22.735637 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 01:04:22.754636 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 01:04:22.754809 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 01:04:22.767020 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 01:04:22.781792 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 01:04:22.784356 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 01:04:22.797026 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 01:04:22.811818 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 01:04:22.825129 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 01:04:22.825383 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 01:04:22.838578 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 01:04:22.838632 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 01:04:22.852659 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 01:04:22.852814 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 01:04:22.865537 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 01:04:22.865680 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 01:04:22.880102 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 01:04:22.977960 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 01:04:22.984855 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 01:04:23.008919 ignition[1092]: INFO : Ignition 2.22.0
Apr 24 01:04:23.008919 ignition[1092]: INFO : Stage: umount
Apr 24 01:04:23.008919 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 01:04:23.008919 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 24 01:04:22.984961 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 01:04:23.054124 ignition[1092]: INFO : umount: umount passed
Apr 24 01:04:23.054124 ignition[1092]: INFO : Ignition finished successfully
Apr 24 01:04:23.008949 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 01:04:23.009147 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 01:04:23.034913 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 01:04:23.036112 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 01:04:23.036426 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 01:04:23.048742 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 01:04:23.049052 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 01:04:23.066142 systemd[1]: Stopped target network.target - Network.
Apr 24 01:04:23.076560 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 01:04:23.076611 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 01:04:23.088609 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 01:04:23.088644 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 01:04:23.102042 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 01:04:23.102081 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 01:04:23.116531 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 01:04:23.116563 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 01:04:23.129618 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 01:04:23.143340 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 01:04:23.177089 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 01:04:23.177361 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 01:04:23.195926 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 24 01:04:23.196640 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 01:04:23.196697 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 01:04:23.304713 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 24 01:04:23.305004 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 01:04:23.305617 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 01:04:23.336107 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 24 01:04:23.336723 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 01:04:23.336790 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 01:04:23.347567 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 24 01:04:23.360036 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 01:04:23.360071 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 01:04:23.374983 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 01:04:23.375031 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 01:04:23.390963 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 01:04:23.405126 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 01:04:23.405345 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 01:04:23.418450 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 01:04:23.418584 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 01:04:23.439453 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 01:04:23.439575 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 01:04:23.448291 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 01:04:23.463708 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 24 01:04:23.538981 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 01:04:23.539579 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 01:04:23.552694 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 01:04:23.552787 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 01:04:23.566088 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 01:04:23.566147 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 01:04:23.578002 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 01:04:23.578030 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 01:04:23.592948 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 01:04:23.592988 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 01:04:23.613751 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 01:04:23.613785 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 01:04:23.626741 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 01:04:23.626781 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 01:04:23.644657 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 01:04:23.654358 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 24 01:04:23.654411 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 01:04:23.733031 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 01:04:23.733315 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 01:04:23.758574 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 01:04:23.758681 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 01:04:23.785801 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 01:04:23.785921 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 01:04:23.799841 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 01:04:23.799881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 01:04:23.833634 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 01:04:23.833833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 01:04:23.846912 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 01:04:23.861393 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 01:04:23.905788 systemd[1]: Switching root.
Apr 24 01:04:23.946558 systemd-journald[203]: Journal stopped
Apr 24 01:04:25.843001 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Apr 24 01:04:25.843051 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 01:04:25.843062 kernel: SELinux: policy capability open_perms=1
Apr 24 01:04:25.843070 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 01:04:25.843078 kernel: SELinux: policy capability always_check_network=0
Apr 24 01:04:25.843085 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 01:04:25.843094 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 01:04:25.843101 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 01:04:25.843109 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 01:04:25.843119 kernel: SELinux: policy capability userspace_initial_context=0
Apr 24 01:04:25.843126 kernel: audit: type=1403 audit(1776992664.139:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 01:04:25.843138 systemd[1]: Successfully loaded SELinux policy in 96.867ms.
Apr 24 01:04:25.843292 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.785ms.
Apr 24 01:04:25.843303 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 01:04:25.843314 systemd[1]: Detected virtualization kvm.
Apr 24 01:04:25.843324 systemd[1]: Detected architecture x86-64.
Apr 24 01:04:25.843331 systemd[1]: Detected first boot.
Apr 24 01:04:25.843339 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 01:04:25.843347 zram_generator::config[1139]: No configuration found.
Apr 24 01:04:25.843356 kernel: Guest personality initialized and is inactive
Apr 24 01:04:25.843363 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 24 01:04:25.843370 kernel: Initialized host personality
Apr 24 01:04:25.843377 kernel: NET: Registered PF_VSOCK protocol family
Apr 24 01:04:25.843385 systemd[1]: Populated /etc with preset unit settings.
Apr 24 01:04:25.843396 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 24 01:04:25.843404 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 01:04:25.843412 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 01:04:25.843420 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 01:04:25.843430 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 01:04:25.843438 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 01:04:25.843447 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 01:04:25.843455 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 01:04:25.843464 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 01:04:25.843537 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 01:04:25.843547 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 01:04:25.843554 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 01:04:25.843562 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 01:04:25.843570 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 01:04:25.843577 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 01:04:25.843585 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 01:04:25.843593 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 01:04:25.843603 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 01:04:25.843611 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 01:04:25.843619 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 01:04:25.843626 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 01:04:25.843634 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 01:04:25.843642 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 01:04:25.843650 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 01:04:25.843658 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 01:04:25.843667 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 01:04:25.843675 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 01:04:25.843684 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 01:04:25.843691 systemd[1]: Reached target swap.target - Swaps.
Apr 24 01:04:25.843699 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 01:04:25.843707 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 01:04:25.843714 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 24 01:04:25.843722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 01:04:25.843730 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 01:04:25.843739 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 01:04:25.843746 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 01:04:25.843754 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 01:04:25.843761 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 01:04:25.843769 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 01:04:25.843776 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 01:04:25.843784 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 01:04:25.843791 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 01:04:25.843799 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 01:04:25.843808 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 01:04:25.843816 systemd[1]: Reached target machines.target - Containers.
Apr 24 01:04:25.843823 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 01:04:25.843831 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 01:04:25.843839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 01:04:25.843847 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 01:04:25.843855 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 01:04:25.843863 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 01:04:25.843872 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 01:04:25.843879 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 01:04:25.843887 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 01:04:25.843895 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 01:04:25.843903 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 01:04:25.843910 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 01:04:25.843918 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 01:04:25.843925 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 01:04:25.843934 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 24 01:04:25.843942 kernel: ACPI: bus type drm_connector registered
Apr 24 01:04:25.843949 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 01:04:25.843956 kernel: fuse: init (API version 7.41)
Apr 24 01:04:25.843963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 01:04:25.843970 kernel: loop: module loaded
Apr 24 01:04:25.843978 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 01:04:25.843986 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 01:04:25.844007 systemd-journald[1224]: Collecting audit messages is disabled.
Apr 24 01:04:25.844026 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 24 01:04:25.844036 systemd-journald[1224]: Journal started
Apr 24 01:04:25.844053 systemd-journald[1224]: Runtime Journal (/run/log/journal/b32972db32874aa99e81df45910694b0) is 6M, max 48.1M, 42.1M free.
Apr 24 01:04:24.761887 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 01:04:24.776004 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 24 01:04:24.776724 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 01:04:24.777744 systemd[1]: systemd-journald.service: Consumed 3.650s CPU time.
Apr 24 01:04:25.872101 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 01:04:25.896001 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 01:04:25.896041 systemd[1]: Stopped verity-setup.service.
Apr 24 01:04:25.896052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 01:04:25.919264 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 01:04:25.925741 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 01:04:25.933627 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 01:04:25.941832 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 01:04:25.949008 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 01:04:25.956890 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 01:04:25.965415 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 01:04:25.972813 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 01:04:25.982339 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 01:04:25.991583 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 01:04:25.991770 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 01:04:26.000930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 01:04:26.001356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 01:04:26.009962 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 01:04:26.010765 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 01:04:26.019806 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 01:04:26.020587 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 01:04:26.029779 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 01:04:26.030103 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 01:04:26.038443 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 01:04:26.038774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 01:04:26.046869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 01:04:26.055272 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 01:04:26.064943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 01:04:26.074582 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 24 01:04:26.084138 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 01:04:26.102624 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 01:04:26.111960 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 01:04:26.127688 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 01:04:26.135832 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 01:04:26.135856 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 01:04:26.144733 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 24 01:04:26.155747 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 01:04:26.163450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 01:04:26.165301 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 01:04:26.174553 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 01:04:26.183279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 01:04:26.184056 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 01:04:26.191802 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 01:04:26.197596 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 01:04:26.211604 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 01:04:26.223671 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 01:04:26.229861 systemd-journald[1224]: Time spent on flushing to /var/log/journal/b32972db32874aa99e81df45910694b0 is 45.805ms for 1073 entries.
Apr 24 01:04:26.229861 systemd-journald[1224]: System Journal (/var/log/journal/b32972db32874aa99e81df45910694b0) is 8M, max 195.6M, 187.6M free.
Apr 24 01:04:26.337289 systemd-journald[1224]: Received client request to flush runtime journal.
Apr 24 01:04:26.337321 kernel: loop0: detected capacity change from 0 to 128560
Apr 24 01:04:26.337331 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 01:04:26.246311 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 01:04:26.254884 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 01:04:26.264401 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 01:04:26.275659 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 01:04:26.288400 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 24 01:04:26.320824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 01:04:26.340277 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 01:04:26.344442 kernel: loop1: detected capacity change from 0 to 110984
Apr 24 01:04:26.367295 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 01:04:26.368407 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 24 01:04:26.375834 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Apr 24 01:04:26.377109 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Apr 24 01:04:26.382769 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 01:04:26.394359 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 01:04:26.430365 kernel: loop2: detected capacity change from 0 to 219192
Apr 24 01:04:26.475719 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 01:04:26.485010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 01:04:26.504553 kernel: loop3: detected capacity change from 0 to 128560
Apr 24 01:04:26.527315 kernel: loop4: detected capacity change from 0 to 110984
Apr 24 01:04:26.538338 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Apr 24 01:04:26.538351 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Apr 24 01:04:26.540856 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 01:04:26.558275 kernel: loop5: detected capacity change from 0 to 219192
Apr 24 01:04:26.586942 (sd-merge)[1283]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 24 01:04:26.587400 (sd-merge)[1283]: Merged extensions into '/usr'.
Apr 24 01:04:26.593273 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 01:04:26.593336 systemd[1]: Reloading...
Apr 24 01:04:26.652389 zram_generator::config[1307]: No configuration found.
Apr 24 01:04:26.778853 ldconfig[1254]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 01:04:26.843946 systemd[1]: Reloading finished in 250 ms.
Apr 24 01:04:26.862770 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 01:04:26.871858 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 01:04:26.881144 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 01:04:26.920108 systemd[1]: Starting ensure-sysext.service...
Apr 24 01:04:26.927563 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 01:04:26.947689 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 01:04:26.963002 systemd[1]: Reload requested from client PID 1349 ('systemctl') (unit ensure-sysext.service)...
Apr 24 01:04:26.963098 systemd[1]: Reloading...
Apr 24 01:04:26.970851 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 24 01:04:26.970953 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 24 01:04:26.971115 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 01:04:26.971668 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 01:04:26.972309 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 01:04:26.972607 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Apr 24 01:04:26.972641 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Apr 24 01:04:26.976386 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 01:04:26.976399 systemd-tmpfiles[1350]: Skipping /boot
Apr 24 01:04:26.982102 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 01:04:26.982394 systemd-tmpfiles[1350]: Skipping /boot
Apr 24 01:04:26.995822 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Apr 24 01:04:27.021298 zram_generator::config[1373]: No configuration found.
Apr 24 01:04:27.216387 kernel: mousedev: PS/2 mouse device common for all mice
Apr 24 01:04:27.216838 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 24 01:04:27.265299 kernel: ACPI: button: Power Button [PWRF]
Apr 24 01:04:27.265358 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 24 01:04:27.281799 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 24 01:04:27.287803 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 24 01:04:27.281450 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 24 01:04:27.281782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 24 01:04:27.291905 systemd[1]: Reloading finished in 328 ms.
Apr 24 01:04:27.303978 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 01:04:27.323370 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 01:04:27.382395 systemd[1]: Finished ensure-sysext.service.
Apr 24 01:04:27.410022 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 01:04:27.413692 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 24 01:04:27.440639 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 01:04:27.449776 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 01:04:27.630739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 01:04:27.639892 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 01:04:27.650350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 01:04:27.660614 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 01:04:27.668610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 01:04:27.671604 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 24 01:04:27.680541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 24 01:04:27.682336 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 01:04:27.693859 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 01:04:27.713772 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 01:04:27.730725 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 24 01:04:27.742610 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 01:04:27.750703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 01:04:27.758374 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 01:04:27.759034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 01:04:27.759428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 01:04:27.766935 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 01:04:27.767275 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 01:04:27.776325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 01:04:27.776580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 01:04:27.786031 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 01:04:27.786687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 01:04:27.796910 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 24 01:04:27.809556 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 01:04:27.836092 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 01:04:27.836593 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 01:04:27.838770 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 24 01:04:27.867723 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 01:04:27.868472 augenrules[1511]: No rules
Apr 24 01:04:27.874816 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 01:04:27.875067 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 24 01:04:27.883047 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 01:04:27.890557 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 01:04:27.891945 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 01:04:27.906754 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 01:04:27.986956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 01:04:28.185336 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 24 01:04:28.267387 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 24 01:04:28.268882 systemd-networkd[1478]: lo: Link UP
Apr 24 01:04:28.268886 systemd-networkd[1478]: lo: Gained carrier
Apr 24 01:04:28.269897 systemd-networkd[1478]: Enumeration completed
Apr 24 01:04:28.271423 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 01:04:28.271425 systemd-networkd[1478]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 01:04:28.273091 systemd-networkd[1478]: eth0: Link UP
Apr 24 01:04:28.273291 systemd-networkd[1478]: eth0: Gained carrier
Apr 24 01:04:28.273302 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 01:04:28.276399 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 01:04:28.284455 systemd[1]: Reached target time-set.target - System Time Set.
Apr 24 01:04:28.292985 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 24 01:04:28.302441 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 24 01:04:28.316415 systemd-resolved[1484]: Positive Trust Anchors:
Apr 24 01:04:28.316554 systemd-resolved[1484]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 01:04:28.316579 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 01:04:28.320748 systemd-resolved[1484]: Defaulting to hostname 'linux'.
Apr 24 01:04:28.322024 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 01:04:28.330673 systemd[1]: Reached target network.target - Network.
Apr 24 01:04:28.331301 systemd-networkd[1478]: eth0: DHCPv4 address 10.0.0.5/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 24 01:04:28.332089 systemd-timesyncd[1485]: Network configuration changed, trying to establish connection.
Apr 24 01:04:29.558501 systemd-resolved[1484]: Clock change detected. Flushing caches.
Apr 24 01:04:29.558562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 01:04:29.558606 systemd-timesyncd[1485]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 24 01:04:29.558630 systemd-timesyncd[1485]: Initial clock synchronization to Fri 2026-04-24 01:04:29.558476 UTC.
Apr 24 01:04:29.567708 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 01:04:29.576087 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 24 01:04:29.585675 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 24 01:04:29.595378 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 24 01:04:29.603984 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 24 01:04:29.611684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 24 01:04:29.621027 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 24 01:04:29.630448 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 24 01:04:29.630540 systemd[1]: Reached target paths.target - Path Units.
Apr 24 01:04:29.637285 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 01:04:29.646356 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 24 01:04:29.655670 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 24 01:04:29.664790 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 24 01:04:29.674376 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 24 01:04:29.683803 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 24 01:04:29.694523 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 24 01:04:29.702485 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 24 01:04:29.712348 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 24 01:04:29.721750 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 24 01:04:29.732484 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 01:04:29.739652 systemd[1]: Reached target basic.target - Basic System.
Apr 24 01:04:29.746677 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 24 01:04:29.746766 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 24 01:04:29.748369 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 24 01:04:29.766478 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 24 01:04:29.774565 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 01:04:29.784567 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 24 01:04:29.795004 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 24 01:04:29.798689 jq[1542]: false
Apr 24 01:04:29.802591 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 01:04:29.803962 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 24 01:04:29.808098 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 01:04:29.822029 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 01:04:29.830805 extend-filesystems[1543]: Found /dev/vda6
Apr 24 01:04:29.835269 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 24 01:04:29.847268 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing passwd entry cache
Apr 24 01:04:29.839268 oslogin_cache_refresh[1544]: Refreshing passwd entry cache
Apr 24 01:04:29.847607 extend-filesystems[1543]: Found /dev/vda9
Apr 24 01:04:29.848387 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 24 01:04:29.853227 oslogin_cache_refresh[1544]: Failure getting users, quitting
Apr 24 01:04:29.861439 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting users, quitting
Apr 24 01:04:29.861439 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 24 01:04:29.861439 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Refreshing group entry cache
Apr 24 01:04:29.861492 extend-filesystems[1543]: Checking size of /dev/vda9
Apr 24 01:04:29.853241 oslogin_cache_refresh[1544]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 24 01:04:29.865041 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 24 01:04:29.876556 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Failure getting groups, quitting
Apr 24 01:04:29.876556 google_oslogin_nss_cache[1544]: oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 24 01:04:29.853282 oslogin_cache_refresh[1544]: Refreshing group entry cache
Apr 24 01:04:29.868019 oslogin_cache_refresh[1544]: Failure getting groups, quitting
Apr 24 01:04:29.868028 oslogin_cache_refresh[1544]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 24 01:04:29.877328 extend-filesystems[1543]: Resized partition /dev/vda9
Apr 24 01:04:29.884768 extend-filesystems[1567]: resize2fs 1.47.3 (8-Jul-2025)
Apr 24 01:04:29.913349 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 24 01:04:29.877795 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 24 01:04:29.878455 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 24 01:04:29.880266 systemd[1]: Starting update-engine.service - Update Engine...
Apr 24 01:04:29.905985 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 24 01:04:29.922467 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 01:04:29.932624 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 24 01:04:29.933105 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 24 01:04:29.933429 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 24 01:04:29.933717 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 24 01:04:29.942706 jq[1569]: true
Apr 24 01:04:29.942649 systemd[1]: motdgen.service: Deactivated successfully.
Apr 24 01:04:29.943352 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 24 01:04:29.956485 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 24 01:04:29.956710 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 24 01:04:29.984687 update_engine[1568]: I20260424 01:04:29.984512 1568 main.cc:92] Flatcar Update Engine starting
Apr 24 01:04:29.990709 (ntainerd)[1574]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 24 01:04:29.992250 jq[1573]: true
Apr 24 01:04:29.999042 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 24 01:04:30.006742 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 24 01:04:30.023279 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 24 01:04:30.023279 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 24 01:04:30.023279 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 24 01:04:30.053304 extend-filesystems[1543]: Resized filesystem in /dev/vda9
Apr 24 01:04:30.067965 tar[1572]: linux-amd64/LICENSE
Apr 24 01:04:30.067965 tar[1572]: linux-amd64/helm
Apr 24 01:04:30.023523 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 24 01:04:30.065337 dbus-daemon[1540]: [system] SELinux support is enabled
Apr 24 01:04:30.023765 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 24 01:04:30.040274 systemd-logind[1560]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 24 01:04:30.040285 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 24 01:04:30.042442 systemd-logind[1560]: New seat seat0.
Apr 24 01:04:30.043519 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 01:04:30.065613 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 24 01:04:30.080344 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 01:04:30.080447 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 01:04:30.083771 update_engine[1568]: I20260424 01:04:30.083653 1568 update_check_scheduler.cc:74] Next update check in 4m59s
Apr 24 01:04:30.089574 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 01:04:30.089590 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 01:04:30.099959 dbus-daemon[1540]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 24 01:04:30.100111 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 01:04:30.110139 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 01:04:30.111142 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 01:04:30.132328 bash[1604]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 01:04:30.134949 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 01:04:30.145019 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 24 01:04:30.159378 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 01:04:30.171233 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 24 01:04:30.179435 systemd[1]: Started sshd@0-10.0.0.5:22-10.0.0.1:43546.service - OpenSSH per-connection server daemon (10.0.0.1:43546).
Apr 24 01:04:30.193570 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 01:04:30.215545 systemd[1]: issuegen.service: Deactivated successfully.
Apr 24 01:04:30.215708 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 24 01:04:30.226697 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 01:04:30.250421 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 01:04:30.263787 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 24 01:04:30.273271 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 24 01:04:30.281762 systemd[1]: Reached target getty.target - Login Prompts.
Apr 24 01:04:30.319234 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 43546 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78
Apr 24 01:04:30.321636 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 01:04:30.324427 containerd[1574]: time="2026-04-24T01:04:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 24 01:04:30.324964 containerd[1574]: time="2026-04-24T01:04:30.324636471Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 24 01:04:30.333299 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 24 01:04:30.343140 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 24 01:04:30.354621 containerd[1574]: time="2026-04-24T01:04:30.354521022Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.153µs"
Apr 24 01:04:30.354621 containerd[1574]: time="2026-04-24T01:04:30.354549351Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 24 01:04:30.354621 containerd[1574]: time="2026-04-24T01:04:30.354563401Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 24 01:04:30.354691 containerd[1574]: time="2026-04-24T01:04:30.354671480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 24 01:04:30.354691 containerd[1574]: time="2026-04-24T01:04:30.354685228Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 24 01:04:30.354716 containerd[1574]: time="2026-04-24T01:04:30.354704200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355259 containerd[1574]: time="2026-04-24T01:04:30.354740052Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355259 containerd[1574]: time="2026-04-24T01:04:30.354749738Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355259 containerd[1574]: time="2026-04-24T01:04:30.355086258Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355259 containerd[1574]: time="2026-04-24T01:04:30.355097858Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355259 containerd[1574]: time="2026-04-24T01:04:30.355105303Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355259 containerd[1574]: time="2026-04-24T01:04:30.355110893Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355259 containerd[1574]: time="2026-04-24T01:04:30.355262097Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355500 containerd[1574]: time="2026-04-24T01:04:30.355396135Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355525 containerd[1574]: time="2026-04-24T01:04:30.355502154Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 24 01:04:30.355525 containerd[1574]: time="2026-04-24T01:04:30.355510820Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 24 01:04:30.355552 containerd[1574]: time="2026-04-24T01:04:30.355532850Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 24 01:04:30.356056 containerd[1574]: time="2026-04-24T01:04:30.355693182Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 24 01:04:30.356056 containerd[1574]: time="2026-04-24T01:04:30.355812208Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 01:04:30.365304 systemd-logind[1560]: New session 1 of user core.
Apr 24 01:04:30.368733 containerd[1574]: time="2026-04-24T01:04:30.368628133Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 24 01:04:30.369105 containerd[1574]: time="2026-04-24T01:04:30.368807298Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 24 01:04:30.369105 containerd[1574]: time="2026-04-24T01:04:30.369041903Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 24 01:04:30.369105 containerd[1574]: time="2026-04-24T01:04:30.369052911Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 24 01:04:30.369105 containerd[1574]: time="2026-04-24T01:04:30.369064659Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 24 01:04:30.369105 containerd[1574]: time="2026-04-24T01:04:30.369074624Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 24 01:04:30.369105 containerd[1574]: time="2026-04-24T01:04:30.369083073Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 24 01:04:30.369105 containerd[1574]: time="2026-04-24T01:04:30.369090948Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 24 01:04:30.369105 containerd[1574]: time="2026-04-24T01:04:30.369099732Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 24 01:04:30.369105 containerd[1574]: time="2026-04-24T01:04:30.369107484Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 24 01:04:30.369300 containerd[1574]: time="2026-04-24T01:04:30.369116855Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 24 
01:04:30.369300 containerd[1574]: time="2026-04-24T01:04:30.369126342Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 24 01:04:30.369300 containerd[1574]: time="2026-04-24T01:04:30.369286749Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 24 01:04:30.369340 containerd[1574]: time="2026-04-24T01:04:30.369302233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 24 01:04:30.369340 containerd[1574]: time="2026-04-24T01:04:30.369318298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 24 01:04:30.369340 containerd[1574]: time="2026-04-24T01:04:30.369326234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 24 01:04:30.369340 containerd[1574]: time="2026-04-24T01:04:30.369333999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 24 01:04:30.369391 containerd[1574]: time="2026-04-24T01:04:30.369341611Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 24 01:04:30.369391 containerd[1574]: time="2026-04-24T01:04:30.369349809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 24 01:04:30.369391 containerd[1574]: time="2026-04-24T01:04:30.369357509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 24 01:04:30.369391 containerd[1574]: time="2026-04-24T01:04:30.369365608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 24 01:04:30.369391 containerd[1574]: time="2026-04-24T01:04:30.369375922Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 24 01:04:30.369391 containerd[1574]: time="2026-04-24T01:04:30.369383463Z" 
level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 24 01:04:30.369471 containerd[1574]: time="2026-04-24T01:04:30.369420287Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 24 01:04:30.369471 containerd[1574]: time="2026-04-24T01:04:30.369430043Z" level=info msg="Start snapshots syncer" Apr 24 01:04:30.369713 containerd[1574]: time="2026-04-24T01:04:30.369530087Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 24 01:04:30.369984 containerd[1574]: time="2026-04-24T01:04:30.369718316Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 24 01:04:30.369984 containerd[1574]: time="2026-04-24T01:04:30.369954080Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 24 01:04:30.371649 containerd[1574]: time="2026-04-24T01:04:30.371561712Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 24 01:04:30.371687 containerd[1574]: time="2026-04-24T01:04:30.371649278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 24 01:04:30.371687 containerd[1574]: time="2026-04-24T01:04:30.371667539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 24 01:04:30.371687 containerd[1574]: time="2026-04-24T01:04:30.371676050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 24 01:04:30.371687 containerd[1574]: time="2026-04-24T01:04:30.371684996Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 24 01:04:30.371740 containerd[1574]: time="2026-04-24T01:04:30.371695802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 24 01:04:30.371740 containerd[1574]: time="2026-04-24T01:04:30.371703879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 24 01:04:30.371740 containerd[1574]: time="2026-04-24T01:04:30.371712478Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 24 01:04:30.371740 containerd[1574]: time="2026-04-24T01:04:30.371730597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 24 01:04:30.371740 containerd[1574]: time="2026-04-24T01:04:30.371739145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 24 01:04:30.371806 containerd[1574]: time="2026-04-24T01:04:30.371747329Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372024505Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372040448Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372047113Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372053728Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372059259Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372065937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372077678Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 24 
01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372089293Z" level=info msg="runtime interface created" Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372093370Z" level=info msg="created NRI interface" Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372098510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372107297Z" level=info msg="Connect containerd service" Apr 24 01:04:30.372273 containerd[1574]: time="2026-04-24T01:04:30.372122103Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 01:04:30.373621 containerd[1574]: time="2026-04-24T01:04:30.373475474Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 01:04:30.381705 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 01:04:30.394631 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 01:04:30.416817 (systemd)[1642]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 01:04:30.421354 systemd-logind[1560]: New session c1 of user core. Apr 24 01:04:30.453777 tar[1572]: linux-amd64/README.md Apr 24 01:04:30.474810 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 24 01:04:30.477276 containerd[1574]: time="2026-04-24T01:04:30.477250839Z" level=info msg="Start subscribing containerd event" Apr 24 01:04:30.477815 containerd[1574]: time="2026-04-24T01:04:30.477349602Z" level=info msg="Start recovering state" Apr 24 01:04:30.477815 containerd[1574]: time="2026-04-24T01:04:30.477419928Z" level=info msg="Start event monitor" Apr 24 01:04:30.477815 containerd[1574]: time="2026-04-24T01:04:30.477430092Z" level=info msg="Start cni network conf syncer for default" Apr 24 01:04:30.477815 containerd[1574]: time="2026-04-24T01:04:30.477435418Z" level=info msg="Start streaming server" Apr 24 01:04:30.477815 containerd[1574]: time="2026-04-24T01:04:30.477448208Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 24 01:04:30.477815 containerd[1574]: time="2026-04-24T01:04:30.477453887Z" level=info msg="runtime interface starting up..." Apr 24 01:04:30.477815 containerd[1574]: time="2026-04-24T01:04:30.477458304Z" level=info msg="starting plugins..." Apr 24 01:04:30.477815 containerd[1574]: time="2026-04-24T01:04:30.477465898Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 24 01:04:30.478248 containerd[1574]: time="2026-04-24T01:04:30.478141924Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 01:04:30.478314 containerd[1574]: time="2026-04-24T01:04:30.478306355Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 01:04:30.482696 containerd[1574]: time="2026-04-24T01:04:30.482679514Z" level=info msg="containerd successfully booted in 0.158685s" Apr 24 01:04:30.483414 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 01:04:30.558135 systemd[1642]: Queued start job for default target default.target. Apr 24 01:04:30.569005 systemd[1642]: Created slice app.slice - User Application Slice. Apr 24 01:04:30.569029 systemd[1642]: Reached target paths.target - Paths. 
Apr 24 01:04:30.569138 systemd[1642]: Reached target timers.target - Timers. Apr 24 01:04:30.570727 systemd[1642]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 01:04:30.587626 systemd[1642]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 01:04:30.587796 systemd[1642]: Reached target sockets.target - Sockets. Apr 24 01:04:30.588085 systemd[1642]: Reached target basic.target - Basic System. Apr 24 01:04:30.588237 systemd[1642]: Reached target default.target - Main User Target. Apr 24 01:04:30.588256 systemd[1642]: Startup finished in 156ms. Apr 24 01:04:30.588299 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 01:04:30.597498 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 01:04:30.620342 systemd[1]: Started sshd@1-10.0.0.5:22-10.0.0.1:43556.service - OpenSSH per-connection server daemon (10.0.0.1:43556). Apr 24 01:04:30.677954 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 43556 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:04:30.679405 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:04:30.688625 systemd-logind[1560]: New session 2 of user core. Apr 24 01:04:30.698350 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 01:04:30.707106 systemd-networkd[1478]: eth0: Gained IPv6LL Apr 24 01:04:30.709385 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 01:04:30.720353 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 01:04:30.730083 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 24 01:04:30.752964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 01:04:30.762640 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Apr 24 01:04:30.789007 sshd[1680]: Connection closed by 10.0.0.1 port 43556 Apr 24 01:04:30.789392 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Apr 24 01:04:30.803067 systemd[1]: sshd@1-10.0.0.5:22-10.0.0.1:43556.service: Deactivated successfully. Apr 24 01:04:30.804578 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 24 01:04:30.805041 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 24 01:04:30.813641 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 01:04:30.822432 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 01:04:30.824071 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Apr 24 01:04:30.826305 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 01:04:30.827732 systemd[1]: Started sshd@2-10.0.0.5:22-10.0.0.1:43562.service - OpenSSH per-connection server daemon (10.0.0.1:43562). Apr 24 01:04:30.839662 systemd-logind[1560]: Removed session 2. Apr 24 01:04:30.901061 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 43562 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:04:30.902516 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:04:30.909547 systemd-logind[1560]: New session 3 of user core. Apr 24 01:04:30.920279 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 01:04:30.947719 sshd[1700]: Connection closed by 10.0.0.1 port 43562 Apr 24 01:04:30.948734 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Apr 24 01:04:30.952329 systemd[1]: sshd@2-10.0.0.5:22-10.0.0.1:43562.service: Deactivated successfully. Apr 24 01:04:30.954058 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 01:04:30.955451 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. 
Apr 24 01:04:30.958126 systemd-logind[1560]: Removed session 3. Apr 24 01:04:31.810733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 01:04:31.820113 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 01:04:31.828611 systemd[1]: Startup finished in 5.383s (kernel) + 11.151s (initrd) + 6.564s (userspace) = 23.099s. Apr 24 01:04:31.837385 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 01:04:32.438991 kubelet[1710]: E0424 01:04:32.438726 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 01:04:32.441239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 01:04:32.441404 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 01:04:32.441956 systemd[1]: kubelet.service: Consumed 991ms CPU time, 257.6M memory peak. Apr 24 01:04:40.961362 systemd[1]: Started sshd@3-10.0.0.5:22-10.0.0.1:60574.service - OpenSSH per-connection server daemon (10.0.0.1:60574). Apr 24 01:04:41.020684 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 60574 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:04:41.022126 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:04:41.027568 systemd-logind[1560]: New session 4 of user core. Apr 24 01:04:41.037102 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 24 01:04:41.051962 sshd[1726]: Connection closed by 10.0.0.1 port 60574 Apr 24 01:04:41.052468 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Apr 24 01:04:41.059595 systemd[1]: sshd@3-10.0.0.5:22-10.0.0.1:60574.service: Deactivated successfully. Apr 24 01:04:41.062084 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 01:04:41.063317 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Apr 24 01:04:41.065299 systemd[1]: Started sshd@4-10.0.0.5:22-10.0.0.1:60576.service - OpenSSH per-connection server daemon (10.0.0.1:60576). Apr 24 01:04:41.066554 systemd-logind[1560]: Removed session 4. Apr 24 01:04:41.123797 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 60576 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:04:41.125252 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:04:41.132301 systemd-logind[1560]: New session 5 of user core. Apr 24 01:04:41.142261 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 24 01:04:41.151306 sshd[1736]: Connection closed by 10.0.0.1 port 60576 Apr 24 01:04:41.151500 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Apr 24 01:04:41.167337 systemd[1]: sshd@4-10.0.0.5:22-10.0.0.1:60576.service: Deactivated successfully. Apr 24 01:04:41.168804 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 01:04:41.169962 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit. Apr 24 01:04:41.172315 systemd[1]: Started sshd@5-10.0.0.5:22-10.0.0.1:60580.service - OpenSSH per-connection server daemon (10.0.0.1:60580). Apr 24 01:04:41.173664 systemd-logind[1560]: Removed session 5. 
Apr 24 01:04:41.232763 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 60580 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:04:41.234514 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:04:41.239965 systemd-logind[1560]: New session 6 of user core. Apr 24 01:04:41.247396 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 24 01:04:41.265306 sshd[1745]: Connection closed by 10.0.0.1 port 60580 Apr 24 01:04:41.265688 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Apr 24 01:04:41.278565 systemd[1]: sshd@5-10.0.0.5:22-10.0.0.1:60580.service: Deactivated successfully. Apr 24 01:04:41.280544 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 01:04:41.282129 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Apr 24 01:04:41.284780 systemd[1]: Started sshd@6-10.0.0.5:22-10.0.0.1:60594.service - OpenSSH per-connection server daemon (10.0.0.1:60594). Apr 24 01:04:41.286513 systemd-logind[1560]: Removed session 6. Apr 24 01:04:41.344143 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 60594 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:04:41.345394 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:04:41.350606 systemd-logind[1560]: New session 7 of user core. Apr 24 01:04:41.364281 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 24 01:04:41.387634 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 01:04:41.388024 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 01:04:41.410769 sudo[1756]: pam_unix(sudo:session): session closed for user root Apr 24 01:04:41.413094 sshd[1755]: Connection closed by 10.0.0.1 port 60594 Apr 24 01:04:41.413370 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Apr 24 01:04:41.426655 systemd[1]: sshd@6-10.0.0.5:22-10.0.0.1:60594.service: Deactivated successfully. Apr 24 01:04:41.428304 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 01:04:41.429245 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Apr 24 01:04:41.431563 systemd[1]: Started sshd@7-10.0.0.5:22-10.0.0.1:60600.service - OpenSSH per-connection server daemon (10.0.0.1:60600). Apr 24 01:04:41.432978 systemd-logind[1560]: Removed session 7. Apr 24 01:04:41.498757 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 60600 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:04:41.499955 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:04:41.505459 systemd-logind[1560]: New session 8 of user core. Apr 24 01:04:41.516102 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 24 01:04:41.532800 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 01:04:41.533312 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 01:04:41.539808 sudo[1767]: pam_unix(sudo:session): session closed for user root Apr 24 01:04:41.545976 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 24 01:04:41.546383 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 01:04:41.558932 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 24 01:04:41.613445 augenrules[1789]: No rules Apr 24 01:04:41.614565 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 01:04:41.614799 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 24 01:04:41.615924 sudo[1766]: pam_unix(sudo:session): session closed for user root Apr 24 01:04:41.617767 sshd[1765]: Connection closed by 10.0.0.1 port 60600 Apr 24 01:04:41.617974 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Apr 24 01:04:41.624557 systemd[1]: sshd@7-10.0.0.5:22-10.0.0.1:60600.service: Deactivated successfully. Apr 24 01:04:41.625815 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 01:04:41.627014 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Apr 24 01:04:41.628601 systemd[1]: Started sshd@8-10.0.0.5:22-10.0.0.1:60610.service - OpenSSH per-connection server daemon (10.0.0.1:60610). Apr 24 01:04:41.629635 systemd-logind[1560]: Removed session 8. Apr 24 01:04:41.685167 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 60610 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:04:41.686092 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:04:41.691134 systemd-logind[1560]: New session 9 of user core. 
Apr 24 01:04:41.703150 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 01:04:41.715675 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 01:04:41.715985 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 01:04:42.025957 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 01:04:42.045138 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 01:04:42.682654 dockerd[1822]: time="2026-04-24T01:04:42.681536150Z" level=info msg="Starting up" Apr 24 01:04:42.684371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 01:04:42.685543 dockerd[1822]: time="2026-04-24T01:04:42.685420219Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 24 01:04:42.685906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 01:04:42.768750 dockerd[1822]: time="2026-04-24T01:04:42.768524143Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 24 01:04:42.887439 systemd[1]: var-lib-docker-metacopy\x2dcheck2110752947-merged.mount: Deactivated successfully. Apr 24 01:04:42.928022 dockerd[1822]: time="2026-04-24T01:04:42.927956419Z" level=info msg="Loading containers: start." Apr 24 01:04:42.933361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 01:04:42.940965 kernel: Initializing XFRM netlink socket Apr 24 01:04:42.946414 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 01:04:43.077912 kubelet[1856]: E0424 01:04:43.077359 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 01:04:43.080633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 01:04:43.080770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 01:04:43.081270 systemd[1]: kubelet.service: Consumed 303ms CPU time, 110.2M memory peak. Apr 24 01:04:43.393253 systemd-networkd[1478]: docker0: Link UP Apr 24 01:04:43.401166 dockerd[1822]: time="2026-04-24T01:04:43.401051019Z" level=info msg="Loading containers: done." Apr 24 01:04:43.452676 dockerd[1822]: time="2026-04-24T01:04:43.452517603Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 01:04:43.452996 dockerd[1822]: time="2026-04-24T01:04:43.452937869Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 24 01:04:43.453133 dockerd[1822]: time="2026-04-24T01:04:43.453080831Z" level=info msg="Initializing buildkit" Apr 24 01:04:43.505011 dockerd[1822]: time="2026-04-24T01:04:43.504647101Z" level=info msg="Completed buildkit initialization" Apr 24 01:04:43.513370 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 24 01:04:43.514412 dockerd[1822]: time="2026-04-24T01:04:43.512747721Z" level=info msg="Daemon has completed initialization" Apr 24 01:04:43.515134 dockerd[1822]: time="2026-04-24T01:04:43.515059342Z" level=info msg="API listen on /run/docker.sock" Apr 24 01:04:44.427929 containerd[1574]: time="2026-04-24T01:04:44.427605284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\"" Apr 24 01:04:45.216170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174429863.mount: Deactivated successfully. Apr 24 01:04:48.908632 containerd[1574]: time="2026-04-24T01:04:48.908495436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:48.909437 containerd[1574]: time="2026-04-24T01:04:48.909306481Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27099952" Apr 24 01:04:48.911603 containerd[1574]: time="2026-04-24T01:04:48.911435598Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:48.916156 containerd[1574]: time="2026-04-24T01:04:48.916029192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:48.918347 containerd[1574]: time="2026-04-24T01:04:48.918125693Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 4.490424681s" Apr 24 01:04:48.918347 containerd[1574]: 
time="2026-04-24T01:04:48.918343553Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\"" Apr 24 01:04:48.929646 containerd[1574]: time="2026-04-24T01:04:48.929442672Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\"" Apr 24 01:04:51.499088 containerd[1574]: time="2026-04-24T01:04:51.498769669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:51.499951 containerd[1574]: time="2026-04-24T01:04:51.499890251Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252670" Apr 24 01:04:51.501687 containerd[1574]: time="2026-04-24T01:04:51.501626355Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:51.509990 containerd[1574]: time="2026-04-24T01:04:51.509949774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:51.511050 containerd[1574]: time="2026-04-24T01:04:51.510971848Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 2.581464525s" Apr 24 01:04:51.511050 containerd[1574]: time="2026-04-24T01:04:51.511025138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image 
reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\"" Apr 24 01:04:51.514052 containerd[1574]: time="2026-04-24T01:04:51.514028441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\"" Apr 24 01:04:53.239490 containerd[1574]: time="2026-04-24T01:04:53.239288487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:53.240589 containerd[1574]: time="2026-04-24T01:04:53.240474948Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810823" Apr 24 01:04:53.242408 containerd[1574]: time="2026-04-24T01:04:53.242313183Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:53.246923 containerd[1574]: time="2026-04-24T01:04:53.246713270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:53.248665 containerd[1574]: time="2026-04-24T01:04:53.248613915Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.734472741s" Apr 24 01:04:53.248701 containerd[1574]: time="2026-04-24T01:04:53.248667673Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\"" Apr 24 01:04:53.250906 containerd[1574]: time="2026-04-24T01:04:53.250700097Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.34.7\"" Apr 24 01:04:53.334213 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 01:04:53.336535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 01:04:53.544058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 01:04:53.562541 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 01:04:54.078353 kubelet[2132]: E0424 01:04:54.078164 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 01:04:54.080693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 01:04:54.080982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 01:04:54.081625 systemd[1]: kubelet.service: Consumed 694ms CPU time, 110.8M memory peak. Apr 24 01:04:54.756113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2842621474.mount: Deactivated successfully. 
Apr 24 01:04:55.842167 containerd[1574]: time="2026-04-24T01:04:55.842043771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:55.842928 containerd[1574]: time="2026-04-24T01:04:55.842894763Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972848" Apr 24 01:04:55.844542 containerd[1574]: time="2026-04-24T01:04:55.844384494Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:55.847024 containerd[1574]: time="2026-04-24T01:04:55.846940721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:55.847683 containerd[1574]: time="2026-04-24T01:04:55.847618591Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 2.596862059s" Apr 24 01:04:55.847683 containerd[1574]: time="2026-04-24T01:04:55.847677374Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\"" Apr 24 01:04:55.850072 containerd[1574]: time="2026-04-24T01:04:55.850015465Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 24 01:04:56.285446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586700217.mount: Deactivated successfully. 
Apr 24 01:04:57.165078 containerd[1574]: time="2026-04-24T01:04:57.164976852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:57.165485 containerd[1574]: time="2026-04-24T01:04:57.165459512Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22387483" Apr 24 01:04:57.166797 containerd[1574]: time="2026-04-24T01:04:57.166742512Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:57.170302 containerd[1574]: time="2026-04-24T01:04:57.170191341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:57.171180 containerd[1574]: time="2026-04-24T01:04:57.171088497Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.321013082s" Apr 24 01:04:57.171180 containerd[1574]: time="2026-04-24T01:04:57.171147694Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 24 01:04:57.172563 containerd[1574]: time="2026-04-24T01:04:57.172502587Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 24 01:04:57.586470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3813092553.mount: Deactivated successfully. 
Apr 24 01:04:57.595171 containerd[1574]: time="2026-04-24T01:04:57.595011753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:57.596490 containerd[1574]: time="2026-04-24T01:04:57.596407242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 24 01:04:57.598400 containerd[1574]: time="2026-04-24T01:04:57.598218695Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:57.600750 containerd[1574]: time="2026-04-24T01:04:57.600638318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:57.601444 containerd[1574]: time="2026-04-24T01:04:57.601380028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 428.854446ms" Apr 24 01:04:57.601477 containerd[1574]: time="2026-04-24T01:04:57.601445233Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 24 01:04:57.602780 containerd[1574]: time="2026-04-24T01:04:57.602709062Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 24 01:04:58.065569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1835174898.mount: Deactivated successfully. 
Apr 24 01:04:58.994084 containerd[1574]: time="2026-04-24T01:04:58.993743446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:58.995365 containerd[1574]: time="2026-04-24T01:04:58.995337646Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874255" Apr 24 01:04:58.996718 containerd[1574]: time="2026-04-24T01:04:58.996658665Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:58.999226 containerd[1574]: time="2026-04-24T01:04:58.999164622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:04:59.001803 containerd[1574]: time="2026-04-24T01:04:59.001723764Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.398951426s" Apr 24 01:04:59.001803 containerd[1574]: time="2026-04-24T01:04:59.001787250Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 24 01:05:01.991237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 01:05:01.991422 systemd[1]: kubelet.service: Consumed 694ms CPU time, 110.8M memory peak. Apr 24 01:05:01.993480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 01:05:02.018805 systemd[1]: Reload requested from client PID 2299 ('systemctl') (unit session-9.scope)... 
Apr 24 01:05:02.018968 systemd[1]: Reloading... Apr 24 01:05:02.116021 zram_generator::config[2341]: No configuration found. Apr 24 01:05:02.276567 systemd[1]: Reloading finished in 257 ms. Apr 24 01:05:02.345240 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 01:05:02.345397 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 01:05:02.345702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 01:05:02.345774 systemd[1]: kubelet.service: Consumed 99ms CPU time, 98.4M memory peak. Apr 24 01:05:02.347257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 01:05:02.482918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 01:05:02.494111 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 01:05:02.565484 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 01:05:02.565484 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 01:05:02.565484 kubelet[2389]: I0424 01:05:02.565463 2389 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 01:05:03.029657 kubelet[2389]: I0424 01:05:03.029587 2389 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 24 01:05:03.029657 kubelet[2389]: I0424 01:05:03.029644 2389 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 01:05:03.029657 kubelet[2389]: I0424 01:05:03.029662 2389 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 01:05:03.029657 kubelet[2389]: I0424 01:05:03.029670 2389 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 01:05:03.029992 kubelet[2389]: I0424 01:05:03.029944 2389 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 01:05:03.073732 kubelet[2389]: I0424 01:05:03.073543 2389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 01:05:03.073732 kubelet[2389]: E0424 01:05:03.073547 2389 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 01:05:03.079987 kubelet[2389]: I0424 01:05:03.079907 2389 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 01:05:03.087929 kubelet[2389]: I0424 01:05:03.087331 2389 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 01:05:03.088635 kubelet[2389]: I0424 01:05:03.088575 2389 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 01:05:03.089946 kubelet[2389]: I0424 01:05:03.088636 2389 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 01:05:03.089946 kubelet[2389]: I0424 01:05:03.089930 2389 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 01:05:03.089946 
kubelet[2389]: I0424 01:05:03.089938 2389 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 01:05:03.090156 kubelet[2389]: I0424 01:05:03.090006 2389 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 01:05:03.093182 kubelet[2389]: I0424 01:05:03.093010 2389 state_mem.go:36] "Initialized new in-memory state store" Apr 24 01:05:03.093248 kubelet[2389]: I0424 01:05:03.093221 2389 kubelet.go:475] "Attempting to sync node with API server" Apr 24 01:05:03.093248 kubelet[2389]: I0424 01:05:03.093231 2389 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 01:05:03.093330 kubelet[2389]: I0424 01:05:03.093249 2389 kubelet.go:387] "Adding apiserver pod source" Apr 24 01:05:03.093330 kubelet[2389]: I0424 01:05:03.093302 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 01:05:03.094072 kubelet[2389]: E0424 01:05:03.093978 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 01:05:03.094072 kubelet[2389]: E0424 01:05:03.094014 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 01:05:03.094950 kubelet[2389]: I0424 01:05:03.094926 2389 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 01:05:03.095735 kubelet[2389]: I0424 01:05:03.095384 2389 kubelet.go:940] "Not starting ClusterTrustBundle informer because we 
are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 01:05:03.095735 kubelet[2389]: I0424 01:05:03.095442 2389 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 01:05:03.095735 kubelet[2389]: W0424 01:05:03.095474 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 24 01:05:03.101299 kubelet[2389]: I0424 01:05:03.101200 2389 server.go:1262] "Started kubelet" Apr 24 01:05:03.102472 kubelet[2389]: I0424 01:05:03.102373 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 01:05:03.104627 kubelet[2389]: I0424 01:05:03.104168 2389 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 01:05:03.104627 kubelet[2389]: I0424 01:05:03.104230 2389 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 01:05:03.104627 kubelet[2389]: I0424 01:05:03.104489 2389 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 01:05:03.104627 kubelet[2389]: I0424 01:05:03.104524 2389 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 01:05:03.105384 kubelet[2389]: E0424 01:05:03.104016 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.5:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.5:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a92572d4ce0a6c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-24 01:05:03.10114366 +0000 UTC 
m=+0.602185015,LastTimestamp:2026-04-24 01:05:03.10114366 +0000 UTC m=+0.602185015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 24 01:05:03.105518 kubelet[2389]: I0424 01:05:03.105485 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 01:05:03.108249 kubelet[2389]: E0424 01:05:03.108122 2389 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 24 01:05:03.108956 kubelet[2389]: I0424 01:05:03.108323 2389 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 24 01:05:03.109404 kubelet[2389]: I0424 01:05:03.108331 2389 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 01:05:03.109543 kubelet[2389]: I0424 01:05:03.109536 2389 reconciler.go:29] "Reconciler: start to sync state" Apr 24 01:05:03.109667 kubelet[2389]: I0424 01:05:03.109655 2389 server.go:310] "Adding debug handlers to kubelet server" Apr 24 01:05:03.110449 kubelet[2389]: I0424 01:05:03.110244 2389 factory.go:223] Registration of the systemd container factory successfully Apr 24 01:05:03.110659 kubelet[2389]: I0424 01:05:03.110554 2389 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 01:05:03.110992 kubelet[2389]: E0424 01:05:03.110755 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="200ms" Apr 24 01:05:03.113055 kubelet[2389]: E0424 01:05:03.112988 2389 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 01:05:03.114194 kubelet[2389]: E0424 01:05:03.109715 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 01:05:03.117711 kubelet[2389]: I0424 01:05:03.117605 2389 factory.go:223] Registration of the containerd container factory successfully Apr 24 01:05:03.135172 kubelet[2389]: I0424 01:05:03.135125 2389 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 01:05:03.135172 kubelet[2389]: I0424 01:05:03.135171 2389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 01:05:03.135239 kubelet[2389]: I0424 01:05:03.135182 2389 state_mem.go:36] "Initialized new in-memory state store" Apr 24 01:05:03.139146 kubelet[2389]: I0424 01:05:03.139067 2389 policy_none.go:49] "None policy: Start" Apr 24 01:05:03.139146 kubelet[2389]: I0424 01:05:03.139112 2389 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 01:05:03.139325 kubelet[2389]: I0424 01:05:03.139155 2389 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 01:05:03.141194 kubelet[2389]: I0424 01:05:03.140574 2389 policy_none.go:47] "Start" Apr 24 01:05:03.143404 kubelet[2389]: I0424 01:05:03.143373 2389 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 01:05:03.147016 kubelet[2389]: I0424 01:05:03.146694 2389 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 24 01:05:03.147016 kubelet[2389]: I0424 01:05:03.146741 2389 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 24 01:05:03.147016 kubelet[2389]: I0424 01:05:03.146754 2389 kubelet.go:2428] "Starting kubelet main sync loop" Apr 24 01:05:03.147016 kubelet[2389]: E0424 01:05:03.146780 2389 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 01:05:03.147439 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 24 01:05:03.147619 kubelet[2389]: E0424 01:05:03.147542 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 01:05:03.157165 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 01:05:03.160423 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 24 01:05:03.171125 kubelet[2389]: E0424 01:05:03.171012 2389 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 01:05:03.171580 kubelet[2389]: I0424 01:05:03.171301 2389 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 01:05:03.171580 kubelet[2389]: I0424 01:05:03.171436 2389 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 01:05:03.172661 kubelet[2389]: I0424 01:05:03.172123 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 01:05:03.174955 kubelet[2389]: E0424 01:05:03.174807 2389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 01:05:03.174994 kubelet[2389]: E0424 01:05:03.174967 2389 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 24 01:05:03.263762 systemd[1]: Created slice kubepods-burstable-pod29096e223a27687eeb13699b492ec48c.slice - libcontainer container kubepods-burstable-pod29096e223a27687eeb13699b492ec48c.slice. 
Apr 24 01:05:03.273973 kubelet[2389]: I0424 01:05:03.273770 2389 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 01:05:03.274651 kubelet[2389]: E0424 01:05:03.274441 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 24 01:05:03.281432 kubelet[2389]: E0424 01:05:03.281235 2389 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 01:05:03.285407 systemd[1]: Created slice kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice - libcontainer container kubepods-burstable-podc6bb8708a026256e82ca4c5631a78b5a.slice. Apr 24 01:05:03.287354 kubelet[2389]: E0424 01:05:03.287245 2389 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 01:05:03.296893 systemd[1]: Created slice kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice - libcontainer container kubepods-burstable-pod824fd89300514e351ed3b68d82c665c6.slice. 
Apr 24 01:05:03.298967 kubelet[2389]: E0424 01:05:03.298725 2389 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 01:05:03.312002 kubelet[2389]: E0424 01:05:03.311954 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="400ms" Apr 24 01:05:03.410767 kubelet[2389]: I0424 01:05:03.410700 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost" Apr 24 01:05:03.410767 kubelet[2389]: I0424 01:05:03.410766 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29096e223a27687eeb13699b492ec48c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"29096e223a27687eeb13699b492ec48c\") " pod="kube-system/kube-apiserver-localhost" Apr 24 01:05:03.410960 kubelet[2389]: I0424 01:05:03.410782 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 01:05:03.410960 kubelet[2389]: I0424 01:05:03.410797 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 01:05:03.411151 kubelet[2389]: I0424 01:05:03.410812 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29096e223a27687eeb13699b492ec48c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"29096e223a27687eeb13699b492ec48c\") " pod="kube-system/kube-apiserver-localhost" Apr 24 01:05:03.411151 kubelet[2389]: I0424 01:05:03.411143 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29096e223a27687eeb13699b492ec48c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"29096e223a27687eeb13699b492ec48c\") " pod="kube-system/kube-apiserver-localhost" Apr 24 01:05:03.411192 kubelet[2389]: I0424 01:05:03.411158 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 01:05:03.411370 kubelet[2389]: I0424 01:05:03.411170 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 01:05:03.411370 kubelet[2389]: I0424 01:05:03.411367 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") 
pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost" Apr 24 01:05:03.476999 kubelet[2389]: I0424 01:05:03.476959 2389 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 01:05:03.477389 kubelet[2389]: E0424 01:05:03.477339 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 24 01:05:03.586726 kubelet[2389]: E0424 01:05:03.586312 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:03.589088 containerd[1574]: time="2026-04-24T01:05:03.589035898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:29096e223a27687eeb13699b492ec48c,Namespace:kube-system,Attempt:0,}" Apr 24 01:05:03.591588 kubelet[2389]: E0424 01:05:03.591519 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:03.592279 containerd[1574]: time="2026-04-24T01:05:03.592180674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,}" Apr 24 01:05:03.602511 kubelet[2389]: E0424 01:05:03.602184 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:03.603232 containerd[1574]: time="2026-04-24T01:05:03.602719641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,}" Apr 24 01:05:03.712915 kubelet[2389]: 
E0424 01:05:03.712590 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="800ms" Apr 24 01:05:03.880396 kubelet[2389]: I0424 01:05:03.880334 2389 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 01:05:03.880715 kubelet[2389]: E0424 01:05:03.880597 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.5:6443/api/v1/nodes\": dial tcp 10.0.0.5:6443: connect: connection refused" node="localhost" Apr 24 01:05:03.965526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213810370.mount: Deactivated successfully. Apr 24 01:05:03.971129 containerd[1574]: time="2026-04-24T01:05:03.970991142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 01:05:03.974049 containerd[1574]: time="2026-04-24T01:05:03.974020669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 24 01:05:03.975658 containerd[1574]: time="2026-04-24T01:05:03.975561041Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 01:05:03.978105 containerd[1574]: time="2026-04-24T01:05:03.977948394Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 01:05:03.979447 containerd[1574]: time="2026-04-24T01:05:03.979373598Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 01:05:03.981000 containerd[1574]: time="2026-04-24T01:05:03.980668948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 24 01:05:03.981721 containerd[1574]: time="2026-04-24T01:05:03.981545557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 24 01:05:03.982400 containerd[1574]: time="2026-04-24T01:05:03.982314104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 01:05:03.984788 containerd[1574]: time="2026-04-24T01:05:03.984651545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 389.17053ms" Apr 24 01:05:03.987571 containerd[1574]: time="2026-04-24T01:05:03.987518461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 395.799403ms" Apr 24 01:05:03.988117 containerd[1574]: time="2026-04-24T01:05:03.988065171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 383.335645ms" Apr 24 01:05:04.020504 containerd[1574]: time="2026-04-24T01:05:04.020047640Z" level=info msg="connecting to shim 1701fab6c9edc9e759e6f03e0e22f45fa623dea5a8f4d083eee15ff03b1c96e9" address="unix:///run/containerd/s/c2aed803db38a6ee21d9e8938099246e6a9334d5b23ae347eb5ea7a28b5016b7" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:05:04.022920 containerd[1574]: time="2026-04-24T01:05:04.022667221Z" level=info msg="connecting to shim 8bf81078f974c89eb157491eb153495a2e07812dde3ef9122555d554545fe1fe" address="unix:///run/containerd/s/056e24c6fd4691d61a5a2179dfb20d114a7ff787026e96f0568a15bba4cb18e5" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:05:04.031719 containerd[1574]: time="2026-04-24T01:05:04.031695188Z" level=info msg="connecting to shim b32e2f75edadc5ab5b3d42d4024bcb21afd7e19cc16d1fcccbc90b8b453b27ed" address="unix:///run/containerd/s/0a000b274f981075d0e3ea1a778c575b19949ff91695c7ffb3cb6ee8738ba794" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:05:04.061519 systemd[1]: Started cri-containerd-8bf81078f974c89eb157491eb153495a2e07812dde3ef9122555d554545fe1fe.scope - libcontainer container 8bf81078f974c89eb157491eb153495a2e07812dde3ef9122555d554545fe1fe. Apr 24 01:05:04.067147 systemd[1]: Started cri-containerd-1701fab6c9edc9e759e6f03e0e22f45fa623dea5a8f4d083eee15ff03b1c96e9.scope - libcontainer container 1701fab6c9edc9e759e6f03e0e22f45fa623dea5a8f4d083eee15ff03b1c96e9. Apr 24 01:05:04.091357 systemd[1]: Started cri-containerd-b32e2f75edadc5ab5b3d42d4024bcb21afd7e19cc16d1fcccbc90b8b453b27ed.scope - libcontainer container b32e2f75edadc5ab5b3d42d4024bcb21afd7e19cc16d1fcccbc90b8b453b27ed. 
Apr 24 01:05:04.160961 containerd[1574]: time="2026-04-24T01:05:04.160052323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:29096e223a27687eeb13699b492ec48c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1701fab6c9edc9e759e6f03e0e22f45fa623dea5a8f4d083eee15ff03b1c96e9\"" Apr 24 01:05:04.164039 containerd[1574]: time="2026-04-24T01:05:04.164015204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c6bb8708a026256e82ca4c5631a78b5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bf81078f974c89eb157491eb153495a2e07812dde3ef9122555d554545fe1fe\"" Apr 24 01:05:04.164125 kubelet[2389]: E0424 01:05:04.164034 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:04.165122 kubelet[2389]: E0424 01:05:04.165108 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:04.167780 containerd[1574]: time="2026-04-24T01:05:04.167693845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:824fd89300514e351ed3b68d82c665c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b32e2f75edadc5ab5b3d42d4024bcb21afd7e19cc16d1fcccbc90b8b453b27ed\"" Apr 24 01:05:04.169393 kubelet[2389]: E0424 01:05:04.169160 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:04.171431 containerd[1574]: time="2026-04-24T01:05:04.171300235Z" level=info msg="CreateContainer within sandbox \"1701fab6c9edc9e759e6f03e0e22f45fa623dea5a8f4d083eee15ff03b1c96e9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 01:05:04.173894 containerd[1574]: 
time="2026-04-24T01:05:04.173457818Z" level=info msg="CreateContainer within sandbox \"8bf81078f974c89eb157491eb153495a2e07812dde3ef9122555d554545fe1fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 01:05:04.181928 containerd[1574]: time="2026-04-24T01:05:04.181542017Z" level=info msg="CreateContainer within sandbox \"b32e2f75edadc5ab5b3d42d4024bcb21afd7e19cc16d1fcccbc90b8b453b27ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 01:05:04.188964 containerd[1574]: time="2026-04-24T01:05:04.188773990Z" level=info msg="Container 5f68bbcb55b84b84367adda41ce30ad47bc7cc76d203d2b575880a098996fde9: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:04.196398 containerd[1574]: time="2026-04-24T01:05:04.196320676Z" level=info msg="Container 16ab28219ae022bafef2c88e9fa0c132bf3d9c6206e78e2eccc8990c79026fa0: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:04.198366 kubelet[2389]: E0424 01:05:04.198176 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 01:05:04.200104 containerd[1574]: time="2026-04-24T01:05:04.200047626Z" level=info msg="CreateContainer within sandbox \"1701fab6c9edc9e759e6f03e0e22f45fa623dea5a8f4d083eee15ff03b1c96e9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f68bbcb55b84b84367adda41ce30ad47bc7cc76d203d2b575880a098996fde9\"" Apr 24 01:05:04.201955 containerd[1574]: time="2026-04-24T01:05:04.201777582Z" level=info msg="Container d43c0419a8a779dd160252cb19a26ddb6ce3ffce5191bf6585ba2a23515b1fe2: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:04.203627 containerd[1574]: time="2026-04-24T01:05:04.203469987Z" level=info msg="StartContainer for 
\"5f68bbcb55b84b84367adda41ce30ad47bc7cc76d203d2b575880a098996fde9\"" Apr 24 01:05:04.205074 containerd[1574]: time="2026-04-24T01:05:04.204609262Z" level=info msg="connecting to shim 5f68bbcb55b84b84367adda41ce30ad47bc7cc76d203d2b575880a098996fde9" address="unix:///run/containerd/s/c2aed803db38a6ee21d9e8938099246e6a9334d5b23ae347eb5ea7a28b5016b7" protocol=ttrpc version=3 Apr 24 01:05:04.207651 containerd[1574]: time="2026-04-24T01:05:04.207540851Z" level=info msg="CreateContainer within sandbox \"8bf81078f974c89eb157491eb153495a2e07812dde3ef9122555d554545fe1fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"16ab28219ae022bafef2c88e9fa0c132bf3d9c6206e78e2eccc8990c79026fa0\"" Apr 24 01:05:04.209294 containerd[1574]: time="2026-04-24T01:05:04.209162339Z" level=info msg="StartContainer for \"16ab28219ae022bafef2c88e9fa0c132bf3d9c6206e78e2eccc8990c79026fa0\"" Apr 24 01:05:04.210633 containerd[1574]: time="2026-04-24T01:05:04.210612779Z" level=info msg="connecting to shim 16ab28219ae022bafef2c88e9fa0c132bf3d9c6206e78e2eccc8990c79026fa0" address="unix:///run/containerd/s/056e24c6fd4691d61a5a2179dfb20d114a7ff787026e96f0568a15bba4cb18e5" protocol=ttrpc version=3 Apr 24 01:05:04.215448 containerd[1574]: time="2026-04-24T01:05:04.215381812Z" level=info msg="CreateContainer within sandbox \"b32e2f75edadc5ab5b3d42d4024bcb21afd7e19cc16d1fcccbc90b8b453b27ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d43c0419a8a779dd160252cb19a26ddb6ce3ffce5191bf6585ba2a23515b1fe2\"" Apr 24 01:05:04.217323 containerd[1574]: time="2026-04-24T01:05:04.217307313Z" level=info msg="StartContainer for \"d43c0419a8a779dd160252cb19a26ddb6ce3ffce5191bf6585ba2a23515b1fe2\"" Apr 24 01:05:04.218145 containerd[1574]: time="2026-04-24T01:05:04.218105502Z" level=info msg="connecting to shim d43c0419a8a779dd160252cb19a26ddb6ce3ffce5191bf6585ba2a23515b1fe2" 
address="unix:///run/containerd/s/0a000b274f981075d0e3ea1a778c575b19949ff91695c7ffb3cb6ee8738ba794" protocol=ttrpc version=3 Apr 24 01:05:04.239169 systemd[1]: Started cri-containerd-16ab28219ae022bafef2c88e9fa0c132bf3d9c6206e78e2eccc8990c79026fa0.scope - libcontainer container 16ab28219ae022bafef2c88e9fa0c132bf3d9c6206e78e2eccc8990c79026fa0. Apr 24 01:05:04.240717 systemd[1]: Started cri-containerd-5f68bbcb55b84b84367adda41ce30ad47bc7cc76d203d2b575880a098996fde9.scope - libcontainer container 5f68bbcb55b84b84367adda41ce30ad47bc7cc76d203d2b575880a098996fde9. Apr 24 01:05:04.251287 systemd[1]: Started cri-containerd-d43c0419a8a779dd160252cb19a26ddb6ce3ffce5191bf6585ba2a23515b1fe2.scope - libcontainer container d43c0419a8a779dd160252cb19a26ddb6ce3ffce5191bf6585ba2a23515b1fe2. Apr 24 01:05:04.314894 containerd[1574]: time="2026-04-24T01:05:04.314294007Z" level=info msg="StartContainer for \"d43c0419a8a779dd160252cb19a26ddb6ce3ffce5191bf6585ba2a23515b1fe2\" returns successfully" Apr 24 01:05:04.331920 containerd[1574]: time="2026-04-24T01:05:04.331210605Z" level=info msg="StartContainer for \"5f68bbcb55b84b84367adda41ce30ad47bc7cc76d203d2b575880a098996fde9\" returns successfully" Apr 24 01:05:04.348540 containerd[1574]: time="2026-04-24T01:05:04.348453207Z" level=info msg="StartContainer for \"16ab28219ae022bafef2c88e9fa0c132bf3d9c6206e78e2eccc8990c79026fa0\" returns successfully" Apr 24 01:05:04.411990 kubelet[2389]: E0424 01:05:04.411434 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.5:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 01:05:04.514534 kubelet[2389]: E0424 01:05:04.514434 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.5:6443: connect: connection refused" interval="1.6s" Apr 24 01:05:04.684754 kubelet[2389]: I0424 01:05:04.684520 2389 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 24 01:05:05.191090 kubelet[2389]: E0424 01:05:05.189941 2389 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 01:05:05.191090 kubelet[2389]: E0424 01:05:05.190720 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:05.197928 kubelet[2389]: E0424 01:05:05.195458 2389 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 01:05:05.204191 kubelet[2389]: E0424 01:05:05.204133 2389 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 24 01:05:05.204585 kubelet[2389]: E0424 01:05:05.204407 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:05.206313 kubelet[2389]: E0424 01:05:05.206048 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:05.913999 kubelet[2389]: I0424 01:05:05.913914 2389 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 24 01:05:05.913999 kubelet[2389]: E0424 01:05:05.913945 2389 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node 
\"localhost\" not found" Apr 24 01:05:06.011117 kubelet[2389]: I0424 01:05:06.010939 2389 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 01:05:06.021532 kubelet[2389]: E0424 01:05:06.021450 2389 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 24 01:05:06.021532 kubelet[2389]: I0424 01:05:06.021521 2389 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 01:05:06.023644 kubelet[2389]: E0424 01:05:06.023456 2389 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 24 01:05:06.023644 kubelet[2389]: I0424 01:05:06.023516 2389 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 01:05:06.025314 kubelet[2389]: E0424 01:05:06.025257 2389 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 24 01:05:06.097071 kubelet[2389]: I0424 01:05:06.096471 2389 apiserver.go:52] "Watching apiserver" Apr 24 01:05:06.109871 kubelet[2389]: I0424 01:05:06.109706 2389 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 01:05:06.205290 kubelet[2389]: I0424 01:05:06.205036 2389 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 01:05:06.205392 kubelet[2389]: I0424 01:05:06.205312 2389 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 24 01:05:06.205751 kubelet[2389]: I0424 01:05:06.205118 
2389 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 24 01:05:06.208196 kubelet[2389]: E0424 01:05:06.208106 2389 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 24 01:05:06.208326 kubelet[2389]: E0424 01:05:06.208251 2389 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 24 01:05:06.208432 kubelet[2389]: E0424 01:05:06.208359 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:06.208525 kubelet[2389]: E0424 01:05:06.208480 2389 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 24 01:05:06.208525 kubelet[2389]: E0424 01:05:06.208446 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:06.208734 kubelet[2389]: E0424 01:05:06.208657 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:07.341977 kubelet[2389]: I0424 01:05:07.341612 2389 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 24 01:05:07.460054 kubelet[2389]: E0424 01:05:07.459665 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:08.330419 kubelet[2389]: E0424 01:05:08.330350 2389 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:08.950556 systemd[1]: Reload requested from client PID 2676 ('systemctl') (unit session-9.scope)... Apr 24 01:05:08.950603 systemd[1]: Reloading... Apr 24 01:05:09.046966 zram_generator::config[2719]: No configuration found. Apr 24 01:05:09.250374 systemd[1]: Reloading finished in 299 ms. Apr 24 01:05:09.277492 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 01:05:09.291014 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 01:05:09.291325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 01:05:09.291393 systemd[1]: kubelet.service: Consumed 1.473s CPU time, 127.2M memory peak. Apr 24 01:05:09.294047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 01:05:09.464021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 01:05:09.475369 (kubelet)[2764]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 01:05:09.554207 kubelet[2764]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 01:05:09.554915 kubelet[2764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 01:05:09.554915 kubelet[2764]: I0424 01:05:09.554504 2764 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 01:05:09.566782 kubelet[2764]: I0424 01:05:09.566631 2764 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 24 01:05:09.566782 kubelet[2764]: I0424 01:05:09.566687 2764 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 01:05:09.566782 kubelet[2764]: I0424 01:05:09.566709 2764 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 01:05:09.566782 kubelet[2764]: I0424 01:05:09.566718 2764 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 01:05:09.566998 kubelet[2764]: I0424 01:05:09.566956 2764 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 01:05:09.568238 kubelet[2764]: I0424 01:05:09.568149 2764 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 01:05:09.571348 kubelet[2764]: I0424 01:05:09.570974 2764 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 01:05:09.593899 kubelet[2764]: I0424 01:05:09.593705 2764 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 01:05:09.602612 kubelet[2764]: I0424 01:05:09.602595 2764 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 01:05:09.602959 kubelet[2764]: I0424 01:05:09.602940 2764 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 01:05:09.604255 kubelet[2764]: I0424 01:05:09.603006 2764 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 01:05:09.604418 kubelet[2764]: I0424 01:05:09.604305 2764 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 01:05:09.604418 
kubelet[2764]: I0424 01:05:09.604316 2764 container_manager_linux.go:306] "Creating device plugin manager" Apr 24 01:05:09.604454 kubelet[2764]: I0424 01:05:09.604421 2764 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 01:05:09.605351 kubelet[2764]: I0424 01:05:09.605305 2764 state_mem.go:36] "Initialized new in-memory state store" Apr 24 01:05:09.605531 kubelet[2764]: I0424 01:05:09.605470 2764 kubelet.go:475] "Attempting to sync node with API server" Apr 24 01:05:09.605531 kubelet[2764]: I0424 01:05:09.605534 2764 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 01:05:09.605621 kubelet[2764]: I0424 01:05:09.605564 2764 kubelet.go:387] "Adding apiserver pod source" Apr 24 01:05:09.605621 kubelet[2764]: I0424 01:05:09.605576 2764 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 01:05:09.606904 kubelet[2764]: I0424 01:05:09.606481 2764 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 01:05:09.607253 kubelet[2764]: I0424 01:05:09.607207 2764 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 01:05:09.607292 kubelet[2764]: I0424 01:05:09.607261 2764 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 01:05:09.620409 kubelet[2764]: I0424 01:05:09.620249 2764 server.go:1262] "Started kubelet" Apr 24 01:05:09.620409 kubelet[2764]: I0424 01:05:09.620390 2764 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 01:05:09.620683 kubelet[2764]: I0424 01:05:09.620526 2764 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 01:05:09.620683 kubelet[2764]: I0424 01:05:09.620569 2764 server_v1.go:49] 
"podresources" method="list" useActivePods=true Apr 24 01:05:09.621031 kubelet[2764]: I0424 01:05:09.620983 2764 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 01:05:09.626239 kubelet[2764]: I0424 01:05:09.626148 2764 server.go:310] "Adding debug handlers to kubelet server" Apr 24 01:05:09.630282 kubelet[2764]: I0424 01:05:09.630267 2764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 01:05:09.630463 kubelet[2764]: I0424 01:05:09.630365 2764 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 01:05:09.630710 kubelet[2764]: I0424 01:05:09.630643 2764 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 24 01:05:09.632527 kubelet[2764]: I0424 01:05:09.631476 2764 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 01:05:09.632527 kubelet[2764]: I0424 01:05:09.631628 2764 reconciler.go:29] "Reconciler: start to sync state" Apr 24 01:05:09.633278 kubelet[2764]: I0424 01:05:09.633263 2764 factory.go:223] Registration of the systemd container factory successfully Apr 24 01:05:09.633435 kubelet[2764]: I0424 01:05:09.633417 2764 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 01:05:09.635362 kubelet[2764]: E0424 01:05:09.634409 2764 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 01:05:09.637065 kubelet[2764]: I0424 01:05:09.636555 2764 factory.go:223] Registration of the containerd container factory successfully Apr 24 01:05:09.671248 kubelet[2764]: I0424 01:05:09.671000 2764 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4"
Apr 24 01:05:09.675624 kubelet[2764]: I0424 01:05:09.675606 2764 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 24 01:05:09.675748 kubelet[2764]: I0424 01:05:09.675744 2764 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 24 01:05:09.675795 kubelet[2764]: I0424 01:05:09.675791 2764 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 24 01:05:09.675967 kubelet[2764]: E0424 01:05:09.675955 2764 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 01:05:09.703075 kubelet[2764]: I0424 01:05:09.702914 2764 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 01:05:09.703075 kubelet[2764]: I0424 01:05:09.703010 2764 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 01:05:09.703075 kubelet[2764]: I0424 01:05:09.703028 2764 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 01:05:09.703268 kubelet[2764]: I0424 01:05:09.703116 2764 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 24 01:05:09.703268 kubelet[2764]: I0424 01:05:09.703126 2764 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 24 01:05:09.703268 kubelet[2764]: I0424 01:05:09.703139 2764 policy_none.go:49] "None policy: Start"
Apr 24 01:05:09.703268 kubelet[2764]: I0424 01:05:09.703148 2764 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 24 01:05:09.703268 kubelet[2764]: I0424 01:05:09.703154 2764 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 24 01:05:09.703268 kubelet[2764]: I0424 01:05:09.703269 2764 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 24 01:05:09.703434 kubelet[2764]: I0424 01:05:09.703274 2764 policy_none.go:47] "Start"
Apr 24 01:05:09.714989 kubelet[2764]: E0424 01:05:09.714948 2764 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 01:05:09.715430 kubelet[2764]: I0424 01:05:09.715338 2764 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 01:05:09.715602 kubelet[2764]: I0424 01:05:09.715476 2764 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 01:05:09.716290 kubelet[2764]: I0424 01:05:09.716117 2764 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 01:05:09.719099 kubelet[2764]: E0424 01:05:09.719034 2764 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 01:05:09.778616 kubelet[2764]: I0424 01:05:09.778459 2764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 01:05:09.778616 kubelet[2764]: I0424 01:05:09.778593 2764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 24 01:05:09.778787 kubelet[2764]: I0424 01:05:09.778439 2764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 01:05:09.791904 kubelet[2764]: E0424 01:05:09.790995 2764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 24 01:05:09.826340 kubelet[2764]: I0424 01:05:09.826154 2764 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 24 01:05:09.833432 kubelet[2764]: I0424 01:05:09.833153 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 01:05:09.833432 kubelet[2764]: I0424 01:05:09.833247 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 01:05:09.834111 kubelet[2764]: I0424 01:05:09.833263 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 01:05:09.834283 kubelet[2764]: I0424 01:05:09.834265 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/824fd89300514e351ed3b68d82c665c6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"824fd89300514e351ed3b68d82c665c6\") " pod="kube-system/kube-scheduler-localhost"
Apr 24 01:05:09.834522 kubelet[2764]: I0424 01:05:09.834363 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29096e223a27687eeb13699b492ec48c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"29096e223a27687eeb13699b492ec48c\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 01:05:09.834522 kubelet[2764]: I0424 01:05:09.834431 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29096e223a27687eeb13699b492ec48c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"29096e223a27687eeb13699b492ec48c\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 01:05:09.834522 kubelet[2764]: I0424 01:05:09.834448 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29096e223a27687eeb13699b492ec48c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"29096e223a27687eeb13699b492ec48c\") " pod="kube-system/kube-apiserver-localhost"
Apr 24 01:05:09.834522 kubelet[2764]: I0424 01:05:09.834461 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 01:05:09.834522 kubelet[2764]: I0424 01:05:09.834476 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c6bb8708a026256e82ca4c5631a78b5a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c6bb8708a026256e82ca4c5631a78b5a\") " pod="kube-system/kube-controller-manager-localhost"
Apr 24 01:05:09.840101 kubelet[2764]: I0424 01:05:09.839777 2764 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 24 01:05:09.840101 kubelet[2764]: I0424 01:05:09.839968 2764 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 24 01:05:10.090052 kubelet[2764]: E0424 01:05:10.088423 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:10.090325 kubelet[2764]: E0424 01:05:10.090312 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:10.091807 kubelet[2764]: E0424 01:05:10.091758 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:10.608389 kubelet[2764]: I0424 01:05:10.607274 2764 apiserver.go:52] "Watching apiserver"
Apr 24 01:05:10.632518 kubelet[2764]: I0424 01:05:10.632390 2764 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 24 01:05:10.680410 kubelet[2764]: I0424 01:05:10.679743 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.67972621 podStartE2EDuration="1.67972621s" podCreationTimestamp="2026-04-24 01:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 01:05:10.664515659 +0000 UTC m=+1.183265873" watchObservedRunningTime="2026-04-24 01:05:10.67972621 +0000 UTC m=+1.198476413"
Apr 24 01:05:10.696660 kubelet[2764]: I0424 01:05:10.696488 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.696471743 podStartE2EDuration="3.696471743s" podCreationTimestamp="2026-04-24 01:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 01:05:10.680450895 +0000 UTC m=+1.199201109" watchObservedRunningTime="2026-04-24 01:05:10.696471743 +0000 UTC m=+1.215221954"
Apr 24 01:05:10.696660 kubelet[2764]: I0424 01:05:10.696613 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.696608831 podStartE2EDuration="1.696608831s" podCreationTimestamp="2026-04-24 01:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 01:05:10.696360956 +0000 UTC m=+1.215111170" watchObservedRunningTime="2026-04-24 01:05:10.696608831 +0000 UTC m=+1.215359045"
Apr 24 01:05:10.705224 kubelet[2764]: I0424 01:05:10.705053 2764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 24 01:05:10.705300 kubelet[2764]: I0424 01:05:10.705262 2764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 24 01:05:10.707273 kubelet[2764]: E0424 01:05:10.707108 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:10.719883 kubelet[2764]: E0424 01:05:10.719645 2764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 24 01:05:10.720705 kubelet[2764]: E0424 01:05:10.720380 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:10.723105 kubelet[2764]: E0424 01:05:10.723037 2764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 24 01:05:10.723285 kubelet[2764]: E0424 01:05:10.723235 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:11.709914 kubelet[2764]: E0424 01:05:11.709748 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:11.709914 kubelet[2764]: E0424 01:05:11.709755 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:12.713034 kubelet[2764]: E0424 01:05:12.712928 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:14.129520 kubelet[2764]: I0424 01:05:14.129483 2764 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 24 01:05:14.130756 containerd[1574]: time="2026-04-24T01:05:14.130688206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 24 01:05:14.131385 kubelet[2764]: I0424 01:05:14.131184 2764 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 24 01:05:14.831424 systemd[1]: Created slice kubepods-besteffort-pod5faa2ede_4cdb_42cb_8d9a_4bcb191a72b9.slice - libcontainer container kubepods-besteffort-pod5faa2ede_4cdb_42cb_8d9a_4bcb191a72b9.slice.
Apr 24 01:05:14.868020 kubelet[2764]: I0424 01:05:14.867757 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhzh\" (UniqueName: \"kubernetes.io/projected/5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9-kube-api-access-2dhzh\") pod \"kube-proxy-5n2nc\" (UID: \"5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9\") " pod="kube-system/kube-proxy-5n2nc"
Apr 24 01:05:14.868020 kubelet[2764]: I0424 01:05:14.867975 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9-xtables-lock\") pod \"kube-proxy-5n2nc\" (UID: \"5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9\") " pod="kube-system/kube-proxy-5n2nc"
Apr 24 01:05:14.868020 kubelet[2764]: I0424 01:05:14.867998 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9-kube-proxy\") pod \"kube-proxy-5n2nc\" (UID: \"5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9\") " pod="kube-system/kube-proxy-5n2nc"
Apr 24 01:05:14.868020 kubelet[2764]: I0424 01:05:14.868011 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9-lib-modules\") pod \"kube-proxy-5n2nc\" (UID: \"5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9\") " pod="kube-system/kube-proxy-5n2nc"
Apr 24 01:05:14.974761 kubelet[2764]: E0424 01:05:14.974736 2764 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 24 01:05:14.974761 kubelet[2764]: E0424 01:05:14.974755 2764 projected.go:196] Error preparing data for projected volume kube-api-access-2dhzh for pod kube-system/kube-proxy-5n2nc: configmap "kube-root-ca.crt" not found
Apr 24 01:05:14.975017 kubelet[2764]: E0424 01:05:14.974947 2764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9-kube-api-access-2dhzh podName:5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9 nodeName:}" failed. No retries permitted until 2026-04-24 01:05:15.474929377 +0000 UTC m=+5.993679580 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2dhzh" (UniqueName: "kubernetes.io/projected/5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9-kube-api-access-2dhzh") pod "kube-proxy-5n2nc" (UID: "5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9") : configmap "kube-root-ca.crt" not found
Apr 24 01:05:15.353110 systemd[1]: Created slice kubepods-besteffort-podeb275a47_c9b9_4758_85b8_64c9d2acc1b6.slice - libcontainer container kubepods-besteffort-podeb275a47_c9b9_4758_85b8_64c9d2acc1b6.slice.
Apr 24 01:05:15.372600 kubelet[2764]: I0424 01:05:15.372245 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcr7p\" (UniqueName: \"kubernetes.io/projected/eb275a47-c9b9-4758-85b8-64c9d2acc1b6-kube-api-access-tcr7p\") pod \"tigera-operator-6fb8d665dd-6v5pw\" (UID: \"eb275a47-c9b9-4758-85b8-64c9d2acc1b6\") " pod="tigera-operator/tigera-operator-6fb8d665dd-6v5pw"
Apr 24 01:05:15.372600 kubelet[2764]: I0424 01:05:15.372527 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb275a47-c9b9-4758-85b8-64c9d2acc1b6-var-lib-calico\") pod \"tigera-operator-6fb8d665dd-6v5pw\" (UID: \"eb275a47-c9b9-4758-85b8-64c9d2acc1b6\") " pod="tigera-operator/tigera-operator-6fb8d665dd-6v5pw"
Apr 24 01:05:15.662101 containerd[1574]: time="2026-04-24T01:05:15.662032378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6fb8d665dd-6v5pw,Uid:eb275a47-c9b9-4758-85b8-64c9d2acc1b6,Namespace:tigera-operator,Attempt:0,}"
Apr 24 01:05:15.692464 containerd[1574]: time="2026-04-24T01:05:15.692285432Z" level=info msg="connecting to shim bca9b8bb150ca0961114c411596426b2b0a95d4e2c948213bb195ff95494afc9" address="unix:///run/containerd/s/cdb0699b904aa9d34732d2f68646b34e121fffbde3e47c60a8fb2fbe4e979cdd" namespace=k8s.io protocol=ttrpc version=3
Apr 24 01:05:15.719059 systemd[1]: Started cri-containerd-bca9b8bb150ca0961114c411596426b2b0a95d4e2c948213bb195ff95494afc9.scope - libcontainer container bca9b8bb150ca0961114c411596426b2b0a95d4e2c948213bb195ff95494afc9.
Apr 24 01:05:15.729005 update_engine[1568]: I20260424 01:05:15.728924 1568 update_attempter.cc:509] Updating boot flags...
Apr 24 01:05:15.744987 kubelet[2764]: E0424 01:05:15.744782 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:15.750639 containerd[1574]: time="2026-04-24T01:05:15.750587251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5n2nc,Uid:5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9,Namespace:kube-system,Attempt:0,}"
Apr 24 01:05:15.842516 containerd[1574]: time="2026-04-24T01:05:15.842401575Z" level=info msg="connecting to shim dfd7ee7190cbe1683a495f91829b693321b41155a797b3f138b2195b353cb898" address="unix:///run/containerd/s/a10feefe8881854ead30b9d38e065296ead8c58fb3f9eab2cc0c7e7d40d6d4cf" namespace=k8s.io protocol=ttrpc version=3
Apr 24 01:05:15.849526 containerd[1574]: time="2026-04-24T01:05:15.849430565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6fb8d665dd-6v5pw,Uid:eb275a47-c9b9-4758-85b8-64c9d2acc1b6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bca9b8bb150ca0961114c411596426b2b0a95d4e2c948213bb195ff95494afc9\""
Apr 24 01:05:15.858381 containerd[1574]: time="2026-04-24T01:05:15.858115610Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\""
Apr 24 01:05:15.899068 systemd[1]: Started cri-containerd-dfd7ee7190cbe1683a495f91829b693321b41155a797b3f138b2195b353cb898.scope - libcontainer container dfd7ee7190cbe1683a495f91829b693321b41155a797b3f138b2195b353cb898.
Apr 24 01:05:15.942706 containerd[1574]: time="2026-04-24T01:05:15.942548198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5n2nc,Uid:5faa2ede-4cdb-42cb-8d9a-4bcb191a72b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfd7ee7190cbe1683a495f91829b693321b41155a797b3f138b2195b353cb898\""
Apr 24 01:05:15.943596 kubelet[2764]: E0424 01:05:15.943541 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:15.952536 containerd[1574]: time="2026-04-24T01:05:15.952449554Z" level=info msg="CreateContainer within sandbox \"dfd7ee7190cbe1683a495f91829b693321b41155a797b3f138b2195b353cb898\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 24 01:05:15.964157 containerd[1574]: time="2026-04-24T01:05:15.964031654Z" level=info msg="Container feae728cc8f591af8b65d9caf4a9089ea124c3e3156282e322cb032325d704dc: CDI devices from CRI Config.CDIDevices: []"
Apr 24 01:05:15.972031 containerd[1574]: time="2026-04-24T01:05:15.971955143Z" level=info msg="CreateContainer within sandbox \"dfd7ee7190cbe1683a495f91829b693321b41155a797b3f138b2195b353cb898\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"feae728cc8f591af8b65d9caf4a9089ea124c3e3156282e322cb032325d704dc\""
Apr 24 01:05:15.973102 containerd[1574]: time="2026-04-24T01:05:15.972806510Z" level=info msg="StartContainer for \"feae728cc8f591af8b65d9caf4a9089ea124c3e3156282e322cb032325d704dc\""
Apr 24 01:05:15.974550 containerd[1574]: time="2026-04-24T01:05:15.974363506Z" level=info msg="connecting to shim feae728cc8f591af8b65d9caf4a9089ea124c3e3156282e322cb032325d704dc" address="unix:///run/containerd/s/a10feefe8881854ead30b9d38e065296ead8c58fb3f9eab2cc0c7e7d40d6d4cf" protocol=ttrpc version=3
Apr 24 01:05:15.998203 systemd[1]: Started cri-containerd-feae728cc8f591af8b65d9caf4a9089ea124c3e3156282e322cb032325d704dc.scope - libcontainer container feae728cc8f591af8b65d9caf4a9089ea124c3e3156282e322cb032325d704dc.
Apr 24 01:05:16.033893 kubelet[2764]: E0424 01:05:16.033690 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:16.081225 containerd[1574]: time="2026-04-24T01:05:16.081052489Z" level=info msg="StartContainer for \"feae728cc8f591af8b65d9caf4a9089ea124c3e3156282e322cb032325d704dc\" returns successfully"
Apr 24 01:05:16.727181 kubelet[2764]: E0424 01:05:16.726996 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:16.730065 kubelet[2764]: E0424 01:05:16.730012 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:16.753951 kubelet[2764]: I0424 01:05:16.753911 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5n2nc" podStartSLOduration=2.7538076350000003 podStartE2EDuration="2.753807635s" podCreationTimestamp="2026-04-24 01:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 01:05:16.739811642 +0000 UTC m=+7.258561859" watchObservedRunningTime="2026-04-24 01:05:16.753807635 +0000 UTC m=+7.272557849"
Apr 24 01:05:17.149990 kubelet[2764]: E0424 01:05:17.149233 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:17.417532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount319147303.mount: Deactivated successfully.
Apr 24 01:05:17.732425 kubelet[2764]: E0424 01:05:17.731918 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:18.396279 containerd[1574]: time="2026-04-24T01:05:18.396213906Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 01:05:18.396977 containerd[1574]: time="2026-04-24T01:05:18.396858152Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=41007543"
Apr 24 01:05:18.397966 containerd[1574]: time="2026-04-24T01:05:18.397886244Z" level=info msg="ImageCreate event name:\"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 01:05:18.399807 containerd[1574]: time="2026-04-24T01:05:18.399741337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 01:05:18.400393 containerd[1574]: time="2026-04-24T01:05:18.400328021Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"41003538\" in 2.542103757s"
Apr 24 01:05:18.400393 containerd[1574]: time="2026-04-24T01:05:18.400384782Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\""
Apr 24 01:05:18.405494 containerd[1574]: time="2026-04-24T01:05:18.405443228Z" level=info msg="CreateContainer within sandbox \"bca9b8bb150ca0961114c411596426b2b0a95d4e2c948213bb195ff95494afc9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 24 01:05:18.412511 containerd[1574]: time="2026-04-24T01:05:18.412490954Z" level=info msg="Container 54250c6142d15a49bd474d47071cfaa8afe2d96bae4c976251fef37082643955: CDI devices from CRI Config.CDIDevices: []"
Apr 24 01:05:18.420294 containerd[1574]: time="2026-04-24T01:05:18.420230963Z" level=info msg="CreateContainer within sandbox \"bca9b8bb150ca0961114c411596426b2b0a95d4e2c948213bb195ff95494afc9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"54250c6142d15a49bd474d47071cfaa8afe2d96bae4c976251fef37082643955\""
Apr 24 01:05:18.421459 containerd[1574]: time="2026-04-24T01:05:18.421444057Z" level=info msg="StartContainer for \"54250c6142d15a49bd474d47071cfaa8afe2d96bae4c976251fef37082643955\""
Apr 24 01:05:18.422625 containerd[1574]: time="2026-04-24T01:05:18.422541594Z" level=info msg="connecting to shim 54250c6142d15a49bd474d47071cfaa8afe2d96bae4c976251fef37082643955" address="unix:///run/containerd/s/cdb0699b904aa9d34732d2f68646b34e121fffbde3e47c60a8fb2fbe4e979cdd" protocol=ttrpc version=3
Apr 24 01:05:18.458077 systemd[1]: Started cri-containerd-54250c6142d15a49bd474d47071cfaa8afe2d96bae4c976251fef37082643955.scope - libcontainer container 54250c6142d15a49bd474d47071cfaa8afe2d96bae4c976251fef37082643955.
Apr 24 01:05:18.494389 containerd[1574]: time="2026-04-24T01:05:18.494217387Z" level=info msg="StartContainer for \"54250c6142d15a49bd474d47071cfaa8afe2d96bae4c976251fef37082643955\" returns successfully"
Apr 24 01:05:18.738470 kubelet[2764]: E0424 01:05:18.737511 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:21.525139 kubelet[2764]: E0424 01:05:21.524922 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:21.548900 kubelet[2764]: I0424 01:05:21.547983 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6fb8d665dd-6v5pw" podStartSLOduration=4.000337051 podStartE2EDuration="6.547971669s" podCreationTimestamp="2026-04-24 01:05:15 +0000 UTC" firstStartedPulling="2026-04-24 01:05:15.853731583 +0000 UTC m=+6.372481787" lastFinishedPulling="2026-04-24 01:05:18.401366203 +0000 UTC m=+8.920116405" observedRunningTime="2026-04-24 01:05:18.754449118 +0000 UTC m=+9.273199369" watchObservedRunningTime="2026-04-24 01:05:21.547971669 +0000 UTC m=+12.066722047"
Apr 24 01:05:24.231337 sudo[1802]: pam_unix(sudo:session): session closed for user root
Apr 24 01:05:24.233010 sshd[1801]: Connection closed by 10.0.0.1 port 60610
Apr 24 01:05:24.234957 sshd-session[1798]: pam_unix(sshd:session): session closed for user core
Apr 24 01:05:24.238713 systemd[1]: sshd@8-10.0.0.5:22-10.0.0.1:60610.service: Deactivated successfully.
Apr 24 01:05:24.243556 systemd[1]: session-9.scope: Deactivated successfully.
Apr 24 01:05:24.244945 systemd[1]: session-9.scope: Consumed 6.145s CPU time, 224M memory peak.
Apr 24 01:05:24.247191 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit.
Apr 24 01:05:24.248724 systemd-logind[1560]: Removed session 9.
Apr 24 01:05:26.240309 systemd[1]: Created slice kubepods-besteffort-pode3a21dbf_c5de_4b7a_9664_27757e67640c.slice - libcontainer container kubepods-besteffort-pode3a21dbf_c5de_4b7a_9664_27757e67640c.slice.
Apr 24 01:05:26.305521 systemd[1]: Created slice kubepods-besteffort-podb529c0a8_b0f7_4de8_bcae_faaa22f11e17.slice - libcontainer container kubepods-besteffort-podb529c0a8_b0f7_4de8_bcae_faaa22f11e17.slice.
Apr 24 01:05:26.355681 kubelet[2764]: I0424 01:05:26.355587 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3a21dbf-c5de-4b7a-9664-27757e67640c-tigera-ca-bundle\") pod \"calico-typha-6bf944ff7f-d7vrl\" (UID: \"e3a21dbf-c5de-4b7a-9664-27757e67640c\") " pod="calico-system/calico-typha-6bf944ff7f-d7vrl"
Apr 24 01:05:26.356068 kubelet[2764]: I0424 01:05:26.355725 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e3a21dbf-c5de-4b7a-9664-27757e67640c-typha-certs\") pod \"calico-typha-6bf944ff7f-d7vrl\" (UID: \"e3a21dbf-c5de-4b7a-9664-27757e67640c\") " pod="calico-system/calico-typha-6bf944ff7f-d7vrl"
Apr 24 01:05:26.356068 kubelet[2764]: I0424 01:05:26.355814 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8ns\" (UniqueName: \"kubernetes.io/projected/e3a21dbf-c5de-4b7a-9664-27757e67640c-kube-api-access-ch8ns\") pod \"calico-typha-6bf944ff7f-d7vrl\" (UID: \"e3a21dbf-c5de-4b7a-9664-27757e67640c\") " pod="calico-system/calico-typha-6bf944ff7f-d7vrl"
Apr 24 01:05:26.403717 kubelet[2764]: E0424 01:05:26.403638 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415"
Apr 24 01:05:26.456606 kubelet[2764]: I0424 01:05:26.456387 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-tigera-ca-bundle\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.456606 kubelet[2764]: I0424 01:05:26.456442 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-xtables-lock\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.456606 kubelet[2764]: I0424 01:05:26.456464 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-lib-modules\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.456606 kubelet[2764]: I0424 01:05:26.456477 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-var-run-calico\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.456606 kubelet[2764]: I0424 01:05:26.456488 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-flexvol-driver-host\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457086 kubelet[2764]: I0424 01:05:26.456521 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-nodeproc\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457086 kubelet[2764]: I0424 01:05:26.456542 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-sys-fs\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457086 kubelet[2764]: I0424 01:05:26.456592 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-policysync\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457086 kubelet[2764]: I0424 01:05:26.456602 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-cni-bin-dir\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457086 kubelet[2764]: I0424 01:05:26.456642 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-bpffs\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457086 kubelet[2764]: I0424 01:05:26.456652 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-cni-log-dir\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457372 kubelet[2764]: I0424 01:05:26.456664 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56tf\" (UniqueName: \"kubernetes.io/projected/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-kube-api-access-k56tf\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457372 kubelet[2764]: I0424 01:05:26.456676 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-var-lib-calico\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457372 kubelet[2764]: I0424 01:05:26.456688 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-cni-net-dir\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.457372 kubelet[2764]: I0424 01:05:26.456701 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b529c0a8-b0f7-4de8-bcae-faaa22f11e17-node-certs\") pod \"calico-node-d64nk\" (UID: \"b529c0a8-b0f7-4de8-bcae-faaa22f11e17\") " pod="calico-system/calico-node-d64nk"
Apr 24 01:05:26.550041 kubelet[2764]: E0424 01:05:26.549756 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 24 01:05:26.551499 containerd[1574]: time="2026-04-24T01:05:26.551401374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf944ff7f-d7vrl,Uid:e3a21dbf-c5de-4b7a-9664-27757e67640c,Namespace:calico-system,Attempt:0,}"
Apr 24 01:05:26.557582 kubelet[2764]: I0424 01:05:26.557477 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/604703a5-be99-4439-bd36-95d174df6415-socket-dir\") pod \"csi-node-driver-vf49l\" (UID: \"604703a5-be99-4439-bd36-95d174df6415\") " pod="calico-system/csi-node-driver-vf49l"
Apr 24 01:05:26.559064 kubelet[2764]: I0424 01:05:26.558692 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/604703a5-be99-4439-bd36-95d174df6415-registration-dir\") pod \"csi-node-driver-vf49l\" (UID: \"604703a5-be99-4439-bd36-95d174df6415\") " pod="calico-system/csi-node-driver-vf49l"
Apr 24 01:05:26.559064 kubelet[2764]: I0424 01:05:26.558764 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/604703a5-be99-4439-bd36-95d174df6415-varrun\") pod \"csi-node-driver-vf49l\" (UID: \"604703a5-be99-4439-bd36-95d174df6415\") " pod="calico-system/csi-node-driver-vf49l"
Apr 24 01:05:26.559064 kubelet[2764]: I0424 01:05:26.558779 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/604703a5-be99-4439-bd36-95d174df6415-kubelet-dir\") pod \"csi-node-driver-vf49l\" (UID: \"604703a5-be99-4439-bd36-95d174df6415\") " pod="calico-system/csi-node-driver-vf49l"
Apr 24 01:05:26.559064 kubelet[2764]: I0424 01:05:26.558791 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfcz8\" (UniqueName: \"kubernetes.io/projected/604703a5-be99-4439-bd36-95d174df6415-kube-api-access-kfcz8\") pod \"csi-node-driver-vf49l\" (UID: \"604703a5-be99-4439-bd36-95d174df6415\") " pod="calico-system/csi-node-driver-vf49l"
Apr 24 01:05:26.560484 kubelet[2764]: E0424 01:05:26.560363 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 01:05:26.560484 kubelet[2764]: W0424 01:05:26.560416 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 01:05:26.560484 kubelet[2764]: E0424 01:05:26.560435 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 01:05:26.563451 kubelet[2764]: E0424 01:05:26.560665 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 01:05:26.563451 kubelet[2764]: W0424 01:05:26.560671 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 01:05:26.563451 kubelet[2764]: E0424 01:05:26.560677 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 24 01:05:26.563451 kubelet[2764]: E0424 01:05:26.561726 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.563451 kubelet[2764]: W0424 01:05:26.561736 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.563451 kubelet[2764]: E0424 01:05:26.561745 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.563451 kubelet[2764]: E0424 01:05:26.562046 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.563451 kubelet[2764]: W0424 01:05:26.562052 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.563451 kubelet[2764]: E0424 01:05:26.562059 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.563451 kubelet[2764]: E0424 01:05:26.562934 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.563792 kubelet[2764]: W0424 01:05:26.562942 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.563792 kubelet[2764]: E0424 01:05:26.562951 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.564339 kubelet[2764]: E0424 01:05:26.563519 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.564339 kubelet[2764]: W0424 01:05:26.564158 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.564339 kubelet[2764]: E0424 01:05:26.564167 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.566627 kubelet[2764]: E0424 01:05:26.566317 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.566627 kubelet[2764]: W0424 01:05:26.566515 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.569031 kubelet[2764]: E0424 01:05:26.568934 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.570063 kubelet[2764]: E0424 01:05:26.569980 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.570063 kubelet[2764]: W0424 01:05:26.570027 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.570063 kubelet[2764]: E0424 01:05:26.570036 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.571580 kubelet[2764]: E0424 01:05:26.571467 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.571580 kubelet[2764]: W0424 01:05:26.571477 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.571580 kubelet[2764]: E0424 01:05:26.571554 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.572550 kubelet[2764]: E0424 01:05:26.572348 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.573929 kubelet[2764]: W0424 01:05:26.572617 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.573929 kubelet[2764]: E0424 01:05:26.572627 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.573929 kubelet[2764]: E0424 01:05:26.573201 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.573929 kubelet[2764]: W0424 01:05:26.573208 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.573929 kubelet[2764]: E0424 01:05:26.573216 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.576724 kubelet[2764]: E0424 01:05:26.576629 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.576724 kubelet[2764]: W0424 01:05:26.576697 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.576724 kubelet[2764]: E0424 01:05:26.576709 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.620678 containerd[1574]: time="2026-04-24T01:05:26.620394224Z" level=info msg="connecting to shim 36ff336acf634ff7946a0f6c3bdf50abb39e08ed8281eccceee657332894526d" address="unix:///run/containerd/s/ceba08babab650c3b58f336b44318c4173ff164c3b1629fe0ce2ded652a85608" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:05:26.623726 containerd[1574]: time="2026-04-24T01:05:26.623655845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d64nk,Uid:b529c0a8-b0f7-4de8-bcae-faaa22f11e17,Namespace:calico-system,Attempt:0,}" Apr 24 01:05:26.662278 kubelet[2764]: E0424 01:05:26.662208 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.662278 kubelet[2764]: W0424 01:05:26.662270 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.662278 kubelet[2764]: E0424 01:05:26.662287 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.662763 kubelet[2764]: E0424 01:05:26.662730 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.662763 kubelet[2764]: W0424 01:05:26.662742 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.662763 kubelet[2764]: E0424 01:05:26.662754 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.663526 kubelet[2764]: E0424 01:05:26.663500 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.663526 kubelet[2764]: W0424 01:05:26.663509 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.663526 kubelet[2764]: E0424 01:05:26.663516 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.664324 kubelet[2764]: E0424 01:05:26.664193 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.664324 kubelet[2764]: W0424 01:05:26.664208 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.664324 kubelet[2764]: E0424 01:05:26.664218 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.666475 kubelet[2764]: E0424 01:05:26.666328 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.667213 kubelet[2764]: W0424 01:05:26.667023 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.667213 kubelet[2764]: E0424 01:05:26.667039 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.668202 kubelet[2764]: E0424 01:05:26.668149 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.668202 kubelet[2764]: W0424 01:05:26.668196 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.668265 kubelet[2764]: E0424 01:05:26.668206 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.669091 kubelet[2764]: E0424 01:05:26.668790 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.669091 kubelet[2764]: W0424 01:05:26.668945 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.669091 kubelet[2764]: E0424 01:05:26.668953 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.669445 kubelet[2764]: E0424 01:05:26.669382 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.669445 kubelet[2764]: W0424 01:05:26.669396 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.669445 kubelet[2764]: E0424 01:05:26.669405 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.669980 kubelet[2764]: E0424 01:05:26.669769 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.670580 kubelet[2764]: W0424 01:05:26.670514 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.670580 kubelet[2764]: E0424 01:05:26.670575 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.671668 kubelet[2764]: E0424 01:05:26.671568 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.671668 kubelet[2764]: W0424 01:05:26.671618 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.671668 kubelet[2764]: E0424 01:05:26.671629 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.671749 containerd[1574]: time="2026-04-24T01:05:26.671630761Z" level=info msg="connecting to shim 8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d" address="unix:///run/containerd/s/f1cb5bb3cbdd0929956959c41d111233e330767ca4ddd967e447e94317b31dc1" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:05:26.672484 kubelet[2764]: E0424 01:05:26.672395 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.672484 kubelet[2764]: W0424 01:05:26.672447 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.672484 kubelet[2764]: E0424 01:05:26.672456 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.673194 kubelet[2764]: E0424 01:05:26.673036 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.673194 kubelet[2764]: W0424 01:05:26.673081 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.673194 kubelet[2764]: E0424 01:05:26.673140 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.674309 kubelet[2764]: E0424 01:05:26.674247 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.674309 kubelet[2764]: W0424 01:05:26.674307 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.674373 kubelet[2764]: E0424 01:05:26.674316 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.676038 kubelet[2764]: E0424 01:05:26.675205 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.676038 kubelet[2764]: W0424 01:05:26.675219 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.676038 kubelet[2764]: E0424 01:05:26.675227 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.676038 kubelet[2764]: E0424 01:05:26.676010 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.676038 kubelet[2764]: W0424 01:05:26.676017 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.676038 kubelet[2764]: E0424 01:05:26.676024 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.676972 kubelet[2764]: E0424 01:05:26.676897 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.676972 kubelet[2764]: W0424 01:05:26.676941 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.676972 kubelet[2764]: E0424 01:05:26.676949 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.679188 kubelet[2764]: E0424 01:05:26.679066 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.679188 kubelet[2764]: W0424 01:05:26.679154 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.679188 kubelet[2764]: E0424 01:05:26.679166 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.680176 kubelet[2764]: E0424 01:05:26.680053 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.680218 kubelet[2764]: W0424 01:05:26.680187 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.680218 kubelet[2764]: E0424 01:05:26.680196 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.681042 kubelet[2764]: E0424 01:05:26.680986 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.681073 kubelet[2764]: W0424 01:05:26.681044 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.681073 kubelet[2764]: E0424 01:05:26.681053 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.685944 kubelet[2764]: E0424 01:05:26.684568 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.685944 kubelet[2764]: W0424 01:05:26.685154 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.685944 kubelet[2764]: E0424 01:05:26.685322 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.688357 kubelet[2764]: E0424 01:05:26.688060 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.691343 kubelet[2764]: W0424 01:05:26.691255 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.691343 kubelet[2764]: E0424 01:05:26.691271 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.693957 kubelet[2764]: E0424 01:05:26.693731 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.693957 kubelet[2764]: W0424 01:05:26.693783 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.693957 kubelet[2764]: E0424 01:05:26.693792 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.694563 kubelet[2764]: E0424 01:05:26.694489 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.694563 kubelet[2764]: W0424 01:05:26.694534 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.694563 kubelet[2764]: E0424 01:05:26.694543 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.695549 kubelet[2764]: E0424 01:05:26.695460 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.695549 kubelet[2764]: W0424 01:05:26.695507 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.695549 kubelet[2764]: E0424 01:05:26.695516 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 01:05:26.696688 kubelet[2764]: E0424 01:05:26.696585 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.696688 kubelet[2764]: W0424 01:05:26.696631 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.696688 kubelet[2764]: E0424 01:05:26.696639 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.699193 kubelet[2764]: E0424 01:05:26.699065 2764 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 01:05:26.699193 kubelet[2764]: W0424 01:05:26.699172 2764 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 01:05:26.699193 kubelet[2764]: E0424 01:05:26.699193 2764 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 01:05:26.709307 systemd[1]: Started cri-containerd-36ff336acf634ff7946a0f6c3bdf50abb39e08ed8281eccceee657332894526d.scope - libcontainer container 36ff336acf634ff7946a0f6c3bdf50abb39e08ed8281eccceee657332894526d. Apr 24 01:05:26.716233 systemd[1]: Started cri-containerd-8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d.scope - libcontainer container 8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d. 
Apr 24 01:05:26.786887 containerd[1574]: time="2026-04-24T01:05:26.786737203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d64nk,Uid:b529c0a8-b0f7-4de8-bcae-faaa22f11e17,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d\"" Apr 24 01:05:26.790777 containerd[1574]: time="2026-04-24T01:05:26.790462896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 24 01:05:26.827035 containerd[1574]: time="2026-04-24T01:05:26.826762296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bf944ff7f-d7vrl,Uid:e3a21dbf-c5de-4b7a-9664-27757e67640c,Namespace:calico-system,Attempt:0,} returns sandbox id \"36ff336acf634ff7946a0f6c3bdf50abb39e08ed8281eccceee657332894526d\"" Apr 24 01:05:26.827682 kubelet[2764]: E0424 01:05:26.827618 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:27.677600 kubelet[2764]: E0424 01:05:27.677468 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:28.558030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173629468.mount: Deactivated successfully. 
Apr 24 01:05:28.669182 containerd[1574]: time="2026-04-24T01:05:28.668928794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:28.669613 containerd[1574]: time="2026-04-24T01:05:28.669544241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=7563544" Apr 24 01:05:28.670696 containerd[1574]: time="2026-04-24T01:05:28.670625030Z" level=info msg="ImageCreate event name:\"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:28.673477 containerd[1574]: time="2026-04-24T01:05:28.673400816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:28.674079 containerd[1574]: time="2026-04-24T01:05:28.674019041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"7563366\" in 1.883527286s" Apr 24 01:05:28.674170 containerd[1574]: time="2026-04-24T01:05:28.674085555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\"" Apr 24 01:05:28.676515 containerd[1574]: time="2026-04-24T01:05:28.676449061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 24 01:05:28.681035 containerd[1574]: time="2026-04-24T01:05:28.680688529Z" level=info msg="CreateContainer within 
sandbox \"8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 01:05:28.694028 containerd[1574]: time="2026-04-24T01:05:28.693983104Z" level=info msg="Container 69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:28.696683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135247352.mount: Deactivated successfully. Apr 24 01:05:28.704183 containerd[1574]: time="2026-04-24T01:05:28.704019029Z" level=info msg="CreateContainer within sandbox \"8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0\"" Apr 24 01:05:28.705248 containerd[1574]: time="2026-04-24T01:05:28.705161855Z" level=info msg="StartContainer for \"69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0\"" Apr 24 01:05:28.708701 containerd[1574]: time="2026-04-24T01:05:28.708462639Z" level=info msg="connecting to shim 69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0" address="unix:///run/containerd/s/f1cb5bb3cbdd0929956959c41d111233e330767ca4ddd967e447e94317b31dc1" protocol=ttrpc version=3 Apr 24 01:05:28.736442 systemd[1]: Started cri-containerd-69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0.scope - libcontainer container 69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0. Apr 24 01:05:28.819740 containerd[1574]: time="2026-04-24T01:05:28.819567960Z" level=info msg="StartContainer for \"69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0\" returns successfully" Apr 24 01:05:28.832685 systemd[1]: cri-containerd-69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0.scope: Deactivated successfully. 
Apr 24 01:05:28.840340 containerd[1574]: time="2026-04-24T01:05:28.840258997Z" level=info msg="received container exit event container_id:\"69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0\" id:\"69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0\" pid:3350 exited_at:{seconds:1776992728 nanos:839614394}" Apr 24 01:05:29.493917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69708bc6a2985fa6be91fd02f3a9e21760cc470acd8ec441b313703b797991b0-rootfs.mount: Deactivated successfully. Apr 24 01:05:29.677531 kubelet[2764]: E0424 01:05:29.677355 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:31.678153 kubelet[2764]: E0424 01:05:31.677955 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:33.550023 containerd[1574]: time="2026-04-24T01:05:33.549951188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:33.550950 containerd[1574]: time="2026-04-24T01:05:33.550897998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=32851576" Apr 24 01:05:33.552318 containerd[1574]: time="2026-04-24T01:05:33.552144057Z" level=info msg="ImageCreate event name:\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:33.554340 containerd[1574]: 
time="2026-04-24T01:05:33.554073809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:33.554537 containerd[1574]: time="2026-04-24T01:05:33.554452579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"35812993\" in 4.877942758s" Apr 24 01:05:33.554537 containerd[1574]: time="2026-04-24T01:05:33.554474303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\"" Apr 24 01:05:33.555989 containerd[1574]: time="2026-04-24T01:05:33.555724498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\"" Apr 24 01:05:33.574714 containerd[1574]: time="2026-04-24T01:05:33.574606545Z" level=info msg="CreateContainer within sandbox \"36ff336acf634ff7946a0f6c3bdf50abb39e08ed8281eccceee657332894526d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 01:05:33.588535 containerd[1574]: time="2026-04-24T01:05:33.588501294Z" level=info msg="Container 073daff3b35430e9fb4b065781e5848b2430cba2abbed205b551b661b7364834: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:33.601212 containerd[1574]: time="2026-04-24T01:05:33.601021390Z" level=info msg="CreateContainer within sandbox \"36ff336acf634ff7946a0f6c3bdf50abb39e08ed8281eccceee657332894526d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"073daff3b35430e9fb4b065781e5848b2430cba2abbed205b551b661b7364834\"" Apr 24 01:05:33.601750 containerd[1574]: time="2026-04-24T01:05:33.601697618Z" level=info 
msg="StartContainer for \"073daff3b35430e9fb4b065781e5848b2430cba2abbed205b551b661b7364834\"" Apr 24 01:05:33.603298 containerd[1574]: time="2026-04-24T01:05:33.603202849Z" level=info msg="connecting to shim 073daff3b35430e9fb4b065781e5848b2430cba2abbed205b551b661b7364834" address="unix:///run/containerd/s/ceba08babab650c3b58f336b44318c4173ff164c3b1629fe0ce2ded652a85608" protocol=ttrpc version=3 Apr 24 01:05:33.651261 systemd[1]: Started cri-containerd-073daff3b35430e9fb4b065781e5848b2430cba2abbed205b551b661b7364834.scope - libcontainer container 073daff3b35430e9fb4b065781e5848b2430cba2abbed205b551b661b7364834. Apr 24 01:05:33.678957 kubelet[2764]: E0424 01:05:33.677785 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:33.734784 containerd[1574]: time="2026-04-24T01:05:33.734399795Z" level=info msg="StartContainer for \"073daff3b35430e9fb4b065781e5848b2430cba2abbed205b551b661b7364834\" returns successfully" Apr 24 01:05:33.810719 kubelet[2764]: E0424 01:05:33.810455 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:33.835225 kubelet[2764]: I0424 01:05:33.834458 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bf944ff7f-d7vrl" podStartSLOduration=1.110253161 podStartE2EDuration="7.834443482s" podCreationTimestamp="2026-04-24 01:05:26 +0000 UTC" firstStartedPulling="2026-04-24 01:05:26.831274736 +0000 UTC m=+17.350024962" lastFinishedPulling="2026-04-24 01:05:33.55546508 +0000 UTC m=+24.074215283" observedRunningTime="2026-04-24 01:05:33.832700428 +0000 UTC m=+24.351450633" 
watchObservedRunningTime="2026-04-24 01:05:33.834443482 +0000 UTC m=+24.353193760" Apr 24 01:05:34.815560 kubelet[2764]: I0424 01:05:34.815448 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 01:05:34.817041 kubelet[2764]: E0424 01:05:34.816924 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:35.678259 kubelet[2764]: E0424 01:05:35.678039 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:37.681020 kubelet[2764]: E0424 01:05:37.680797 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:39.678676 kubelet[2764]: E0424 01:05:39.678494 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:41.681317 kubelet[2764]: E0424 01:05:41.679172 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 
01:05:43.489754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115470036.mount: Deactivated successfully. Apr 24 01:05:43.678540 kubelet[2764]: E0424 01:05:43.678271 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:43.739350 containerd[1574]: time="2026-04-24T01:05:43.739243454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=159374404" Apr 24 01:05:43.743793 containerd[1574]: time="2026-04-24T01:05:43.743192579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:43.765735 containerd[1574]: time="2026-04-24T01:05:43.765579908Z" level=info msg="ImageCreate event name:\"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:43.790052 containerd[1574]: time="2026-04-24T01:05:43.789699683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:43.790568 containerd[1574]: time="2026-04-24T01:05:43.790483937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"159374266\" in 10.234735731s" Apr 24 01:05:43.790568 containerd[1574]: time="2026-04-24T01:05:43.790561357Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\"" Apr 24 01:05:43.800423 containerd[1574]: time="2026-04-24T01:05:43.800339723Z" level=info msg="CreateContainer within sandbox \"8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 01:05:43.812458 containerd[1574]: time="2026-04-24T01:05:43.812355252Z" level=info msg="Container 94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:43.842694 containerd[1574]: time="2026-04-24T01:05:43.842632309Z" level=info msg="CreateContainer within sandbox \"8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d\"" Apr 24 01:05:43.843262 containerd[1574]: time="2026-04-24T01:05:43.843230730Z" level=info msg="StartContainer for \"94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d\"" Apr 24 01:05:43.845268 containerd[1574]: time="2026-04-24T01:05:43.845228948Z" level=info msg="connecting to shim 94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d" address="unix:///run/containerd/s/f1cb5bb3cbdd0929956959c41d111233e330767ca4ddd967e447e94317b31dc1" protocol=ttrpc version=3 Apr 24 01:05:43.875178 systemd[1]: Started cri-containerd-94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d.scope - libcontainer container 94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d. 
Apr 24 01:05:43.964092 containerd[1574]: time="2026-04-24T01:05:43.964058608Z" level=info msg="StartContainer for \"94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d\" returns successfully" Apr 24 01:05:44.074658 systemd[1]: cri-containerd-94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d.scope: Deactivated successfully. Apr 24 01:05:44.092210 containerd[1574]: time="2026-04-24T01:05:44.092033297Z" level=info msg="received container exit event container_id:\"94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d\" id:\"94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d\" pid:3451 exited_at:{seconds:1776992744 nanos:76187095}" Apr 24 01:05:44.489443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94652f6128ccd945970ccb907443e82244828bec5425180b70206a4b81ffe35d-rootfs.mount: Deactivated successfully. Apr 24 01:05:44.871756 containerd[1574]: time="2026-04-24T01:05:44.871669838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\"" Apr 24 01:05:45.677912 kubelet[2764]: E0424 01:05:45.677646 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:47.677364 kubelet[2764]: E0424 01:05:47.677297 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:49.607014 containerd[1574]: time="2026-04-24T01:05:49.606411536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 24 01:05:49.607772 containerd[1574]: time="2026-04-24T01:05:49.607645537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=67713351" Apr 24 01:05:49.620729 containerd[1574]: time="2026-04-24T01:05:49.620667454Z" level=info msg="ImageCreate event name:\"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:49.623346 containerd[1574]: time="2026-04-24T01:05:49.623238856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:49.623760 containerd[1574]: time="2026-04-24T01:05:49.623687401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"70674776\" in 4.751939537s" Apr 24 01:05:49.623788 containerd[1574]: time="2026-04-24T01:05:49.623771085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\"" Apr 24 01:05:49.631647 containerd[1574]: time="2026-04-24T01:05:49.631550451Z" level=info msg="CreateContainer within sandbox \"8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 01:05:49.640467 containerd[1574]: time="2026-04-24T01:05:49.640383909Z" level=info msg="Container f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:49.655274 containerd[1574]: time="2026-04-24T01:05:49.655196998Z" level=info 
msg="CreateContainer within sandbox \"8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822\"" Apr 24 01:05:49.656934 containerd[1574]: time="2026-04-24T01:05:49.656421724Z" level=info msg="StartContainer for \"f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822\"" Apr 24 01:05:49.658501 containerd[1574]: time="2026-04-24T01:05:49.658477172Z" level=info msg="connecting to shim f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822" address="unix:///run/containerd/s/f1cb5bb3cbdd0929956959c41d111233e330767ca4ddd967e447e94317b31dc1" protocol=ttrpc version=3 Apr 24 01:05:49.678711 kubelet[2764]: E0424 01:05:49.677678 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vf49l" podUID="604703a5-be99-4439-bd36-95d174df6415" Apr 24 01:05:49.697527 systemd[1]: Started cri-containerd-f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822.scope - libcontainer container f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822. Apr 24 01:05:49.809409 containerd[1574]: time="2026-04-24T01:05:49.809320707Z" level=info msg="StartContainer for \"f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822\" returns successfully" Apr 24 01:05:50.449257 systemd[1]: cri-containerd-f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822.scope: Deactivated successfully. Apr 24 01:05:50.449981 systemd[1]: cri-containerd-f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822.scope: Consumed 810ms CPU time, 176.3M memory peak, 4.6M read from disk, 173.7M written to disk. 
Apr 24 01:05:50.455549 containerd[1574]: time="2026-04-24T01:05:50.455460135Z" level=info msg="received container exit event container_id:\"f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822\" id:\"f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822\" pid:3514 exited_at:{seconds:1776992750 nanos:454895924}" Apr 24 01:05:50.480910 kubelet[2764]: I0424 01:05:50.480761 2764 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 24 01:05:50.514779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1d9231676c8527e30305caf869f87edf025199f28a930385eef623969104822-rootfs.mount: Deactivated successfully. Apr 24 01:05:50.593253 systemd[1]: Created slice kubepods-besteffort-podcf53fe2b_64ea_44b8_853b_7d3bdd84d27f.slice - libcontainer container kubepods-besteffort-podcf53fe2b_64ea_44b8_853b_7d3bdd84d27f.slice. Apr 24 01:05:50.606116 systemd[1]: Created slice kubepods-burstable-pod78165a30_d5a7_424b_9f0e_710651ab74af.slice - libcontainer container kubepods-burstable-pod78165a30_d5a7_424b_9f0e_710651ab74af.slice. Apr 24 01:05:50.620469 systemd[1]: Created slice kubepods-besteffort-pod61836c8f_b551_4c04_821a_b13ef514c3fa.slice - libcontainer container kubepods-besteffort-pod61836c8f_b551_4c04_821a_b13ef514c3fa.slice. Apr 24 01:05:50.627786 systemd[1]: Created slice kubepods-besteffort-pod2b814dfb_7916_43d7_abc7_0edf90c3adb2.slice - libcontainer container kubepods-besteffort-pod2b814dfb_7916_43d7_abc7_0edf90c3adb2.slice. Apr 24 01:05:50.634000 systemd[1]: Created slice kubepods-besteffort-pod91009b28_24e1_4731_ac81_f373176fe1b8.slice - libcontainer container kubepods-besteffort-pod91009b28_24e1_4731_ac81_f373176fe1b8.slice. Apr 24 01:05:50.643325 systemd[1]: Created slice kubepods-burstable-pod0e6906e9_2e64_46c2_a033_cbbcdce0502c.slice - libcontainer container kubepods-burstable-pod0e6906e9_2e64_46c2_a033_cbbcdce0502c.slice. 
Apr 24 01:05:50.657648 systemd[1]: Created slice kubepods-besteffort-poddf5c6779_93cd_40af_af1c_5229570d975a.slice - libcontainer container kubepods-besteffort-poddf5c6779_93cd_40af_af1c_5229570d975a.slice. Apr 24 01:05:50.697401 kubelet[2764]: I0424 01:05:50.697292 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e6906e9-2e64-46c2-a033-cbbcdce0502c-config-volume\") pod \"coredns-66bc5c9577-78nwd\" (UID: \"0e6906e9-2e64-46c2-a033-cbbcdce0502c\") " pod="kube-system/coredns-66bc5c9577-78nwd" Apr 24 01:05:50.697401 kubelet[2764]: I0424 01:05:50.697367 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61836c8f-b551-4c04-821a-b13ef514c3fa-whisker-backend-key-pair\") pod \"whisker-8d498c46f-qr8cx\" (UID: \"61836c8f-b551-4c04-821a-b13ef514c3fa\") " pod="calico-system/whisker-8d498c46f-qr8cx" Apr 24 01:05:50.697401 kubelet[2764]: I0424 01:05:50.697399 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-528hg\" (UniqueName: \"kubernetes.io/projected/61836c8f-b551-4c04-821a-b13ef514c3fa-kube-api-access-528hg\") pod \"whisker-8d498c46f-qr8cx\" (UID: \"61836c8f-b551-4c04-821a-b13ef514c3fa\") " pod="calico-system/whisker-8d498c46f-qr8cx" Apr 24 01:05:50.697401 kubelet[2764]: I0424 01:05:50.697412 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p86lt\" (UniqueName: \"kubernetes.io/projected/0e6906e9-2e64-46c2-a033-cbbcdce0502c-kube-api-access-p86lt\") pod \"coredns-66bc5c9577-78nwd\" (UID: \"0e6906e9-2e64-46c2-a033-cbbcdce0502c\") " pod="kube-system/coredns-66bc5c9577-78nwd" Apr 24 01:05:50.697401 kubelet[2764]: I0424 01:05:50.697424 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61836c8f-b551-4c04-821a-b13ef514c3fa-whisker-ca-bundle\") pod \"whisker-8d498c46f-qr8cx\" (UID: \"61836c8f-b551-4c04-821a-b13ef514c3fa\") " pod="calico-system/whisker-8d498c46f-qr8cx" Apr 24 01:05:50.701022 kubelet[2764]: I0424 01:05:50.697436 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdd2f\" (UniqueName: \"kubernetes.io/projected/cf53fe2b-64ea-44b8-853b-7d3bdd84d27f-kube-api-access-kdd2f\") pod \"calico-apiserver-6d5795bdfc-frbkv\" (UID: \"cf53fe2b-64ea-44b8-853b-7d3bdd84d27f\") " pod="calico-system/calico-apiserver-6d5795bdfc-frbkv" Apr 24 01:05:50.701022 kubelet[2764]: I0424 01:05:50.697451 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91009b28-24e1-4731-ac81-f373176fe1b8-calico-apiserver-certs\") pod \"calico-apiserver-6d5795bdfc-tvpd8\" (UID: \"91009b28-24e1-4731-ac81-f373176fe1b8\") " pod="calico-system/calico-apiserver-6d5795bdfc-tvpd8" Apr 24 01:05:50.701022 kubelet[2764]: I0424 01:05:50.697462 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df5c6779-93cd-40af-af1c-5229570d975a-goldmane-ca-bundle\") pod \"goldmane-6b4b7f4496-vf9tk\" (UID: \"df5c6779-93cd-40af-af1c-5229570d975a\") " pod="calico-system/goldmane-6b4b7f4496-vf9tk" Apr 24 01:05:50.701022 kubelet[2764]: I0424 01:05:50.697473 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/df5c6779-93cd-40af-af1c-5229570d975a-goldmane-key-pair\") pod \"goldmane-6b4b7f4496-vf9tk\" (UID: \"df5c6779-93cd-40af-af1c-5229570d975a\") " pod="calico-system/goldmane-6b4b7f4496-vf9tk" Apr 24 01:05:50.701022 kubelet[2764]: I0424 01:05:50.697777 2764 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf53fe2b-64ea-44b8-853b-7d3bdd84d27f-calico-apiserver-certs\") pod \"calico-apiserver-6d5795bdfc-frbkv\" (UID: \"cf53fe2b-64ea-44b8-853b-7d3bdd84d27f\") " pod="calico-system/calico-apiserver-6d5795bdfc-frbkv" Apr 24 01:05:50.701119 kubelet[2764]: I0424 01:05:50.697804 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwzd\" (UniqueName: \"kubernetes.io/projected/91009b28-24e1-4731-ac81-f373176fe1b8-kube-api-access-rmwzd\") pod \"calico-apiserver-6d5795bdfc-tvpd8\" (UID: \"91009b28-24e1-4731-ac81-f373176fe1b8\") " pod="calico-system/calico-apiserver-6d5795bdfc-tvpd8" Apr 24 01:05:50.701119 kubelet[2764]: I0424 01:05:50.697902 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b814dfb-7916-43d7-abc7-0edf90c3adb2-tigera-ca-bundle\") pod \"calico-kube-controllers-566c9b6b64-c8nbk\" (UID: \"2b814dfb-7916-43d7-abc7-0edf90c3adb2\") " pod="calico-system/calico-kube-controllers-566c9b6b64-c8nbk" Apr 24 01:05:50.701119 kubelet[2764]: I0424 01:05:50.697921 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzwvt\" (UniqueName: \"kubernetes.io/projected/78165a30-d5a7-424b-9f0e-710651ab74af-kube-api-access-zzwvt\") pod \"coredns-66bc5c9577-fwbts\" (UID: \"78165a30-d5a7-424b-9f0e-710651ab74af\") " pod="kube-system/coredns-66bc5c9577-fwbts" Apr 24 01:05:50.701119 kubelet[2764]: I0424 01:05:50.697935 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/61836c8f-b551-4c04-821a-b13ef514c3fa-nginx-config\") pod \"whisker-8d498c46f-qr8cx\" (UID: \"61836c8f-b551-4c04-821a-b13ef514c3fa\") 
" pod="calico-system/whisker-8d498c46f-qr8cx" Apr 24 01:05:50.701119 kubelet[2764]: I0424 01:05:50.697951 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trtvl\" (UniqueName: \"kubernetes.io/projected/2b814dfb-7916-43d7-abc7-0edf90c3adb2-kube-api-access-trtvl\") pod \"calico-kube-controllers-566c9b6b64-c8nbk\" (UID: \"2b814dfb-7916-43d7-abc7-0edf90c3adb2\") " pod="calico-system/calico-kube-controllers-566c9b6b64-c8nbk" Apr 24 01:05:50.701252 kubelet[2764]: I0424 01:05:50.697964 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvgxd\" (UniqueName: \"kubernetes.io/projected/df5c6779-93cd-40af-af1c-5229570d975a-kube-api-access-vvgxd\") pod \"goldmane-6b4b7f4496-vf9tk\" (UID: \"df5c6779-93cd-40af-af1c-5229570d975a\") " pod="calico-system/goldmane-6b4b7f4496-vf9tk" Apr 24 01:05:50.701252 kubelet[2764]: I0424 01:05:50.697976 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78165a30-d5a7-424b-9f0e-710651ab74af-config-volume\") pod \"coredns-66bc5c9577-fwbts\" (UID: \"78165a30-d5a7-424b-9f0e-710651ab74af\") " pod="kube-system/coredns-66bc5c9577-fwbts" Apr 24 01:05:50.701252 kubelet[2764]: I0424 01:05:50.697988 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df5c6779-93cd-40af-af1c-5229570d975a-config\") pod \"goldmane-6b4b7f4496-vf9tk\" (UID: \"df5c6779-93cd-40af-af1c-5229570d975a\") " pod="calico-system/goldmane-6b4b7f4496-vf9tk" Apr 24 01:05:50.908040 containerd[1574]: time="2026-04-24T01:05:50.907948839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5795bdfc-frbkv,Uid:cf53fe2b-64ea-44b8-853b-7d3bdd84d27f,Namespace:calico-system,Attempt:0,}" Apr 24 01:05:50.917993 kubelet[2764]: E0424 
01:05:50.917936 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:50.942049 containerd[1574]: time="2026-04-24T01:05:50.919668165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fwbts,Uid:78165a30-d5a7-424b-9f0e-710651ab74af,Namespace:kube-system,Attempt:0,}" Apr 24 01:05:50.943008 containerd[1574]: time="2026-04-24T01:05:50.942602612Z" level=info msg="CreateContainer within sandbox \"8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 01:05:50.945962 containerd[1574]: time="2026-04-24T01:05:50.929717205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8d498c46f-qr8cx,Uid:61836c8f-b551-4c04-821a-b13ef514c3fa,Namespace:calico-system,Attempt:0,}" Apr 24 01:05:50.946400 containerd[1574]: time="2026-04-24T01:05:50.946241185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5795bdfc-tvpd8,Uid:91009b28-24e1-4731-ac81-f373176fe1b8,Namespace:calico-system,Attempt:0,}" Apr 24 01:05:50.947195 containerd[1574]: time="2026-04-24T01:05:50.946936778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566c9b6b64-c8nbk,Uid:2b814dfb-7916-43d7-abc7-0edf90c3adb2,Namespace:calico-system,Attempt:0,}" Apr 24 01:05:50.952548 kubelet[2764]: E0424 01:05:50.952334 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:50.953744 containerd[1574]: time="2026-04-24T01:05:50.953726565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-78nwd,Uid:0e6906e9-2e64-46c2-a033-cbbcdce0502c,Namespace:kube-system,Attempt:0,}" Apr 24 01:05:50.973107 containerd[1574]: time="2026-04-24T01:05:50.973078601Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-vf9tk,Uid:df5c6779-93cd-40af-af1c-5229570d975a,Namespace:calico-system,Attempt:0,}" Apr 24 01:05:51.011488 containerd[1574]: time="2026-04-24T01:05:51.011340432Z" level=info msg="Container 8c8b6d591fc23f232ecb6144999b38439acf7ea0c49d966577a5fd8c50ec82b4: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:51.031916 containerd[1574]: time="2026-04-24T01:05:51.029706606Z" level=info msg="CreateContainer within sandbox \"8e37e1bb78cbc551ca58f9eee317f690e4d0d9f7388e757efb52259e92196b0d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8c8b6d591fc23f232ecb6144999b38439acf7ea0c49d966577a5fd8c50ec82b4\"" Apr 24 01:05:51.034247 containerd[1574]: time="2026-04-24T01:05:51.034133107Z" level=info msg="StartContainer for \"8c8b6d591fc23f232ecb6144999b38439acf7ea0c49d966577a5fd8c50ec82b4\"" Apr 24 01:05:51.053267 containerd[1574]: time="2026-04-24T01:05:51.053009961Z" level=info msg="connecting to shim 8c8b6d591fc23f232ecb6144999b38439acf7ea0c49d966577a5fd8c50ec82b4" address="unix:///run/containerd/s/f1cb5bb3cbdd0929956959c41d111233e330767ca4ddd967e447e94317b31dc1" protocol=ttrpc version=3 Apr 24 01:05:51.087390 systemd[1]: Started cri-containerd-8c8b6d591fc23f232ecb6144999b38439acf7ea0c49d966577a5fd8c50ec82b4.scope - libcontainer container 8c8b6d591fc23f232ecb6144999b38439acf7ea0c49d966577a5fd8c50ec82b4. 
Apr 24 01:05:51.210705 containerd[1574]: time="2026-04-24T01:05:51.210347723Z" level=error msg="Failed to destroy network for sandbox \"21c07ac664a30f5e3bf86c061cc6162399725f8cebbee59eccdd522c432def78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.220320 containerd[1574]: time="2026-04-24T01:05:51.220283010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fwbts,Uid:78165a30-d5a7-424b-9f0e-710651ab74af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c07ac664a30f5e3bf86c061cc6162399725f8cebbee59eccdd522c432def78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.231196 containerd[1574]: time="2026-04-24T01:05:51.229358031Z" level=error msg="Failed to destroy network for sandbox \"4d3948e4f4ba782054f57f6a5461b3ee69119f02752765bc7f02256bf10c0082\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.231561 containerd[1574]: time="2026-04-24T01:05:51.231335016Z" level=error msg="Failed to destroy network for sandbox \"2898d0949708df4836a6ddf0eb4ebbed568720ba9c8d2232e6fe455d59cbd55e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.241430 containerd[1574]: time="2026-04-24T01:05:51.241291239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5795bdfc-frbkv,Uid:cf53fe2b-64ea-44b8-853b-7d3bdd84d27f,Namespace:calico-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3948e4f4ba782054f57f6a5461b3ee69119f02752765bc7f02256bf10c0082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.245757 containerd[1574]: time="2026-04-24T01:05:51.245713654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-78nwd,Uid:0e6906e9-2e64-46c2-a033-cbbcdce0502c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2898d0949708df4836a6ddf0eb4ebbed568720ba9c8d2232e6fe455d59cbd55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.246790 containerd[1574]: time="2026-04-24T01:05:51.246455731Z" level=error msg="Failed to destroy network for sandbox \"ee212533313391135a7c2d73f29dfa05db35b9235193b9e86133b383f37802c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.249340 containerd[1574]: time="2026-04-24T01:05:51.249320719Z" level=error msg="Failed to destroy network for sandbox \"764a840bf3ddd9d965806477eba8de4f20e7c446fab692df6dd9e899a15b0ae0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.251915 containerd[1574]: time="2026-04-24T01:05:51.251218720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8d498c46f-qr8cx,Uid:61836c8f-b551-4c04-821a-b13ef514c3fa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"ee212533313391135a7c2d73f29dfa05db35b9235193b9e86133b383f37802c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.251915 containerd[1574]: time="2026-04-24T01:05:51.251404951Z" level=info msg="StartContainer for \"8c8b6d591fc23f232ecb6144999b38439acf7ea0c49d966577a5fd8c50ec82b4\" returns successfully" Apr 24 01:05:51.253985 containerd[1574]: time="2026-04-24T01:05:51.253960968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5795bdfc-tvpd8,Uid:91009b28-24e1-4731-ac81-f373176fe1b8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"764a840bf3ddd9d965806477eba8de4f20e7c446fab692df6dd9e899a15b0ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.255299 kubelet[2764]: E0424 01:05:51.255244 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c07ac664a30f5e3bf86c061cc6162399725f8cebbee59eccdd522c432def78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.255889 kubelet[2764]: E0424 01:05:51.255566 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"764a840bf3ddd9d965806477eba8de4f20e7c446fab692df6dd9e899a15b0ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.256031 kubelet[2764]: E0424 
01:05:51.255588 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2898d0949708df4836a6ddf0eb4ebbed568720ba9c8d2232e6fe455d59cbd55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.256031 kubelet[2764]: E0424 01:05:51.255608 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3948e4f4ba782054f57f6a5461b3ee69119f02752765bc7f02256bf10c0082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.256031 kubelet[2764]: E0424 01:05:51.255726 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee212533313391135a7c2d73f29dfa05db35b9235193b9e86133b383f37802c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.256373 kubelet[2764]: E0424 01:05:51.256359 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"764a840bf3ddd9d965806477eba8de4f20e7c446fab692df6dd9e899a15b0ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d5795bdfc-tvpd8" Apr 24 01:05:51.256570 kubelet[2764]: E0424 01:05:51.256420 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"764a840bf3ddd9d965806477eba8de4f20e7c446fab692df6dd9e899a15b0ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d5795bdfc-tvpd8" Apr 24 01:05:51.256792 kubelet[2764]: E0424 01:05:51.256760 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2898d0949708df4836a6ddf0eb4ebbed568720ba9c8d2232e6fe455d59cbd55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-78nwd" Apr 24 01:05:51.257275 kubelet[2764]: E0424 01:05:51.257067 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2898d0949708df4836a6ddf0eb4ebbed568720ba9c8d2232e6fe455d59cbd55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-78nwd" Apr 24 01:05:51.257275 kubelet[2764]: E0424 01:05:51.257135 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-78nwd_kube-system(0e6906e9-2e64-46c2-a033-cbbcdce0502c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-78nwd_kube-system(0e6906e9-2e64-46c2-a033-cbbcdce0502c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2898d0949708df4836a6ddf0eb4ebbed568720ba9c8d2232e6fe455d59cbd55e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-78nwd" podUID="0e6906e9-2e64-46c2-a033-cbbcdce0502c" Apr 24 01:05:51.257275 kubelet[2764]: E0424 01:05:51.256915 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3948e4f4ba782054f57f6a5461b3ee69119f02752765bc7f02256bf10c0082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d5795bdfc-frbkv" Apr 24 01:05:51.257482 kubelet[2764]: E0424 01:05:51.257210 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d3948e4f4ba782054f57f6a5461b3ee69119f02752765bc7f02256bf10c0082\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6d5795bdfc-frbkv" Apr 24 01:05:51.257482 kubelet[2764]: E0424 01:05:51.257229 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5795bdfc-frbkv_calico-system(cf53fe2b-64ea-44b8-853b-7d3bdd84d27f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5795bdfc-frbkv_calico-system(cf53fe2b-64ea-44b8-853b-7d3bdd84d27f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d3948e4f4ba782054f57f6a5461b3ee69119f02752765bc7f02256bf10c0082\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d5795bdfc-frbkv" podUID="cf53fe2b-64ea-44b8-853b-7d3bdd84d27f" Apr 24 01:05:51.257482 kubelet[2764]: E0424 01:05:51.257051 2764 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee212533313391135a7c2d73f29dfa05db35b9235193b9e86133b383f37802c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8d498c46f-qr8cx" Apr 24 01:05:51.257619 kubelet[2764]: E0424 01:05:51.257244 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee212533313391135a7c2d73f29dfa05db35b9235193b9e86133b383f37802c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8d498c46f-qr8cx" Apr 24 01:05:51.257619 kubelet[2764]: E0424 01:05:51.257261 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8d498c46f-qr8cx_calico-system(61836c8f-b551-4c04-821a-b13ef514c3fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8d498c46f-qr8cx_calico-system(61836c8f-b551-4c04-821a-b13ef514c3fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee212533313391135a7c2d73f29dfa05db35b9235193b9e86133b383f37802c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8d498c46f-qr8cx" podUID="61836c8f-b551-4c04-821a-b13ef514c3fa" Apr 24 01:05:51.257926 kubelet[2764]: E0424 01:05:51.257754 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c07ac664a30f5e3bf86c061cc6162399725f8cebbee59eccdd522c432def78\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fwbts" Apr 24 01:05:51.257926 kubelet[2764]: E0424 01:05:51.257769 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c07ac664a30f5e3bf86c061cc6162399725f8cebbee59eccdd522c432def78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-fwbts" Apr 24 01:05:51.257926 kubelet[2764]: E0424 01:05:51.257792 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-fwbts_kube-system(78165a30-d5a7-424b-9f0e-710651ab74af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-fwbts_kube-system(78165a30-d5a7-424b-9f0e-710651ab74af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21c07ac664a30f5e3bf86c061cc6162399725f8cebbee59eccdd522c432def78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-fwbts" podUID="78165a30-d5a7-424b-9f0e-710651ab74af" Apr 24 01:05:51.258080 kubelet[2764]: E0424 01:05:51.257908 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5795bdfc-tvpd8_calico-system(91009b28-24e1-4731-ac81-f373176fe1b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5795bdfc-tvpd8_calico-system(91009b28-24e1-4731-ac81-f373176fe1b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"764a840bf3ddd9d965806477eba8de4f20e7c446fab692df6dd9e899a15b0ae0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6d5795bdfc-tvpd8" podUID="91009b28-24e1-4731-ac81-f373176fe1b8" Apr 24 01:05:51.263959 containerd[1574]: time="2026-04-24T01:05:51.263630574Z" level=error msg="Failed to destroy network for sandbox \"d2da9d61e34827357aa21fb720567b998e9d968054b31fef15c6c78819c53ebb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.271445 containerd[1574]: time="2026-04-24T01:05:51.271381176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-vf9tk,Uid:df5c6779-93cd-40af-af1c-5229570d975a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2da9d61e34827357aa21fb720567b998e9d968054b31fef15c6c78819c53ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.275801 containerd[1574]: time="2026-04-24T01:05:51.275643843Z" level=error msg="Failed to destroy network for sandbox \"7d672738b5d96ef4e161abe21d8963f6de0db64f83d85f7dc4dcd29405b6485a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.276521 kubelet[2764]: E0424 01:05:51.275928 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2da9d61e34827357aa21fb720567b998e9d968054b31fef15c6c78819c53ebb\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.276521 kubelet[2764]: E0424 01:05:51.275998 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2da9d61e34827357aa21fb720567b998e9d968054b31fef15c6c78819c53ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-6b4b7f4496-vf9tk" Apr 24 01:05:51.276521 kubelet[2764]: E0424 01:05:51.276019 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2da9d61e34827357aa21fb720567b998e9d968054b31fef15c6c78819c53ebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-6b4b7f4496-vf9tk" Apr 24 01:05:51.276601 kubelet[2764]: E0424 01:05:51.276099 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-6b4b7f4496-vf9tk_calico-system(df5c6779-93cd-40af-af1c-5229570d975a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-6b4b7f4496-vf9tk_calico-system(df5c6779-93cd-40af-af1c-5229570d975a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2da9d61e34827357aa21fb720567b998e9d968054b31fef15c6c78819c53ebb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-6b4b7f4496-vf9tk" podUID="df5c6779-93cd-40af-af1c-5229570d975a" Apr 24 01:05:51.279121 
containerd[1574]: time="2026-04-24T01:05:51.279057846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566c9b6b64-c8nbk,Uid:2b814dfb-7916-43d7-abc7-0edf90c3adb2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d672738b5d96ef4e161abe21d8963f6de0db64f83d85f7dc4dcd29405b6485a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.280367 kubelet[2764]: E0424 01:05:51.280071 2764 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d672738b5d96ef4e161abe21d8963f6de0db64f83d85f7dc4dcd29405b6485a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 01:05:51.282970 kubelet[2764]: E0424 01:05:51.280450 2764 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d672738b5d96ef4e161abe21d8963f6de0db64f83d85f7dc4dcd29405b6485a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-566c9b6b64-c8nbk" Apr 24 01:05:51.282970 kubelet[2764]: E0424 01:05:51.282680 2764 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d672738b5d96ef4e161abe21d8963f6de0db64f83d85f7dc4dcd29405b6485a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-566c9b6b64-c8nbk" Apr 24 01:05:51.284338 kubelet[2764]: E0424 01:05:51.283323 2764 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-566c9b6b64-c8nbk_calico-system(2b814dfb-7916-43d7-abc7-0edf90c3adb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-566c9b6b64-c8nbk_calico-system(2b814dfb-7916-43d7-abc7-0edf90c3adb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d672738b5d96ef4e161abe21d8963f6de0db64f83d85f7dc4dcd29405b6485a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-566c9b6b64-c8nbk" podUID="2b814dfb-7916-43d7-abc7-0edf90c3adb2" Apr 24 01:05:51.634016 kernel: hrtimer: interrupt took 5619067 ns Apr 24 01:05:51.695275 systemd[1]: Created slice kubepods-besteffort-pod604703a5_be99_4439_bd36_95d174df6415.slice - libcontainer container kubepods-besteffort-pod604703a5_be99_4439_bd36_95d174df6415.slice. 
Apr 24 01:05:51.716221 containerd[1574]: time="2026-04-24T01:05:51.715740065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vf49l,Uid:604703a5-be99-4439-bd36-95d174df6415,Namespace:calico-system,Attempt:0,}" Apr 24 01:05:52.029664 kubelet[2764]: I0424 01:05:52.028956 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d64nk" podStartSLOduration=3.193202855 podStartE2EDuration="26.028940426s" podCreationTimestamp="2026-04-24 01:05:26 +0000 UTC" firstStartedPulling="2026-04-24 01:05:26.789242814 +0000 UTC m=+17.307993021" lastFinishedPulling="2026-04-24 01:05:49.624980384 +0000 UTC m=+40.143730592" observedRunningTime="2026-04-24 01:05:52.028481493 +0000 UTC m=+42.547231707" watchObservedRunningTime="2026-04-24 01:05:52.028940426 +0000 UTC m=+42.547690636" Apr 24 01:05:52.036540 kubelet[2764]: I0424 01:05:52.036519 2764 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-528hg\" (UniqueName: \"kubernetes.io/projected/61836c8f-b551-4c04-821a-b13ef514c3fa-kube-api-access-528hg\") pod \"61836c8f-b551-4c04-821a-b13ef514c3fa\" (UID: \"61836c8f-b551-4c04-821a-b13ef514c3fa\") " Apr 24 01:05:52.037962 kubelet[2764]: I0424 01:05:52.037192 2764 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/61836c8f-b551-4c04-821a-b13ef514c3fa-nginx-config\") pod \"61836c8f-b551-4c04-821a-b13ef514c3fa\" (UID: \"61836c8f-b551-4c04-821a-b13ef514c3fa\") " Apr 24 01:05:52.037962 kubelet[2764]: I0424 01:05:52.037226 2764 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61836c8f-b551-4c04-821a-b13ef514c3fa-whisker-backend-key-pair\") pod \"61836c8f-b551-4c04-821a-b13ef514c3fa\" (UID: \"61836c8f-b551-4c04-821a-b13ef514c3fa\") " Apr 24 01:05:52.037962 kubelet[2764]: I0424 01:05:52.037246 2764 
reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61836c8f-b551-4c04-821a-b13ef514c3fa-whisker-ca-bundle\") pod \"61836c8f-b551-4c04-821a-b13ef514c3fa\" (UID: \"61836c8f-b551-4c04-821a-b13ef514c3fa\") " Apr 24 01:05:52.037962 kubelet[2764]: I0424 01:05:52.037504 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61836c8f-b551-4c04-821a-b13ef514c3fa-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "61836c8f-b551-4c04-821a-b13ef514c3fa" (UID: "61836c8f-b551-4c04-821a-b13ef514c3fa"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 01:05:52.037962 kubelet[2764]: I0424 01:05:52.037681 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61836c8f-b551-4c04-821a-b13ef514c3fa-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "61836c8f-b551-4c04-821a-b13ef514c3fa" (UID: "61836c8f-b551-4c04-821a-b13ef514c3fa"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 01:05:52.046686 systemd[1]: var-lib-kubelet-pods-61836c8f\x2db551\x2d4c04\x2d821a\x2db13ef514c3fa-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 01:05:52.048553 kubelet[2764]: I0424 01:05:52.047742 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61836c8f-b551-4c04-821a-b13ef514c3fa-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "61836c8f-b551-4c04-821a-b13ef514c3fa" (UID: "61836c8f-b551-4c04-821a-b13ef514c3fa"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 01:05:52.049789 kubelet[2764]: I0424 01:05:52.049747 2764 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61836c8f-b551-4c04-821a-b13ef514c3fa-kube-api-access-528hg" (OuterVolumeSpecName: "kube-api-access-528hg") pod "61836c8f-b551-4c04-821a-b13ef514c3fa" (UID: "61836c8f-b551-4c04-821a-b13ef514c3fa"). InnerVolumeSpecName "kube-api-access-528hg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 01:05:52.052187 systemd[1]: var-lib-kubelet-pods-61836c8f\x2db551\x2d4c04\x2d821a\x2db13ef514c3fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d528hg.mount: Deactivated successfully. Apr 24 01:05:52.138335 kubelet[2764]: I0424 01:05:52.138247 2764 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/61836c8f-b551-4c04-821a-b13ef514c3fa-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 24 01:05:52.138335 kubelet[2764]: I0424 01:05:52.138318 2764 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61836c8f-b551-4c04-821a-b13ef514c3fa-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 24 01:05:52.138335 kubelet[2764]: I0424 01:05:52.138326 2764 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61836c8f-b551-4c04-821a-b13ef514c3fa-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 24 01:05:52.138335 kubelet[2764]: I0424 01:05:52.138333 2764 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-528hg\" (UniqueName: \"kubernetes.io/projected/61836c8f-b551-4c04-821a-b13ef514c3fa-kube-api-access-528hg\") on node \"localhost\" DevicePath \"\"" Apr 24 01:05:52.944635 systemd[1]: Removed slice kubepods-besteffort-pod61836c8f_b551_4c04_821a_b13ef514c3fa.slice - libcontainer container 
kubepods-besteffort-pod61836c8f_b551_4c04_821a_b13ef514c3fa.slice. Apr 24 01:05:53.074446 systemd[1]: Created slice kubepods-besteffort-pod20ef41f2_0a6f_48da_b829_19d849e8aa7b.slice - libcontainer container kubepods-besteffort-pod20ef41f2_0a6f_48da_b829_19d849e8aa7b.slice. Apr 24 01:05:53.150916 kubelet[2764]: I0424 01:05:53.149809 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/20ef41f2-0a6f-48da-b829-19d849e8aa7b-whisker-backend-key-pair\") pod \"whisker-cf6965dc7-9g8mf\" (UID: \"20ef41f2-0a6f-48da-b829-19d849e8aa7b\") " pod="calico-system/whisker-cf6965dc7-9g8mf" Apr 24 01:05:53.150916 kubelet[2764]: I0424 01:05:53.150641 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/20ef41f2-0a6f-48da-b829-19d849e8aa7b-nginx-config\") pod \"whisker-cf6965dc7-9g8mf\" (UID: \"20ef41f2-0a6f-48da-b829-19d849e8aa7b\") " pod="calico-system/whisker-cf6965dc7-9g8mf" Apr 24 01:05:53.152006 kubelet[2764]: I0424 01:05:53.151438 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20ef41f2-0a6f-48da-b829-19d849e8aa7b-whisker-ca-bundle\") pod \"whisker-cf6965dc7-9g8mf\" (UID: \"20ef41f2-0a6f-48da-b829-19d849e8aa7b\") " pod="calico-system/whisker-cf6965dc7-9g8mf" Apr 24 01:05:53.152006 kubelet[2764]: I0424 01:05:53.151939 2764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9nhv\" (UniqueName: \"kubernetes.io/projected/20ef41f2-0a6f-48da-b829-19d849e8aa7b-kube-api-access-s9nhv\") pod \"whisker-cf6965dc7-9g8mf\" (UID: \"20ef41f2-0a6f-48da-b829-19d849e8aa7b\") " pod="calico-system/whisker-cf6965dc7-9g8mf" Apr 24 01:05:53.164398 systemd-networkd[1478]: cali0270620207b: Link UP Apr 24 01:05:53.165379 
systemd-networkd[1478]: cali0270620207b: Gained carrier Apr 24 01:05:53.211962 containerd[1574]: 2026-04-24 01:05:51.792 [ERROR][3822] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 01:05:53.211962 containerd[1574]: 2026-04-24 01:05:51.824 [INFO][3822] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vf49l-eth0 csi-node-driver- calico-system 604703a5-be99-4439-bd36-95d174df6415 756 0 2026-04-24 01:05:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:95f96f7df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vf49l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0270620207b [] [] }} ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Namespace="calico-system" Pod="csi-node-driver-vf49l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf49l-" Apr 24 01:05:53.211962 containerd[1574]: 2026-04-24 01:05:51.824 [INFO][3822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Namespace="calico-system" Pod="csi-node-driver-vf49l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf49l-eth0" Apr 24 01:05:53.211962 containerd[1574]: 2026-04-24 01:05:51.894 [INFO][3846] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" HandleID="k8s-pod-network.7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Workload="localhost-k8s-csi--node--driver--vf49l-eth0" Apr 24 
01:05:53.212507 containerd[1574]: 2026-04-24 01:05:51.908 [INFO][3846] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" HandleID="k8s-pod-network.7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Workload="localhost-k8s-csi--node--driver--vf49l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vf49l", "timestamp":"2026-04-24 01:05:51.894678279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000384dc0)} Apr 24 01:05:53.212507 containerd[1574]: 2026-04-24 01:05:51.908 [INFO][3846] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 01:05:53.212507 containerd[1574]: 2026-04-24 01:05:51.909 [INFO][3846] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 01:05:53.212507 containerd[1574]: 2026-04-24 01:05:51.909 [INFO][3846] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 01:05:53.212507 containerd[1574]: 2026-04-24 01:05:51.912 [INFO][3846] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" host="localhost" Apr 24 01:05:53.212507 containerd[1574]: 2026-04-24 01:05:51.979 [INFO][3846] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 01:05:53.212507 containerd[1574]: 2026-04-24 01:05:52.012 [INFO][3846] ipam/ipam.go 1965: Failed to create global IPAM config; another node got there first. 
Apr 24 01:05:53.212507 containerd[1574]: 2026-04-24 01:05:53.032 [INFO][3846] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Apr 24 01:05:53.212507 containerd[1574]: 2026-04-24 01:05:53.044 [INFO][3846] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="localhost" Apr 24 01:05:53.212507 containerd[1574]: 2026-04-24 01:05:53.057 [INFO][3846] ipam/ipam.go 588: Found unclaimed block in 10.579646ms host="localhost" subnet=192.168.88.128/26 Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.057 [INFO][3846] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.097 [INFO][3846] ipam/ipam_block_reader_writer.go 186: Block affinity already exists, getting existing affinity host="localhost" subnet=192.168.88.128/26 Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.110 [INFO][3846] ipam/ipam_block_reader_writer.go 194: Got existing affinity host="localhost" subnet=192.168.88.128/26 Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.110 [INFO][3846] ipam/ipam_block_reader_writer.go 202: Existing affinity is already confirmed host="localhost" subnet=192.168.88.128/26 Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.111 [INFO][3846] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.120 [INFO][3846] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.121 [INFO][3846] ipam/ipam.go 623: Block '192.168.88.128/26' has 63 free ips which is more than 1 ips required. 
host="localhost" subnet=192.168.88.128/26 Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.121 [INFO][3846] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" host="localhost" Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.125 [INFO][3846] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40 Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.131 [INFO][3846] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" host="localhost" Apr 24 01:05:53.212766 containerd[1574]: 2026-04-24 01:05:53.138 [INFO][3846] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" host="localhost" Apr 24 01:05:53.217417 containerd[1574]: 2026-04-24 01:05:53.138 [INFO][3846] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" host="localhost" Apr 24 01:05:53.217417 containerd[1574]: 2026-04-24 01:05:53.138 [INFO][3846] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 01:05:53.217417 containerd[1574]: 2026-04-24 01:05:53.138 [INFO][3846] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" HandleID="k8s-pod-network.7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Workload="localhost-k8s-csi--node--driver--vf49l-eth0" Apr 24 01:05:53.217472 containerd[1574]: 2026-04-24 01:05:53.147 [INFO][3822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Namespace="calico-system" Pod="csi-node-driver-vf49l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf49l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vf49l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"604703a5-be99-4439-bd36-95d174df6415", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"95f96f7df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vf49l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0270620207b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:05:53.217599 containerd[1574]: 2026-04-24 01:05:53.147 [INFO][3822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Namespace="calico-system" Pod="csi-node-driver-vf49l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf49l-eth0" Apr 24 01:05:53.217599 containerd[1574]: 2026-04-24 01:05:53.147 [INFO][3822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0270620207b ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Namespace="calico-system" Pod="csi-node-driver-vf49l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf49l-eth0" Apr 24 01:05:53.217599 containerd[1574]: 2026-04-24 01:05:53.167 [INFO][3822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Namespace="calico-system" Pod="csi-node-driver-vf49l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf49l-eth0" Apr 24 01:05:53.217649 containerd[1574]: 2026-04-24 01:05:53.168 [INFO][3822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Namespace="calico-system" Pod="csi-node-driver-vf49l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf49l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vf49l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"604703a5-be99-4439-bd36-95d174df6415", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 26, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"95f96f7df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40", Pod:"csi-node-driver-vf49l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0270620207b", MAC:"ea:5e:89:69:43:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:05:53.217756 containerd[1574]: 2026-04-24 01:05:53.206 [INFO][3822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" Namespace="calico-system" Pod="csi-node-driver-vf49l" WorkloadEndpoint="localhost-k8s-csi--node--driver--vf49l-eth0" Apr 24 01:05:53.317251 containerd[1574]: time="2026-04-24T01:05:53.315972710Z" level=info msg="connecting to shim 7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40" address="unix:///run/containerd/s/ae12e2259da30a08c2e64301138305fac668f9635be7984b6645c5104db32b7c" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:05:53.397142 systemd[1]: Started cri-containerd-7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40.scope - libcontainer container 
7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40. Apr 24 01:05:53.399142 containerd[1574]: time="2026-04-24T01:05:53.398629944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cf6965dc7-9g8mf,Uid:20ef41f2-0a6f-48da-b829-19d849e8aa7b,Namespace:calico-system,Attempt:0,}" Apr 24 01:05:53.479224 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 01:05:53.577325 containerd[1574]: time="2026-04-24T01:05:53.577212048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vf49l,Uid:604703a5-be99-4439-bd36-95d174df6415,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40\"" Apr 24 01:05:53.591592 systemd[1]: Started sshd@9-10.0.0.5:22-10.0.0.1:56838.service - OpenSSH per-connection server daemon (10.0.0.1:56838). Apr 24 01:05:53.592984 containerd[1574]: time="2026-04-24T01:05:53.591955955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 24 01:05:53.693676 kubelet[2764]: I0424 01:05:53.693637 2764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61836c8f-b551-4c04-821a-b13ef514c3fa" path="/var/lib/kubelet/pods/61836c8f-b551-4c04-821a-b13ef514c3fa/volumes" Apr 24 01:05:53.712959 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 56838 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:05:53.714060 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:05:53.716681 systemd-networkd[1478]: cali7dfe7d02458: Link UP Apr 24 01:05:53.718948 systemd-networkd[1478]: cali7dfe7d02458: Gained carrier Apr 24 01:05:53.721199 systemd-logind[1560]: New session 10 of user core. Apr 24 01:05:53.726968 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 24 01:05:53.743744 containerd[1574]: 2026-04-24 01:05:53.477 [ERROR][4059] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 01:05:53.743744 containerd[1574]: 2026-04-24 01:05:53.507 [INFO][4059] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--cf6965dc7--9g8mf-eth0 whisker-cf6965dc7- calico-system 20ef41f2-0a6f-48da-b829-19d849e8aa7b 977 0 2026-04-24 01:05:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cf6965dc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-cf6965dc7-9g8mf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7dfe7d02458 [] [] }} ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Namespace="calico-system" Pod="whisker-cf6965dc7-9g8mf" WorkloadEndpoint="localhost-k8s-whisker--cf6965dc7--9g8mf-" Apr 24 01:05:53.743744 containerd[1574]: 2026-04-24 01:05:53.507 [INFO][4059] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Namespace="calico-system" Pod="whisker-cf6965dc7-9g8mf" WorkloadEndpoint="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" Apr 24 01:05:53.743744 containerd[1574]: 2026-04-24 01:05:53.616 [INFO][4089] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" HandleID="k8s-pod-network.4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Workload="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.639 [INFO][4089] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" HandleID="k8s-pod-network.4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Workload="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-cf6965dc7-9g8mf", "timestamp":"2026-04-24 01:05:53.616677278 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002ecf20)} Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.639 [INFO][4089] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.639 [INFO][4089] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.639 [INFO][4089] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.643 [INFO][4089] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" host="localhost" Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.661 [INFO][4089] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.672 [INFO][4089] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.680 [INFO][4089] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.694 [INFO][4089] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 01:05:53.744451 containerd[1574]: 2026-04-24 01:05:53.694 [INFO][4089] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" host="localhost" Apr 24 01:05:53.744689 containerd[1574]: 2026-04-24 01:05:53.697 [INFO][4089] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1 Apr 24 01:05:53.744689 containerd[1574]: 2026-04-24 01:05:53.703 [INFO][4089] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" host="localhost" Apr 24 01:05:53.744689 containerd[1574]: 2026-04-24 01:05:53.711 [INFO][4089] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" host="localhost" Apr 24 01:05:53.744689 containerd[1574]: 2026-04-24 01:05:53.711 [INFO][4089] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" host="localhost" Apr 24 01:05:53.744689 containerd[1574]: 2026-04-24 01:05:53.711 [INFO][4089] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 01:05:53.744689 containerd[1574]: 2026-04-24 01:05:53.711 [INFO][4089] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" HandleID="k8s-pod-network.4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Workload="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" Apr 24 01:05:53.744784 containerd[1574]: 2026-04-24 01:05:53.714 [INFO][4059] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Namespace="calico-system" Pod="whisker-cf6965dc7-9g8mf" WorkloadEndpoint="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cf6965dc7--9g8mf-eth0", GenerateName:"whisker-cf6965dc7-", Namespace:"calico-system", SelfLink:"", UID:"20ef41f2-0a6f-48da-b829-19d849e8aa7b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cf6965dc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-cf6965dc7-9g8mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7dfe7d02458", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:05:53.744784 containerd[1574]: 2026-04-24 01:05:53.714 [INFO][4059] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Namespace="calico-system" Pod="whisker-cf6965dc7-9g8mf" WorkloadEndpoint="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" Apr 24 01:05:53.745685 containerd[1574]: 2026-04-24 01:05:53.714 [INFO][4059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dfe7d02458 ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Namespace="calico-system" Pod="whisker-cf6965dc7-9g8mf" WorkloadEndpoint="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" Apr 24 01:05:53.745685 containerd[1574]: 2026-04-24 01:05:53.719 [INFO][4059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Namespace="calico-system" Pod="whisker-cf6965dc7-9g8mf" WorkloadEndpoint="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" Apr 24 01:05:53.746198 containerd[1574]: 2026-04-24 01:05:53.721 [INFO][4059] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Namespace="calico-system" Pod="whisker-cf6965dc7-9g8mf" 
WorkloadEndpoint="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--cf6965dc7--9g8mf-eth0", GenerateName:"whisker-cf6965dc7-", Namespace:"calico-system", SelfLink:"", UID:"20ef41f2-0a6f-48da-b829-19d849e8aa7b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cf6965dc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1", Pod:"whisker-cf6965dc7-9g8mf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7dfe7d02458", MAC:"0a:f6:28:7e:d7:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:05:53.746320 containerd[1574]: 2026-04-24 01:05:53.740 [INFO][4059] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" Namespace="calico-system" Pod="whisker-cf6965dc7-9g8mf" WorkloadEndpoint="localhost-k8s-whisker--cf6965dc7--9g8mf-eth0" Apr 24 01:05:53.775786 containerd[1574]: time="2026-04-24T01:05:53.775452576Z" level=info msg="connecting to shim 
4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1" address="unix:///run/containerd/s/1c6309b36c483cd4dd22577bfa8c4d71326ae183a4a9a9eeb53fe2fc5210ae40" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:05:53.824137 systemd[1]: Started cri-containerd-4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1.scope - libcontainer container 4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1. Apr 24 01:05:53.842751 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 01:05:53.868027 sshd[4111]: Connection closed by 10.0.0.1 port 56838 Apr 24 01:05:53.869491 sshd-session[4102]: pam_unix(sshd:session): session closed for user core Apr 24 01:05:53.874097 systemd[1]: sshd@9-10.0.0.5:22-10.0.0.1:56838.service: Deactivated successfully. Apr 24 01:05:53.877775 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 01:05:53.881222 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Apr 24 01:05:53.883407 systemd-logind[1560]: Removed session 10. 
Apr 24 01:05:53.896950 containerd[1574]: time="2026-04-24T01:05:53.896757578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cf6965dc7-9g8mf,Uid:20ef41f2-0a6f-48da-b829-19d849e8aa7b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1\"" Apr 24 01:05:54.479135 systemd-networkd[1478]: cali0270620207b: Gained IPv6LL Apr 24 01:05:54.991383 systemd-networkd[1478]: cali7dfe7d02458: Gained IPv6LL Apr 24 01:05:55.405418 kubelet[2764]: I0424 01:05:55.404657 2764 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 01:05:55.405418 kubelet[2764]: E0424 01:05:55.405273 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:55.629313 containerd[1574]: time="2026-04-24T01:05:55.629137064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:55.630315 containerd[1574]: time="2026-04-24T01:05:55.630201807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=8535421" Apr 24 01:05:55.631353 containerd[1574]: time="2026-04-24T01:05:55.631202381Z" level=info msg="ImageCreate event name:\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:55.635647 containerd[1574]: time="2026-04-24T01:05:55.635592058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:55.636010 containerd[1574]: time="2026-04-24T01:05:55.635984616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id 
\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"11496846\" in 2.043905138s" Apr 24 01:05:55.636010 containerd[1574]: time="2026-04-24T01:05:55.636006039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\"" Apr 24 01:05:55.638299 containerd[1574]: time="2026-04-24T01:05:55.638072616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 24 01:05:55.642426 containerd[1574]: time="2026-04-24T01:05:55.642369283Z" level=info msg="CreateContainer within sandbox \"7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 01:05:55.655003 containerd[1574]: time="2026-04-24T01:05:55.654902207Z" level=info msg="Container 02b93ec62d524affa53d4b7acdcf1d6f9309edb50bd3b76e173e32ee87af2b6c: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:55.673794 containerd[1574]: time="2026-04-24T01:05:55.669621741Z" level=info msg="CreateContainer within sandbox \"7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"02b93ec62d524affa53d4b7acdcf1d6f9309edb50bd3b76e173e32ee87af2b6c\"" Apr 24 01:05:55.680113 containerd[1574]: time="2026-04-24T01:05:55.679110283Z" level=info msg="StartContainer for \"02b93ec62d524affa53d4b7acdcf1d6f9309edb50bd3b76e173e32ee87af2b6c\"" Apr 24 01:05:55.687563 containerd[1574]: time="2026-04-24T01:05:55.687362704Z" level=info msg="connecting to shim 02b93ec62d524affa53d4b7acdcf1d6f9309edb50bd3b76e173e32ee87af2b6c" address="unix:///run/containerd/s/ae12e2259da30a08c2e64301138305fac668f9635be7984b6645c5104db32b7c" protocol=ttrpc version=3 Apr 24 01:05:55.717075 
systemd[1]: Started cri-containerd-02b93ec62d524affa53d4b7acdcf1d6f9309edb50bd3b76e173e32ee87af2b6c.scope - libcontainer container 02b93ec62d524affa53d4b7acdcf1d6f9309edb50bd3b76e173e32ee87af2b6c. Apr 24 01:05:55.830249 containerd[1574]: time="2026-04-24T01:05:55.829948683Z" level=info msg="StartContainer for \"02b93ec62d524affa53d4b7acdcf1d6f9309edb50bd3b76e173e32ee87af2b6c\" returns successfully" Apr 24 01:05:55.962442 kubelet[2764]: E0424 01:05:55.961764 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:05:56.665097 systemd-networkd[1478]: vxlan.calico: Link UP Apr 24 01:05:56.665103 systemd-networkd[1478]: vxlan.calico: Gained carrier Apr 24 01:05:57.575714 containerd[1574]: time="2026-04-24T01:05:57.575604268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:57.576566 containerd[1574]: time="2026-04-24T01:05:57.576496882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=6050387" Apr 24 01:05:57.577978 containerd[1574]: time="2026-04-24T01:05:57.577773998Z" level=info msg="ImageCreate event name:\"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:57.579616 containerd[1574]: time="2026-04-24T01:05:57.579547515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:57.580767 containerd[1574]: time="2026-04-24T01:05:57.580671507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"9011804\" in 1.942578878s" Apr 24 01:05:57.580767 containerd[1574]: time="2026-04-24T01:05:57.580735009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\"" Apr 24 01:05:57.584483 containerd[1574]: time="2026-04-24T01:05:57.584052931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 24 01:05:57.590560 containerd[1574]: time="2026-04-24T01:05:57.590354574Z" level=info msg="CreateContainer within sandbox \"4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 01:05:57.601780 containerd[1574]: time="2026-04-24T01:05:57.601710728Z" level=info msg="Container 606b4053c921475de4fad6fbbe0c855a54e98c2677fdaef92558b4e5330e85f9: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:57.613944 containerd[1574]: time="2026-04-24T01:05:57.613718221Z" level=info msg="CreateContainer within sandbox \"4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"606b4053c921475de4fad6fbbe0c855a54e98c2677fdaef92558b4e5330e85f9\"" Apr 24 01:05:57.618295 containerd[1574]: time="2026-04-24T01:05:57.618133212Z" level=info msg="StartContainer for \"606b4053c921475de4fad6fbbe0c855a54e98c2677fdaef92558b4e5330e85f9\"" Apr 24 01:05:57.624150 containerd[1574]: time="2026-04-24T01:05:57.624125383Z" level=info msg="connecting to shim 606b4053c921475de4fad6fbbe0c855a54e98c2677fdaef92558b4e5330e85f9" address="unix:///run/containerd/s/1c6309b36c483cd4dd22577bfa8c4d71326ae183a4a9a9eeb53fe2fc5210ae40" protocol=ttrpc version=3 Apr 24 01:05:57.657725 systemd[1]: Started 
cri-containerd-606b4053c921475de4fad6fbbe0c855a54e98c2677fdaef92558b4e5330e85f9.scope - libcontainer container 606b4053c921475de4fad6fbbe0c855a54e98c2677fdaef92558b4e5330e85f9. Apr 24 01:05:57.730688 containerd[1574]: time="2026-04-24T01:05:57.730504045Z" level=info msg="StartContainer for \"606b4053c921475de4fad6fbbe0c855a54e98c2677fdaef92558b4e5330e85f9\" returns successfully" Apr 24 01:05:58.154325 systemd-networkd[1478]: vxlan.calico: Gained IPv6LL Apr 24 01:05:58.888095 systemd[1]: Started sshd@10-10.0.0.5:22-10.0.0.1:33406.service - OpenSSH per-connection server daemon (10.0.0.1:33406). Apr 24 01:05:58.988110 sshd[4453]: Accepted publickey for core from 10.0.0.1 port 33406 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:05:58.989793 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:05:58.997287 systemd-logind[1560]: New session 11 of user core. Apr 24 01:05:59.003083 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 01:05:59.129269 sshd[4456]: Connection closed by 10.0.0.1 port 33406 Apr 24 01:05:59.129538 sshd-session[4453]: pam_unix(sshd:session): session closed for user core Apr 24 01:05:59.133424 systemd[1]: sshd@10-10.0.0.5:22-10.0.0.1:33406.service: Deactivated successfully. Apr 24 01:05:59.135417 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 01:05:59.136785 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Apr 24 01:05:59.139009 systemd-logind[1560]: Removed session 11. 
Apr 24 01:05:59.689651 containerd[1574]: time="2026-04-24T01:05:59.689229739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:59.691387 containerd[1574]: time="2026-04-24T01:05:59.690396156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=13498053" Apr 24 01:05:59.692414 containerd[1574]: time="2026-04-24T01:05:59.692347534Z" level=info msg="ImageCreate event name:\"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:59.694922 containerd[1574]: time="2026-04-24T01:05:59.694720811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:05:59.695159 containerd[1574]: time="2026-04-24T01:05:59.695085855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"16459430\" in 2.110002259s" Apr 24 01:05:59.695233 containerd[1574]: time="2026-04-24T01:05:59.695213300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\"" Apr 24 01:05:59.697143 containerd[1574]: time="2026-04-24T01:05:59.697093141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 24 01:05:59.702226 containerd[1574]: time="2026-04-24T01:05:59.702104338Z" level=info 
msg="CreateContainer within sandbox \"7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 01:05:59.711109 containerd[1574]: time="2026-04-24T01:05:59.711031161Z" level=info msg="Container 640fcc3ff422f03c17c3bdb23e66e0cc155abd0a5b9ba3e51c06118dec9e93af: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:05:59.720755 containerd[1574]: time="2026-04-24T01:05:59.720678691Z" level=info msg="CreateContainer within sandbox \"7c3d581f3185b91fe53efa55b5bbcdc28e03f57386366767f0402030d2c9be40\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"640fcc3ff422f03c17c3bdb23e66e0cc155abd0a5b9ba3e51c06118dec9e93af\"" Apr 24 01:05:59.721900 containerd[1574]: time="2026-04-24T01:05:59.721585357Z" level=info msg="StartContainer for \"640fcc3ff422f03c17c3bdb23e66e0cc155abd0a5b9ba3e51c06118dec9e93af\"" Apr 24 01:05:59.722942 containerd[1574]: time="2026-04-24T01:05:59.722716012Z" level=info msg="connecting to shim 640fcc3ff422f03c17c3bdb23e66e0cc155abd0a5b9ba3e51c06118dec9e93af" address="unix:///run/containerd/s/ae12e2259da30a08c2e64301138305fac668f9635be7984b6645c5104db32b7c" protocol=ttrpc version=3 Apr 24 01:05:59.755508 systemd[1]: Started cri-containerd-640fcc3ff422f03c17c3bdb23e66e0cc155abd0a5b9ba3e51c06118dec9e93af.scope - libcontainer container 640fcc3ff422f03c17c3bdb23e66e0cc155abd0a5b9ba3e51c06118dec9e93af. 
Apr 24 01:05:59.841039 containerd[1574]: time="2026-04-24T01:05:59.840689955Z" level=info msg="StartContainer for \"640fcc3ff422f03c17c3bdb23e66e0cc155abd0a5b9ba3e51c06118dec9e93af\" returns successfully" Apr 24 01:06:00.753984 kubelet[2764]: I0424 01:06:00.753685 2764 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 01:06:00.756085 kubelet[2764]: I0424 01:06:00.755932 2764 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 01:06:01.686421 containerd[1574]: time="2026-04-24T01:06:01.686256254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566c9b6b64-c8nbk,Uid:2b814dfb-7916-43d7-abc7-0edf90c3adb2,Namespace:calico-system,Attempt:0,}" Apr 24 01:06:01.907803 systemd-networkd[1478]: cali8d642c3080a: Link UP Apr 24 01:06:01.909262 systemd-networkd[1478]: cali8d642c3080a: Gained carrier Apr 24 01:06:01.920934 kubelet[2764]: I0424 01:06:01.920661 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vf49l" podStartSLOduration=29.810085685 podStartE2EDuration="35.920648252s" podCreationTimestamp="2026-04-24 01:05:26 +0000 UTC" firstStartedPulling="2026-04-24 01:05:53.586350903 +0000 UTC m=+44.105101105" lastFinishedPulling="2026-04-24 01:05:59.69691347 +0000 UTC m=+50.215663672" observedRunningTime="2026-04-24 01:06:00.013317774 +0000 UTC m=+50.532067980" watchObservedRunningTime="2026-04-24 01:06:01.920648252 +0000 UTC m=+52.439398475" Apr 24 01:06:01.928132 containerd[1574]: 2026-04-24 01:06:01.766 [INFO][4514] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0 calico-kube-controllers-566c9b6b64- calico-system 
2b814dfb-7916-43d7-abc7-0edf90c3adb2 917 0 2026-04-24 01:05:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:566c9b6b64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-566c9b6b64-c8nbk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8d642c3080a [] [] }} ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Namespace="calico-system" Pod="calico-kube-controllers-566c9b6b64-c8nbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-" Apr 24 01:06:01.928132 containerd[1574]: 2026-04-24 01:06:01.767 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Namespace="calico-system" Pod="calico-kube-controllers-566c9b6b64-c8nbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" Apr 24 01:06:01.928132 containerd[1574]: 2026-04-24 01:06:01.823 [INFO][4528] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" HandleID="k8s-pod-network.da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Workload="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" Apr 24 01:06:01.928371 containerd[1574]: 2026-04-24 01:06:01.834 [INFO][4528] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" HandleID="k8s-pod-network.da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Workload="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051bb0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-566c9b6b64-c8nbk", "timestamp":"2026-04-24 01:06:01.823734702 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005cef20)} Apr 24 01:06:01.928371 containerd[1574]: 2026-04-24 01:06:01.834 [INFO][4528] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 01:06:01.928371 containerd[1574]: 2026-04-24 01:06:01.834 [INFO][4528] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 01:06:01.928371 containerd[1574]: 2026-04-24 01:06:01.834 [INFO][4528] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 01:06:01.928371 containerd[1574]: 2026-04-24 01:06:01.840 [INFO][4528] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" host="localhost" Apr 24 01:06:01.928371 containerd[1574]: 2026-04-24 01:06:01.853 [INFO][4528] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 01:06:01.928371 containerd[1574]: 2026-04-24 01:06:01.862 [INFO][4528] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 01:06:01.928371 containerd[1574]: 2026-04-24 01:06:01.865 [INFO][4528] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:01.928371 containerd[1574]: 2026-04-24 01:06:01.870 [INFO][4528] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:01.928630 containerd[1574]: 2026-04-24 01:06:01.870 [INFO][4528] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" host="localhost" Apr 24 01:06:01.928630 containerd[1574]: 2026-04-24 01:06:01.873 [INFO][4528] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17 Apr 24 01:06:01.928630 containerd[1574]: 2026-04-24 01:06:01.880 [INFO][4528] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" host="localhost" Apr 24 01:06:01.928630 containerd[1574]: 2026-04-24 01:06:01.897 [INFO][4528] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" host="localhost" Apr 24 01:06:01.928630 containerd[1574]: 2026-04-24 01:06:01.897 [INFO][4528] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" host="localhost" Apr 24 01:06:01.928630 containerd[1574]: 2026-04-24 01:06:01.898 [INFO][4528] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 01:06:01.928630 containerd[1574]: 2026-04-24 01:06:01.898 [INFO][4528] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" HandleID="k8s-pod-network.da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Workload="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" Apr 24 01:06:01.928731 containerd[1574]: 2026-04-24 01:06:01.900 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Namespace="calico-system" Pod="calico-kube-controllers-566c9b6b64-c8nbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0", GenerateName:"calico-kube-controllers-566c9b6b64-", Namespace:"calico-system", SelfLink:"", UID:"2b814dfb-7916-43d7-abc7-0edf90c3adb2", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566c9b6b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-566c9b6b64-c8nbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d642c3080a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:01.928927 containerd[1574]: 2026-04-24 01:06:01.900 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Namespace="calico-system" Pod="calico-kube-controllers-566c9b6b64-c8nbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" Apr 24 01:06:01.928927 containerd[1574]: 2026-04-24 01:06:01.900 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d642c3080a ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Namespace="calico-system" Pod="calico-kube-controllers-566c9b6b64-c8nbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" Apr 24 01:06:01.928927 containerd[1574]: 2026-04-24 01:06:01.908 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Namespace="calico-system" Pod="calico-kube-controllers-566c9b6b64-c8nbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" Apr 24 01:06:01.929016 containerd[1574]: 2026-04-24 01:06:01.910 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Namespace="calico-system" Pod="calico-kube-controllers-566c9b6b64-c8nbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0", GenerateName:"calico-kube-controllers-566c9b6b64-", Namespace:"calico-system", SelfLink:"", UID:"2b814dfb-7916-43d7-abc7-0edf90c3adb2", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566c9b6b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17", Pod:"calico-kube-controllers-566c9b6b64-c8nbk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d642c3080a", MAC:"46:39:25:48:ab:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:01.929123 containerd[1574]: 2026-04-24 01:06:01.925 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" Namespace="calico-system" Pod="calico-kube-controllers-566c9b6b64-c8nbk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566c9b6b64--c8nbk-eth0" Apr 24 01:06:01.967647 containerd[1574]: time="2026-04-24T01:06:01.967321323Z" level=info msg="connecting to shim 
da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17" address="unix:///run/containerd/s/be7f7a0c8cc271a4cab5d3fa69577fdc39fb2fc716e69c705e3b04dc53e571ea" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:06:02.010085 systemd[1]: Started cri-containerd-da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17.scope - libcontainer container da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17. Apr 24 01:06:02.034592 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 01:06:02.101231 containerd[1574]: time="2026-04-24T01:06:02.101002847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566c9b6b64-c8nbk,Uid:2b814dfb-7916-43d7-abc7-0edf90c3adb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17\"" Apr 24 01:06:02.123453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932300456.mount: Deactivated successfully. 
Apr 24 01:06:02.152220 containerd[1574]: time="2026-04-24T01:06:02.151999189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:02.153408 containerd[1574]: time="2026-04-24T01:06:02.153329248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=17000660" Apr 24 01:06:02.154445 containerd[1574]: time="2026-04-24T01:06:02.154340420Z" level=info msg="ImageCreate event name:\"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:02.156552 containerd[1574]: time="2026-04-24T01:06:02.156448199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:02.157027 containerd[1574]: time="2026-04-24T01:06:02.156792410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"17000490\" in 2.459642449s" Apr 24 01:06:02.157132 containerd[1574]: time="2026-04-24T01:06:02.157033243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\"" Apr 24 01:06:02.159312 containerd[1574]: time="2026-04-24T01:06:02.159277790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 24 01:06:02.164078 containerd[1574]: time="2026-04-24T01:06:02.163932187Z" level=info msg="CreateContainer within sandbox 
\"4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 01:06:02.173339 containerd[1574]: time="2026-04-24T01:06:02.172986471Z" level=info msg="Container 59e9c62131c8011ebfdb88fe1e2c8010b435dbc76c45ef2b300270528fdf86b3: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:06:02.185220 containerd[1574]: time="2026-04-24T01:06:02.184987694Z" level=info msg="CreateContainer within sandbox \"4d137666093ed53a5874880735e47b7ae9e00cbbc1ff123c383fa811fed6b3a1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"59e9c62131c8011ebfdb88fe1e2c8010b435dbc76c45ef2b300270528fdf86b3\"" Apr 24 01:06:02.187574 containerd[1574]: time="2026-04-24T01:06:02.187082778Z" level=info msg="StartContainer for \"59e9c62131c8011ebfdb88fe1e2c8010b435dbc76c45ef2b300270528fdf86b3\"" Apr 24 01:06:02.189222 containerd[1574]: time="2026-04-24T01:06:02.189111591Z" level=info msg="connecting to shim 59e9c62131c8011ebfdb88fe1e2c8010b435dbc76c45ef2b300270528fdf86b3" address="unix:///run/containerd/s/1c6309b36c483cd4dd22577bfa8c4d71326ae183a4a9a9eeb53fe2fc5210ae40" protocol=ttrpc version=3 Apr 24 01:06:02.218994 systemd[1]: Started cri-containerd-59e9c62131c8011ebfdb88fe1e2c8010b435dbc76c45ef2b300270528fdf86b3.scope - libcontainer container 59e9c62131c8011ebfdb88fe1e2c8010b435dbc76c45ef2b300270528fdf86b3. 
Apr 24 01:06:02.291223 containerd[1574]: time="2026-04-24T01:06:02.290732087Z" level=info msg="StartContainer for \"59e9c62131c8011ebfdb88fe1e2c8010b435dbc76c45ef2b300270528fdf86b3\" returns successfully" Apr 24 01:06:02.685066 kubelet[2764]: E0424 01:06:02.684128 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:02.687700 containerd[1574]: time="2026-04-24T01:06:02.687550843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-78nwd,Uid:0e6906e9-2e64-46c2-a033-cbbcdce0502c,Namespace:kube-system,Attempt:0,}" Apr 24 01:06:02.688997 containerd[1574]: time="2026-04-24T01:06:02.688724133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-vf9tk,Uid:df5c6779-93cd-40af-af1c-5229570d975a,Namespace:calico-system,Attempt:0,}" Apr 24 01:06:02.888251 systemd-networkd[1478]: cali69d7e27534a: Link UP Apr 24 01:06:02.889029 systemd-networkd[1478]: cali69d7e27534a: Gained carrier Apr 24 01:06:02.904772 containerd[1574]: 2026-04-24 01:06:02.757 [INFO][4631] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--78nwd-eth0 coredns-66bc5c9577- kube-system 0e6906e9-2e64-46c2-a033-cbbcdce0502c 920 0 2026-04-24 01:05:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-78nwd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69d7e27534a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Namespace="kube-system" Pod="coredns-66bc5c9577-78nwd" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--78nwd-" Apr 24 01:06:02.904772 containerd[1574]: 2026-04-24 01:06:02.757 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Namespace="kube-system" Pod="coredns-66bc5c9577-78nwd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" Apr 24 01:06:02.904772 containerd[1574]: 2026-04-24 01:06:02.819 [INFO][4658] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" HandleID="k8s-pod-network.5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Workload="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.833 [INFO][4658] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" HandleID="k8s-pod-network.5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Workload="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-78nwd", "timestamp":"2026-04-24 01:06:02.819569948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ff340)} Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.833 [INFO][4658] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.834 [INFO][4658] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.834 [INFO][4658] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.838 [INFO][4658] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" host="localhost" Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.844 [INFO][4658] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.854 [INFO][4658] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.857 [INFO][4658] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.862 [INFO][4658] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:02.905077 containerd[1574]: 2026-04-24 01:06:02.862 [INFO][4658] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" host="localhost" Apr 24 01:06:02.905363 containerd[1574]: 2026-04-24 01:06:02.865 [INFO][4658] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93 Apr 24 01:06:02.905363 containerd[1574]: 2026-04-24 01:06:02.873 [INFO][4658] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" host="localhost" Apr 24 01:06:02.905363 containerd[1574]: 2026-04-24 01:06:02.880 [INFO][4658] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" host="localhost" Apr 24 01:06:02.905363 containerd[1574]: 2026-04-24 01:06:02.881 [INFO][4658] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" host="localhost" Apr 24 01:06:02.905363 containerd[1574]: 2026-04-24 01:06:02.881 [INFO][4658] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 01:06:02.905363 containerd[1574]: 2026-04-24 01:06:02.881 [INFO][4658] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" HandleID="k8s-pod-network.5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Workload="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" Apr 24 01:06:02.905453 containerd[1574]: 2026-04-24 01:06:02.883 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Namespace="kube-system" Pod="coredns-66bc5c9577-78nwd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--78nwd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0e6906e9-2e64-46c2-a033-cbbcdce0502c", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-78nwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69d7e27534a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:02.905453 containerd[1574]: 2026-04-24 01:06:02.883 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Namespace="kube-system" Pod="coredns-66bc5c9577-78nwd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" Apr 24 01:06:02.905453 containerd[1574]: 2026-04-24 01:06:02.883 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69d7e27534a ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Namespace="kube-system" Pod="coredns-66bc5c9577-78nwd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" Apr 24 
01:06:02.905453 containerd[1574]: 2026-04-24 01:06:02.888 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Namespace="kube-system" Pod="coredns-66bc5c9577-78nwd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" Apr 24 01:06:02.905453 containerd[1574]: 2026-04-24 01:06:02.888 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Namespace="kube-system" Pod="coredns-66bc5c9577-78nwd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--78nwd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0e6906e9-2e64-46c2-a033-cbbcdce0502c", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93", Pod:"coredns-66bc5c9577-78nwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69d7e27534a", MAC:"fa:2b:54:59:ff:70", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:02.905453 containerd[1574]: 2026-04-24 01:06:02.900 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" Namespace="kube-system" Pod="coredns-66bc5c9577-78nwd" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--78nwd-eth0" Apr 24 01:06:02.937139 containerd[1574]: time="2026-04-24T01:06:02.936637185Z" level=info msg="connecting to shim 5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93" address="unix:///run/containerd/s/6e6090d8b8a92666f648ed092d68088cf7793b2c7f18b69dd56febafc00a2c2e" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:06:02.977128 systemd[1]: Started cri-containerd-5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93.scope - libcontainer container 5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93. 
Apr 24 01:06:03.001703 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 01:06:03.022454 systemd-networkd[1478]: cali46faad020e5: Link UP Apr 24 01:06:03.024487 systemd-networkd[1478]: cali46faad020e5: Gained carrier Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.766 [INFO][4633] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0 goldmane-6b4b7f4496- calico-system df5c6779-93cd-40af-af1c-5229570d975a 921 0 2026-04-24 01:05:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:6b4b7f4496 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-6b4b7f4496-vf9tk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali46faad020e5 [] [] }} ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Namespace="calico-system" Pod="goldmane-6b4b7f4496-vf9tk" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--vf9tk-" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.767 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Namespace="calico-system" Pod="goldmane-6b4b7f4496-vf9tk" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.823 [INFO][4663] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" HandleID="k8s-pod-network.399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Workload="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.836 [INFO][4663] ipam/ipam_plugin.go 
301: Auto assigning IP ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" HandleID="k8s-pod-network.399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Workload="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000306150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-6b4b7f4496-vf9tk", "timestamp":"2026-04-24 01:06:02.823779974 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001982c0)} Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.836 [INFO][4663] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.881 [INFO][4663] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.881 [INFO][4663] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.942 [INFO][4663] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" host="localhost" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.953 [INFO][4663] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.961 [INFO][4663] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.965 [INFO][4663] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.970 [INFO][4663] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.970 [INFO][4663] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" host="localhost" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.973 [INFO][4663] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577 Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:02.987 [INFO][4663] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" host="localhost" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:03.002 [INFO][4663] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" host="localhost" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:03.004 [INFO][4663] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" host="localhost" Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:03.004 [INFO][4663] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 01:06:03.054021 containerd[1574]: 2026-04-24 01:06:03.005 [INFO][4663] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" HandleID="k8s-pod-network.399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Workload="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" Apr 24 01:06:03.054781 containerd[1574]: 2026-04-24 01:06:03.013 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Namespace="calico-system" Pod="goldmane-6b4b7f4496-vf9tk" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"df5c6779-93cd-40af-af1c-5229570d975a", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-6b4b7f4496-vf9tk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali46faad020e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:03.054781 containerd[1574]: 2026-04-24 01:06:03.013 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Namespace="calico-system" Pod="goldmane-6b4b7f4496-vf9tk" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" Apr 24 01:06:03.054781 containerd[1574]: 2026-04-24 01:06:03.013 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46faad020e5 ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Namespace="calico-system" Pod="goldmane-6b4b7f4496-vf9tk" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" Apr 24 01:06:03.054781 containerd[1574]: 2026-04-24 01:06:03.025 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Namespace="calico-system" Pod="goldmane-6b4b7f4496-vf9tk" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" Apr 24 01:06:03.054781 containerd[1574]: 2026-04-24 01:06:03.031 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Namespace="calico-system" Pod="goldmane-6b4b7f4496-vf9tk" 
WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0", GenerateName:"goldmane-6b4b7f4496-", Namespace:"calico-system", SelfLink:"", UID:"df5c6779-93cd-40af-af1c-5229570d975a", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"6b4b7f4496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577", Pod:"goldmane-6b4b7f4496-vf9tk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali46faad020e5", MAC:"ea:1f:fc:83:79:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:03.054781 containerd[1574]: 2026-04-24 01:06:03.049 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" Namespace="calico-system" Pod="goldmane-6b4b7f4496-vf9tk" WorkloadEndpoint="localhost-k8s-goldmane--6b4b7f4496--vf9tk-eth0" Apr 24 01:06:03.090927 containerd[1574]: time="2026-04-24T01:06:03.090233867Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-78nwd,Uid:0e6906e9-2e64-46c2-a033-cbbcdce0502c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93\"" Apr 24 01:06:03.095889 kubelet[2764]: E0424 01:06:03.095785 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:03.104056 containerd[1574]: time="2026-04-24T01:06:03.103811359Z" level=info msg="CreateContainer within sandbox \"5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 01:06:03.123349 containerd[1574]: time="2026-04-24T01:06:03.123105464Z" level=info msg="connecting to shim 399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577" address="unix:///run/containerd/s/89b10fa2ef49d717cd6614fc8abb3c11e15a1d360177c8697e2e1e0cd48b1068" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:06:03.140097 containerd[1574]: time="2026-04-24T01:06:03.140066255Z" level=info msg="Container 19b27b9516259b2077f0576c330eae5b5e53349efa4114432dfe9fc12f21acce: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:06:03.158753 containerd[1574]: time="2026-04-24T01:06:03.158574908Z" level=info msg="CreateContainer within sandbox \"5f455e093227578a976a2c87e7402a46643104c40013ec2eac81f03882dc0b93\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19b27b9516259b2077f0576c330eae5b5e53349efa4114432dfe9fc12f21acce\"" Apr 24 01:06:03.160232 containerd[1574]: time="2026-04-24T01:06:03.160166048Z" level=info msg="StartContainer for \"19b27b9516259b2077f0576c330eae5b5e53349efa4114432dfe9fc12f21acce\"" Apr 24 01:06:03.167774 systemd[1]: Started cri-containerd-399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577.scope - libcontainer container 399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577. 
Apr 24 01:06:03.172295 containerd[1574]: time="2026-04-24T01:06:03.171996848Z" level=info msg="connecting to shim 19b27b9516259b2077f0576c330eae5b5e53349efa4114432dfe9fc12f21acce" address="unix:///run/containerd/s/6e6090d8b8a92666f648ed092d68088cf7793b2c7f18b69dd56febafc00a2c2e" protocol=ttrpc version=3 Apr 24 01:06:03.209601 systemd[1]: Started cri-containerd-19b27b9516259b2077f0576c330eae5b5e53349efa4114432dfe9fc12f21acce.scope - libcontainer container 19b27b9516259b2077f0576c330eae5b5e53349efa4114432dfe9fc12f21acce. Apr 24 01:06:03.216388 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 01:06:03.247533 systemd-networkd[1478]: cali8d642c3080a: Gained IPv6LL Apr 24 01:06:03.291972 containerd[1574]: time="2026-04-24T01:06:03.291739249Z" level=info msg="StartContainer for \"19b27b9516259b2077f0576c330eae5b5e53349efa4114432dfe9fc12f21acce\" returns successfully" Apr 24 01:06:03.308911 containerd[1574]: time="2026-04-24T01:06:03.308281987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-6b4b7f4496-vf9tk,Uid:df5c6779-93cd-40af-af1c-5229570d975a,Namespace:calico-system,Attempt:0,} returns sandbox id \"399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577\"" Apr 24 01:06:03.950435 systemd-networkd[1478]: cali69d7e27534a: Gained IPv6LL Apr 24 01:06:04.046810 kubelet[2764]: E0424 01:06:04.046605 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:04.068576 kubelet[2764]: I0424 01:06:04.067764 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-cf6965dc7-9g8mf" podStartSLOduration=2.808084764 podStartE2EDuration="11.0677228s" podCreationTimestamp="2026-04-24 01:05:53 +0000 UTC" firstStartedPulling="2026-04-24 01:05:53.899299007 +0000 UTC m=+44.418049210" 
lastFinishedPulling="2026-04-24 01:06:02.158937043 +0000 UTC m=+52.677687246" observedRunningTime="2026-04-24 01:06:03.04814804 +0000 UTC m=+53.566898254" watchObservedRunningTime="2026-04-24 01:06:04.0677228 +0000 UTC m=+54.586473036" Apr 24 01:06:04.070444 kubelet[2764]: I0424 01:06:04.069329 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-78nwd" podStartSLOduration=49.069318259 podStartE2EDuration="49.069318259s" podCreationTimestamp="2026-04-24 01:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 01:06:04.067674311 +0000 UTC m=+54.586424518" watchObservedRunningTime="2026-04-24 01:06:04.069318259 +0000 UTC m=+54.588068473" Apr 24 01:06:04.142394 systemd[1]: Started sshd@11-10.0.0.5:22-10.0.0.1:33414.service - OpenSSH per-connection server daemon (10.0.0.1:33414). Apr 24 01:06:04.207393 systemd-networkd[1478]: cali46faad020e5: Gained IPv6LL Apr 24 01:06:04.215475 sshd[4826]: Accepted publickey for core from 10.0.0.1 port 33414 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:04.217455 sshd-session[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:04.231680 systemd-logind[1560]: New session 12 of user core. Apr 24 01:06:04.238445 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 01:06:04.364404 sshd[4829]: Connection closed by 10.0.0.1 port 33414 Apr 24 01:06:04.364797 sshd-session[4826]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:04.369520 systemd[1]: sshd@11-10.0.0.5:22-10.0.0.1:33414.service: Deactivated successfully. Apr 24 01:06:04.372321 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 01:06:04.373470 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Apr 24 01:06:04.375398 systemd-logind[1560]: Removed session 12. 
Apr 24 01:06:04.683364 kubelet[2764]: E0424 01:06:04.683097 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:04.685249 containerd[1574]: time="2026-04-24T01:06:04.685048524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fwbts,Uid:78165a30-d5a7-424b-9f0e-710651ab74af,Namespace:kube-system,Attempt:0,}" Apr 24 01:06:04.687769 containerd[1574]: time="2026-04-24T01:06:04.687748218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5795bdfc-tvpd8,Uid:91009b28-24e1-4731-ac81-f373176fe1b8,Namespace:calico-system,Attempt:0,}" Apr 24 01:06:04.689409 containerd[1574]: time="2026-04-24T01:06:04.689254169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5795bdfc-frbkv,Uid:cf53fe2b-64ea-44b8-853b-7d3bdd84d27f,Namespace:calico-system,Attempt:0,}" Apr 24 01:06:04.959521 systemd-networkd[1478]: cali09bb2a07879: Link UP Apr 24 01:06:04.959678 systemd-networkd[1478]: cali09bb2a07879: Gained carrier Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.791 [INFO][4849] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0 calico-apiserver-6d5795bdfc- calico-system 91009b28-24e1-4731-ac81-f373176fe1b8 919 0 2026-04-24 01:05:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5795bdfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d5795bdfc-tvpd8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali09bb2a07879 [] [] }} ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" 
Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-tvpd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.791 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-tvpd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.858 [INFO][4889] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" HandleID="k8s-pod-network.9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Workload="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.873 [INFO][4889] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" HandleID="k8s-pod-network.9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Workload="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6d5795bdfc-tvpd8", "timestamp":"2026-04-24 01:06:04.858168444 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fcdc0)} Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.873 [INFO][4889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.874 [INFO][4889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.874 [INFO][4889] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.883 [INFO][4889] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" host="localhost" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.893 [INFO][4889] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.901 [INFO][4889] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.904 [INFO][4889] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.913 [INFO][4889] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.913 [INFO][4889] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" host="localhost" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.920 [INFO][4889] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0 Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.930 [INFO][4889] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" host="localhost" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.945 [INFO][4889] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" host="localhost" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.946 [INFO][4889] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" host="localhost" Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.946 [INFO][4889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 01:06:04.995346 containerd[1574]: 2026-04-24 01:06:04.946 [INFO][4889] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" HandleID="k8s-pod-network.9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Workload="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" Apr 24 01:06:04.996302 containerd[1574]: 2026-04-24 01:06:04.952 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-tvpd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0", GenerateName:"calico-apiserver-6d5795bdfc-", Namespace:"calico-system", SelfLink:"", UID:"91009b28-24e1-4731-ac81-f373176fe1b8", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5795bdfc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d5795bdfc-tvpd8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali09bb2a07879", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:04.996302 containerd[1574]: 2026-04-24 01:06:04.953 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-tvpd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" Apr 24 01:06:04.996302 containerd[1574]: 2026-04-24 01:06:04.953 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09bb2a07879 ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-tvpd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" Apr 24 01:06:04.996302 containerd[1574]: 2026-04-24 01:06:04.962 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-tvpd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" Apr 24 01:06:04.996302 containerd[1574]: 2026-04-24 01:06:04.964 
[INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-tvpd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0", GenerateName:"calico-apiserver-6d5795bdfc-", Namespace:"calico-system", SelfLink:"", UID:"91009b28-24e1-4731-ac81-f373176fe1b8", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5795bdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0", Pod:"calico-apiserver-6d5795bdfc-tvpd8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali09bb2a07879", MAC:"32:11:68:7a:4f:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:04.996302 containerd[1574]: 2026-04-24 01:06:04.987 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-tvpd8" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--tvpd8-eth0" Apr 24 01:06:05.053946 kubelet[2764]: E0424 01:06:05.053659 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:05.055112 containerd[1574]: time="2026-04-24T01:06:05.055082796Z" level=info msg="connecting to shim 9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0" address="unix:///run/containerd/s/f378b472045b9330e6c87a2e36460e5ac1f4b9aead1f26a701fe9c3b34439a55" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:06:05.124259 systemd[1]: Started cri-containerd-9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0.scope - libcontainer container 9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0. 
Apr 24 01:06:05.134794 systemd-networkd[1478]: calie9b50d308ca: Link UP Apr 24 01:06:05.141655 systemd-networkd[1478]: calie9b50d308ca: Gained carrier Apr 24 01:06:05.153905 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:04.793 [INFO][4865] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0 calico-apiserver-6d5795bdfc- calico-system cf53fe2b-64ea-44b8-853b-7d3bdd84d27f 912 0 2026-04-24 01:05:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5795bdfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d5795bdfc-frbkv eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie9b50d308ca [] [] }} ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-frbkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:04.795 [INFO][4865] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-frbkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:04.859 [INFO][4903] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" HandleID="k8s-pod-network.990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" 
Workload="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:04.872 [INFO][4903] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" HandleID="k8s-pod-network.990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Workload="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003834d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6d5795bdfc-frbkv", "timestamp":"2026-04-24 01:06:04.859010577 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000577080)} Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:04.877 [INFO][4903] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:04.946 [INFO][4903] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:04.946 [INFO][4903] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:04.988 [INFO][4903] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" host="localhost" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.005 [INFO][4903] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.026 [INFO][4903] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.043 [INFO][4903] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.048 [INFO][4903] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.048 [INFO][4903] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" host="localhost" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.051 [INFO][4903] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972 Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.065 [INFO][4903] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" host="localhost" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.081 [INFO][4903] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" host="localhost" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.081 [INFO][4903] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" host="localhost" Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.082 [INFO][4903] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 01:06:05.176340 containerd[1574]: 2026-04-24 01:06:05.084 [INFO][4903] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" HandleID="k8s-pod-network.990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Workload="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" Apr 24 01:06:05.180106 containerd[1574]: 2026-04-24 01:06:05.096 [INFO][4865] cni-plugin/k8s.go 418: Populated endpoint ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-frbkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0", GenerateName:"calico-apiserver-6d5795bdfc-", Namespace:"calico-system", SelfLink:"", UID:"cf53fe2b-64ea-44b8-853b-7d3bdd84d27f", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5795bdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d5795bdfc-frbkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie9b50d308ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:05.180106 containerd[1574]: 2026-04-24 01:06:05.096 [INFO][4865] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-frbkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" Apr 24 01:06:05.180106 containerd[1574]: 2026-04-24 01:06:05.096 [INFO][4865] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9b50d308ca ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-frbkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" Apr 24 01:06:05.180106 containerd[1574]: 2026-04-24 01:06:05.144 [INFO][4865] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-frbkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" Apr 24 01:06:05.180106 containerd[1574]: 2026-04-24 01:06:05.145 [INFO][4865] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-frbkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0", GenerateName:"calico-apiserver-6d5795bdfc-", Namespace:"calico-system", SelfLink:"", UID:"cf53fe2b-64ea-44b8-853b-7d3bdd84d27f", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5795bdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972", Pod:"calico-apiserver-6d5795bdfc-frbkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie9b50d308ca", MAC:"2a:b4:a5:c6:13:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:05.180106 containerd[1574]: 2026-04-24 01:06:05.170 [INFO][4865] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" Namespace="calico-system" Pod="calico-apiserver-6d5795bdfc-frbkv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d5795bdfc--frbkv-eth0" Apr 24 01:06:05.226979 systemd-networkd[1478]: calic86deead4ca: Link UP Apr 24 01:06:05.227588 systemd-networkd[1478]: calic86deead4ca: Gained carrier Apr 24 01:06:05.266289 containerd[1574]: time="2026-04-24T01:06:05.266067193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5795bdfc-tvpd8,Uid:91009b28-24e1-4731-ac81-f373176fe1b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0\"" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:04.793 [INFO][4847] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--fwbts-eth0 coredns-66bc5c9577- kube-system 78165a30-d5a7-424b-9f0e-710651ab74af 916 0 2026-04-24 01:05:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-fwbts eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic86deead4ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Namespace="kube-system" Pod="coredns-66bc5c9577-fwbts" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fwbts-" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:04.793 [INFO][4847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Namespace="kube-system" Pod="coredns-66bc5c9577-fwbts" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" 
Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:04.873 [INFO][4891] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" HandleID="k8s-pod-network.2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Workload="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:04.886 [INFO][4891] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" HandleID="k8s-pod-network.2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Workload="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000508e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-fwbts", "timestamp":"2026-04-24 01:06:04.873114788 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f14a0)} Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:04.886 [INFO][4891] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.081 [INFO][4891] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.082 [INFO][4891] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.095 [INFO][4891] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" host="localhost" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.111 [INFO][4891] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.155 [INFO][4891] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.170 [INFO][4891] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.186 [INFO][4891] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.187 [INFO][4891] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" host="localhost" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.191 [INFO][4891] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255 Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.205 [INFO][4891] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" host="localhost" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.216 [INFO][4891] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" host="localhost" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.217 [INFO][4891] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" host="localhost" Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.217 [INFO][4891] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 01:06:05.276958 containerd[1574]: 2026-04-24 01:06:05.217 [INFO][4891] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" HandleID="k8s-pod-network.2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Workload="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" Apr 24 01:06:05.277585 containerd[1574]: 2026-04-24 01:06:05.223 [INFO][4847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Namespace="kube-system" Pod="coredns-66bc5c9577-fwbts" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fwbts-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"78165a30-d5a7-424b-9f0e-710651ab74af", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-fwbts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86deead4ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:05.277585 containerd[1574]: 2026-04-24 01:06:05.223 [INFO][4847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Namespace="kube-system" Pod="coredns-66bc5c9577-fwbts" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" Apr 24 01:06:05.277585 containerd[1574]: 2026-04-24 01:06:05.223 [INFO][4847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic86deead4ca ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Namespace="kube-system" Pod="coredns-66bc5c9577-fwbts" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" Apr 24 
01:06:05.277585 containerd[1574]: 2026-04-24 01:06:05.231 [INFO][4847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Namespace="kube-system" Pod="coredns-66bc5c9577-fwbts" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" Apr 24 01:06:05.277585 containerd[1574]: 2026-04-24 01:06:05.231 [INFO][4847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Namespace="kube-system" Pod="coredns-66bc5c9577-fwbts" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--fwbts-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"78165a30-d5a7-424b-9f0e-710651ab74af", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 1, 5, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255", Pod:"coredns-66bc5c9577-fwbts", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic86deead4ca", MAC:"ce:1b:4b:04:4e:0f", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 01:06:05.277585 containerd[1574]: 2026-04-24 01:06:05.270 [INFO][4847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" Namespace="kube-system" Pod="coredns-66bc5c9577-fwbts" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--fwbts-eth0" Apr 24 01:06:05.277943 containerd[1574]: time="2026-04-24T01:06:05.277718215Z" level=info msg="connecting to shim 990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972" address="unix:///run/containerd/s/925e783b7bdaf7f040ee4fe72a40f4496f541990c482ac06e4f6223c6e3f18b1" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:06:05.335306 containerd[1574]: time="2026-04-24T01:06:05.332451666Z" level=info msg="connecting to shim 2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255" address="unix:///run/containerd/s/d168ffe90451c0bf36b9141e8f9d3e43d8b102a8b60515dbf203ed95de77428e" namespace=k8s.io protocol=ttrpc version=3 Apr 24 01:06:05.359183 systemd[1]: Started cri-containerd-990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972.scope - libcontainer container 
990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972. Apr 24 01:06:05.386472 systemd[1]: Started cri-containerd-2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255.scope - libcontainer container 2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255. Apr 24 01:06:05.402428 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 01:06:05.419445 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 24 01:06:05.481119 containerd[1574]: time="2026-04-24T01:06:05.480755845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5795bdfc-frbkv,Uid:cf53fe2b-64ea-44b8-853b-7d3bdd84d27f,Namespace:calico-system,Attempt:0,} returns sandbox id \"990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972\"" Apr 24 01:06:05.483458 containerd[1574]: time="2026-04-24T01:06:05.482919666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fwbts,Uid:78165a30-d5a7-424b-9f0e-710651ab74af,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255\"" Apr 24 01:06:05.486489 kubelet[2764]: E0424 01:06:05.486399 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:05.499260 containerd[1574]: time="2026-04-24T01:06:05.498950663Z" level=info msg="CreateContainer within sandbox \"2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 01:06:05.520479 containerd[1574]: time="2026-04-24T01:06:05.520324189Z" level=info msg="Container e62f802e08c6e6066a6a33c73b2b7c480d95cdcc32dc612d5d11a1587b7ee88d: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:06:05.529698 containerd[1574]: 
time="2026-04-24T01:06:05.529638827Z" level=info msg="CreateContainer within sandbox \"2cd8e3bbe623287f6d44761337b1e90d2041ae5a35fc4b7c4b6fca9813d3e255\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e62f802e08c6e6066a6a33c73b2b7c480d95cdcc32dc612d5d11a1587b7ee88d\"" Apr 24 01:06:05.531967 containerd[1574]: time="2026-04-24T01:06:05.531943077Z" level=info msg="StartContainer for \"e62f802e08c6e6066a6a33c73b2b7c480d95cdcc32dc612d5d11a1587b7ee88d\"" Apr 24 01:06:05.534329 containerd[1574]: time="2026-04-24T01:06:05.534264986Z" level=info msg="connecting to shim e62f802e08c6e6066a6a33c73b2b7c480d95cdcc32dc612d5d11a1587b7ee88d" address="unix:///run/containerd/s/d168ffe90451c0bf36b9141e8f9d3e43d8b102a8b60515dbf203ed95de77428e" protocol=ttrpc version=3 Apr 24 01:06:05.561059 systemd[1]: Started cri-containerd-e62f802e08c6e6066a6a33c73b2b7c480d95cdcc32dc612d5d11a1587b7ee88d.scope - libcontainer container e62f802e08c6e6066a6a33c73b2b7c480d95cdcc32dc612d5d11a1587b7ee88d. 
Apr 24 01:06:05.608742 containerd[1574]: time="2026-04-24T01:06:05.608623104Z" level=info msg="StartContainer for \"e62f802e08c6e6066a6a33c73b2b7c480d95cdcc32dc612d5d11a1587b7ee88d\" returns successfully" Apr 24 01:06:06.062747 kubelet[2764]: E0424 01:06:06.062664 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:06.067162 kubelet[2764]: E0424 01:06:06.066777 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:06.084087 kubelet[2764]: I0424 01:06:06.083758 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fwbts" podStartSLOduration=51.083619621 podStartE2EDuration="51.083619621s" podCreationTimestamp="2026-04-24 01:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 01:06:06.082035696 +0000 UTC m=+56.600785910" watchObservedRunningTime="2026-04-24 01:06:06.083619621 +0000 UTC m=+56.602369824" Apr 24 01:06:06.431735 containerd[1574]: time="2026-04-24T01:06:06.431414326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:06.432606 containerd[1574]: time="2026-04-24T01:06:06.432287699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=50078175" Apr 24 01:06:06.433613 containerd[1574]: time="2026-04-24T01:06:06.433525064Z" level=info msg="ImageCreate event name:\"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:06.436774 containerd[1574]: 
time="2026-04-24T01:06:06.436616088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:06.437486 containerd[1574]: time="2026-04-24T01:06:06.437435091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"53039568\" in 4.278136286s" Apr 24 01:06:06.437486 containerd[1574]: time="2026-04-24T01:06:06.437458587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\"" Apr 24 01:06:06.440577 containerd[1574]: time="2026-04-24T01:06:06.440546352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 24 01:06:06.457461 containerd[1574]: time="2026-04-24T01:06:06.457024028Z" level=info msg="CreateContainer within sandbox \"da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 01:06:06.465769 containerd[1574]: time="2026-04-24T01:06:06.465703676Z" level=info msg="Container 0852c19d9e4b4957e37f0a97c9db1ab59f29f354a03e20e83393287933dea77b: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:06:06.476004 containerd[1574]: time="2026-04-24T01:06:06.475757590Z" level=info msg="CreateContainer within sandbox \"da73970b4f9d8275f5b7724d35d643c1c89dec56e6c33c6e691568655f5f1b17\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0852c19d9e4b4957e37f0a97c9db1ab59f29f354a03e20e83393287933dea77b\"" Apr 24 01:06:06.477107 
containerd[1574]: time="2026-04-24T01:06:06.476932937Z" level=info msg="StartContainer for \"0852c19d9e4b4957e37f0a97c9db1ab59f29f354a03e20e83393287933dea77b\"" Apr 24 01:06:06.477793 containerd[1574]: time="2026-04-24T01:06:06.477729271Z" level=info msg="connecting to shim 0852c19d9e4b4957e37f0a97c9db1ab59f29f354a03e20e83393287933dea77b" address="unix:///run/containerd/s/be7f7a0c8cc271a4cab5d3fa69577fdc39fb2fc716e69c705e3b04dc53e571ea" protocol=ttrpc version=3 Apr 24 01:06:06.511081 systemd-networkd[1478]: calic86deead4ca: Gained IPv6LL Apr 24 01:06:06.515045 systemd[1]: Started cri-containerd-0852c19d9e4b4957e37f0a97c9db1ab59f29f354a03e20e83393287933dea77b.scope - libcontainer container 0852c19d9e4b4957e37f0a97c9db1ab59f29f354a03e20e83393287933dea77b. Apr 24 01:06:06.586249 containerd[1574]: time="2026-04-24T01:06:06.586122643Z" level=info msg="StartContainer for \"0852c19d9e4b4957e37f0a97c9db1ab59f29f354a03e20e83393287933dea77b\" returns successfully" Apr 24 01:06:06.768651 systemd-networkd[1478]: cali09bb2a07879: Gained IPv6LL Apr 24 01:06:06.894327 systemd-networkd[1478]: calie9b50d308ca: Gained IPv6LL Apr 24 01:06:07.078896 kubelet[2764]: E0424 01:06:07.078238 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:07.106518 kubelet[2764]: I0424 01:06:07.106389 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-566c9b6b64-c8nbk" podStartSLOduration=36.770611496 podStartE2EDuration="41.106050928s" podCreationTimestamp="2026-04-24 01:05:26 +0000 UTC" firstStartedPulling="2026-04-24 01:06:02.104357417 +0000 UTC m=+52.623107621" lastFinishedPulling="2026-04-24 01:06:06.43979685 +0000 UTC m=+56.958547053" observedRunningTime="2026-04-24 01:06:07.104029616 +0000 UTC m=+57.622779823" watchObservedRunningTime="2026-04-24 01:06:07.106050928 +0000 UTC m=+57.624801139" Apr 
24 01:06:08.084595 kubelet[2764]: E0424 01:06:08.084432 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:09.243807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150848425.mount: Deactivated successfully. Apr 24 01:06:09.390149 systemd[1]: Started sshd@12-10.0.0.5:22-10.0.0.1:37940.service - OpenSSH per-connection server daemon (10.0.0.1:37940). Apr 24 01:06:09.487530 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 37940 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:09.489696 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:09.498755 systemd-logind[1560]: New session 13 of user core. Apr 24 01:06:09.503102 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 01:06:09.679106 sshd[5206]: Connection closed by 10.0.0.1 port 37940 Apr 24 01:06:09.679694 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:09.700597 systemd[1]: sshd@12-10.0.0.5:22-10.0.0.1:37940.service: Deactivated successfully. Apr 24 01:06:09.706353 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 01:06:09.709519 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Apr 24 01:06:09.717345 systemd[1]: Started sshd@13-10.0.0.5:22-10.0.0.1:37952.service - OpenSSH per-connection server daemon (10.0.0.1:37952). Apr 24 01:06:09.718446 systemd-logind[1560]: Removed session 13. Apr 24 01:06:09.812688 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 37952 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:09.813518 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:09.835003 systemd-logind[1560]: New session 14 of user core. 
Apr 24 01:06:09.838039 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 01:06:10.132054 sshd[5228]: Connection closed by 10.0.0.1 port 37952 Apr 24 01:06:10.161679 sshd-session[5225]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:10.172165 systemd[1]: sshd@13-10.0.0.5:22-10.0.0.1:37952.service: Deactivated successfully. Apr 24 01:06:10.174502 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 01:06:10.184518 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Apr 24 01:06:10.189075 systemd[1]: Started sshd@14-10.0.0.5:22-10.0.0.1:37956.service - OpenSSH per-connection server daemon (10.0.0.1:37956). Apr 24 01:06:10.194945 systemd-logind[1560]: Removed session 14. Apr 24 01:06:10.262039 sshd[5243]: Accepted publickey for core from 10.0.0.1 port 37956 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:10.265687 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:10.288244 systemd-logind[1560]: New session 15 of user core. Apr 24 01:06:10.293115 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 01:06:10.538647 sshd[5246]: Connection closed by 10.0.0.1 port 37956 Apr 24 01:06:10.541151 sshd-session[5243]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:10.547271 systemd[1]: sshd@14-10.0.0.5:22-10.0.0.1:37956.service: Deactivated successfully. Apr 24 01:06:10.550559 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 01:06:10.552758 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Apr 24 01:06:10.555095 systemd-logind[1560]: Removed session 15. 
Apr 24 01:06:10.600947 containerd[1574]: time="2026-04-24T01:06:10.600689384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:10.603025 containerd[1574]: time="2026-04-24T01:06:10.602990856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=53086083" Apr 24 01:06:10.605015 containerd[1574]: time="2026-04-24T01:06:10.604685781Z" level=info msg="ImageCreate event name:\"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:10.608033 containerd[1574]: time="2026-04-24T01:06:10.607937745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:10.608556 containerd[1574]: time="2026-04-24T01:06:10.608480991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"53085929\" in 4.167915697s" Apr 24 01:06:10.608556 containerd[1574]: time="2026-04-24T01:06:10.608546717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\"" Apr 24 01:06:10.610754 containerd[1574]: time="2026-04-24T01:06:10.610720698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 24 01:06:10.617771 containerd[1574]: time="2026-04-24T01:06:10.617292913Z" level=info msg="CreateContainer within sandbox 
\"399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 01:06:10.632142 containerd[1574]: time="2026-04-24T01:06:10.632019575Z" level=info msg="Container 24a5376284d7a4675abb835aa9f3258b73a54943fdd6767e31f704dc6c9ccf4d: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:06:10.643258 containerd[1574]: time="2026-04-24T01:06:10.643037309Z" level=info msg="CreateContainer within sandbox \"399dd4999e13e83882d5fa412c121927b91a7050ce9808a224e49b933c8c7577\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"24a5376284d7a4675abb835aa9f3258b73a54943fdd6767e31f704dc6c9ccf4d\"" Apr 24 01:06:10.645141 containerd[1574]: time="2026-04-24T01:06:10.645001289Z" level=info msg="StartContainer for \"24a5376284d7a4675abb835aa9f3258b73a54943fdd6767e31f704dc6c9ccf4d\"" Apr 24 01:06:10.649648 containerd[1574]: time="2026-04-24T01:06:10.649311584Z" level=info msg="connecting to shim 24a5376284d7a4675abb835aa9f3258b73a54943fdd6767e31f704dc6c9ccf4d" address="unix:///run/containerd/s/89b10fa2ef49d717cd6614fc8abb3c11e15a1d360177c8697e2e1e0cd48b1068" protocol=ttrpc version=3 Apr 24 01:06:10.725414 systemd[1]: Started cri-containerd-24a5376284d7a4675abb835aa9f3258b73a54943fdd6767e31f704dc6c9ccf4d.scope - libcontainer container 24a5376284d7a4675abb835aa9f3258b73a54943fdd6767e31f704dc6c9ccf4d. Apr 24 01:06:10.818628 containerd[1574]: time="2026-04-24T01:06:10.818361691Z" level=info msg="StartContainer for \"24a5376284d7a4675abb835aa9f3258b73a54943fdd6767e31f704dc6c9ccf4d\" returns successfully" Apr 24 01:06:15.554998 systemd[1]: Started sshd@15-10.0.0.5:22-10.0.0.1:39184.service - OpenSSH per-connection server daemon (10.0.0.1:39184). 
Apr 24 01:06:15.598730 containerd[1574]: time="2026-04-24T01:06:15.597934056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:15.598730 containerd[1574]: time="2026-04-24T01:06:15.598340887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=46175896" Apr 24 01:06:15.604026 containerd[1574]: time="2026-04-24T01:06:15.603794778Z" level=info msg="ImageCreate event name:\"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:15.607307 containerd[1574]: time="2026-04-24T01:06:15.605957742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 4.995047341s" Apr 24 01:06:15.607307 containerd[1574]: time="2026-04-24T01:06:15.606287938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 24 01:06:15.607307 containerd[1574]: time="2026-04-24T01:06:15.607452659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:15.612691 containerd[1574]: time="2026-04-24T01:06:15.612660484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 24 01:06:15.624624 containerd[1574]: time="2026-04-24T01:06:15.624498441Z" level=info msg="CreateContainer within sandbox 
\"9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 01:06:15.636133 containerd[1574]: time="2026-04-24T01:06:15.635705271Z" level=info msg="Container e75eec2eb3f403bfa8cfcccee597efeb09a82619277abb2d363d8b04f1406991: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:06:15.651254 containerd[1574]: time="2026-04-24T01:06:15.651096983Z" level=info msg="CreateContainer within sandbox \"9f2e8eb4817ceeadd1496d09135ed8b999bb675decc479f5094f4ad791a284a0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e75eec2eb3f403bfa8cfcccee597efeb09a82619277abb2d363d8b04f1406991\"" Apr 24 01:06:15.656030 containerd[1574]: time="2026-04-24T01:06:15.654296295Z" level=info msg="StartContainer for \"e75eec2eb3f403bfa8cfcccee597efeb09a82619277abb2d363d8b04f1406991\"" Apr 24 01:06:15.661934 containerd[1574]: time="2026-04-24T01:06:15.661728051Z" level=info msg="connecting to shim e75eec2eb3f403bfa8cfcccee597efeb09a82619277abb2d363d8b04f1406991" address="unix:///run/containerd/s/f378b472045b9330e6c87a2e36460e5ac1f4b9aead1f26a701fe9c3b34439a55" protocol=ttrpc version=3 Apr 24 01:06:15.720744 sshd[5360]: Accepted publickey for core from 10.0.0.1 port 39184 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:15.721393 systemd[1]: Started cri-containerd-e75eec2eb3f403bfa8cfcccee597efeb09a82619277abb2d363d8b04f1406991.scope - libcontainer container e75eec2eb3f403bfa8cfcccee597efeb09a82619277abb2d363d8b04f1406991. Apr 24 01:06:15.724068 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:15.739764 systemd-logind[1560]: New session 16 of user core. Apr 24 01:06:15.744005 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 24 01:06:15.830472 containerd[1574]: time="2026-04-24T01:06:15.830213404Z" level=info msg="StartContainer for \"e75eec2eb3f403bfa8cfcccee597efeb09a82619277abb2d363d8b04f1406991\" returns successfully" Apr 24 01:06:15.945394 sshd[5386]: Connection closed by 10.0.0.1 port 39184 Apr 24 01:06:15.946226 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:15.950476 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Apr 24 01:06:15.950684 systemd[1]: sshd@15-10.0.0.5:22-10.0.0.1:39184.service: Deactivated successfully. Apr 24 01:06:15.952491 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 01:06:15.954682 systemd-logind[1560]: Removed session 16. Apr 24 01:06:16.073524 containerd[1574]: time="2026-04-24T01:06:16.073356356Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 01:06:16.076995 containerd[1574]: time="2026-04-24T01:06:16.076749538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 24 01:06:16.079303 containerd[1574]: time="2026-04-24T01:06:16.079062032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 466.070562ms" Apr 24 01:06:16.079303 containerd[1574]: time="2026-04-24T01:06:16.079131296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 24 01:06:16.087434 containerd[1574]: time="2026-04-24T01:06:16.086950544Z" level=info msg="CreateContainer within sandbox 
\"990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 01:06:16.098334 containerd[1574]: time="2026-04-24T01:06:16.097504357Z" level=info msg="Container 9c9fc49ee06f11ffa8156e5719a1e84d62f6d37548500e42a3e84ae847a0674e: CDI devices from CRI Config.CDIDevices: []" Apr 24 01:06:16.135035 containerd[1574]: time="2026-04-24T01:06:16.134930616Z" level=info msg="CreateContainer within sandbox \"990840f48f0b1d9a0ef7d294cb8be83007c621f899c33277e11ba6488dc66972\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c9fc49ee06f11ffa8156e5719a1e84d62f6d37548500e42a3e84ae847a0674e\"" Apr 24 01:06:16.139733 containerd[1574]: time="2026-04-24T01:06:16.139572594Z" level=info msg="StartContainer for \"9c9fc49ee06f11ffa8156e5719a1e84d62f6d37548500e42a3e84ae847a0674e\"" Apr 24 01:06:16.141986 containerd[1574]: time="2026-04-24T01:06:16.141731119Z" level=info msg="connecting to shim 9c9fc49ee06f11ffa8156e5719a1e84d62f6d37548500e42a3e84ae847a0674e" address="unix:///run/containerd/s/925e783b7bdaf7f040ee4fe72a40f4496f541990c482ac06e4f6223c6e3f18b1" protocol=ttrpc version=3 Apr 24 01:06:16.184736 systemd[1]: Started cri-containerd-9c9fc49ee06f11ffa8156e5719a1e84d62f6d37548500e42a3e84ae847a0674e.scope - libcontainer container 9c9fc49ee06f11ffa8156e5719a1e84d62f6d37548500e42a3e84ae847a0674e. 
Apr 24 01:06:16.207313 kubelet[2764]: I0424 01:06:16.206971 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-6b4b7f4496-vf9tk" podStartSLOduration=43.908929132 podStartE2EDuration="51.206955531s" podCreationTimestamp="2026-04-24 01:05:25 +0000 UTC" firstStartedPulling="2026-04-24 01:06:03.311609976 +0000 UTC m=+53.830360179" lastFinishedPulling="2026-04-24 01:06:10.609636375 +0000 UTC m=+61.128386578" observedRunningTime="2026-04-24 01:06:11.139334183 +0000 UTC m=+61.658084393" watchObservedRunningTime="2026-04-24 01:06:16.206955531 +0000 UTC m=+66.725705746" Apr 24 01:06:16.208302 kubelet[2764]: I0424 01:06:16.207331 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d5795bdfc-tvpd8" podStartSLOduration=40.872616522 podStartE2EDuration="51.207326166s" podCreationTimestamp="2026-04-24 01:05:25 +0000 UTC" firstStartedPulling="2026-04-24 01:06:05.274671788 +0000 UTC m=+55.793421991" lastFinishedPulling="2026-04-24 01:06:15.609381431 +0000 UTC m=+66.128131635" observedRunningTime="2026-04-24 01:06:16.206524511 +0000 UTC m=+66.725274722" watchObservedRunningTime="2026-04-24 01:06:16.207326166 +0000 UTC m=+66.726076380" Apr 24 01:06:16.306716 containerd[1574]: time="2026-04-24T01:06:16.306266711Z" level=info msg="StartContainer for \"9c9fc49ee06f11ffa8156e5719a1e84d62f6d37548500e42a3e84ae847a0674e\" returns successfully" Apr 24 01:06:17.586205 kubelet[2764]: I0424 01:06:17.585925 2764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6d5795bdfc-frbkv" podStartSLOduration=41.994056426 podStartE2EDuration="52.585794675s" podCreationTimestamp="2026-04-24 01:05:25 +0000 UTC" firstStartedPulling="2026-04-24 01:06:05.489008333 +0000 UTC m=+56.007758536" lastFinishedPulling="2026-04-24 01:06:16.080746582 +0000 UTC m=+66.599496785" observedRunningTime="2026-04-24 01:06:17.233552413 +0000 UTC m=+67.752302628" 
watchObservedRunningTime="2026-04-24 01:06:17.585794675 +0000 UTC m=+68.104544895" Apr 24 01:06:20.960236 systemd[1]: Started sshd@16-10.0.0.5:22-10.0.0.1:39192.service - OpenSSH per-connection server daemon (10.0.0.1:39192). Apr 24 01:06:21.047424 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 39192 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:21.048991 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:21.057800 systemd-logind[1560]: New session 17 of user core. Apr 24 01:06:21.066115 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 01:06:21.202285 sshd[5474]: Connection closed by 10.0.0.1 port 39192 Apr 24 01:06:21.202003 sshd-session[5471]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:21.211929 systemd[1]: sshd@16-10.0.0.5:22-10.0.0.1:39192.service: Deactivated successfully. Apr 24 01:06:21.214028 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 01:06:21.215278 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Apr 24 01:06:21.217994 systemd[1]: Started sshd@17-10.0.0.5:22-10.0.0.1:39196.service - OpenSSH per-connection server daemon (10.0.0.1:39196). Apr 24 01:06:21.218698 systemd-logind[1560]: Removed session 17. Apr 24 01:06:21.281483 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 39196 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:21.284244 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:21.294301 systemd-logind[1560]: New session 18 of user core. Apr 24 01:06:21.304706 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 24 01:06:21.707193 sshd[5490]: Connection closed by 10.0.0.1 port 39196 Apr 24 01:06:21.708346 sshd-session[5487]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:21.719811 systemd[1]: Started sshd@18-10.0.0.5:22-10.0.0.1:39202.service - OpenSSH per-connection server daemon (10.0.0.1:39202). Apr 24 01:06:21.720444 systemd[1]: sshd@17-10.0.0.5:22-10.0.0.1:39196.service: Deactivated successfully. Apr 24 01:06:21.723508 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 01:06:21.725066 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Apr 24 01:06:21.731456 systemd-logind[1560]: Removed session 18. Apr 24 01:06:21.823652 sshd[5499]: Accepted publickey for core from 10.0.0.1 port 39202 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:21.825268 sshd-session[5499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:21.831422 systemd-logind[1560]: New session 19 of user core. Apr 24 01:06:21.839490 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 01:06:22.596298 sshd[5505]: Connection closed by 10.0.0.1 port 39202 Apr 24 01:06:22.597387 sshd-session[5499]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:22.610025 systemd[1]: sshd@18-10.0.0.5:22-10.0.0.1:39202.service: Deactivated successfully. Apr 24 01:06:22.612312 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 01:06:22.616364 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Apr 24 01:06:22.619581 systemd[1]: Started sshd@19-10.0.0.5:22-10.0.0.1:39214.service - OpenSSH per-connection server daemon (10.0.0.1:39214). Apr 24 01:06:22.623620 systemd-logind[1560]: Removed session 19. 
Apr 24 01:06:22.689412 sshd[5522]: Accepted publickey for core from 10.0.0.1 port 39214 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:22.690732 sshd-session[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:22.697733 systemd-logind[1560]: New session 20 of user core. Apr 24 01:06:22.706083 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 24 01:06:23.069237 sshd[5525]: Connection closed by 10.0.0.1 port 39214 Apr 24 01:06:23.069771 sshd-session[5522]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:23.088989 systemd[1]: sshd@19-10.0.0.5:22-10.0.0.1:39214.service: Deactivated successfully. Apr 24 01:06:23.091477 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 01:06:23.092573 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Apr 24 01:06:23.097742 systemd[1]: Started sshd@20-10.0.0.5:22-10.0.0.1:39224.service - OpenSSH per-connection server daemon (10.0.0.1:39224). Apr 24 01:06:23.102637 systemd-logind[1560]: Removed session 20. Apr 24 01:06:23.175922 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 39224 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:23.177224 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:23.187416 systemd-logind[1560]: New session 21 of user core. Apr 24 01:06:23.195630 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 24 01:06:23.325032 sshd[5540]: Connection closed by 10.0.0.1 port 39224 Apr 24 01:06:23.325186 sshd-session[5537]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:23.329212 systemd[1]: sshd@20-10.0.0.5:22-10.0.0.1:39224.service: Deactivated successfully. Apr 24 01:06:23.331407 systemd[1]: session-21.scope: Deactivated successfully. Apr 24 01:06:23.332738 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit. 
Apr 24 01:06:23.335782 systemd-logind[1560]: Removed session 21. Apr 24 01:06:27.680379 kubelet[2764]: E0424 01:06:27.679777 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:28.340307 systemd[1]: Started sshd@21-10.0.0.5:22-10.0.0.1:38918.service - OpenSSH per-connection server daemon (10.0.0.1:38918). Apr 24 01:06:28.446526 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 38918 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:28.447759 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:28.453555 systemd-logind[1560]: New session 22 of user core. Apr 24 01:06:28.459039 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 24 01:06:28.659236 sshd[5612]: Connection closed by 10.0.0.1 port 38918 Apr 24 01:06:28.659503 sshd-session[5609]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:28.663474 systemd[1]: sshd@21-10.0.0.5:22-10.0.0.1:38918.service: Deactivated successfully. Apr 24 01:06:28.665685 systemd[1]: session-22.scope: Deactivated successfully. Apr 24 01:06:28.668660 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit. Apr 24 01:06:28.671910 systemd-logind[1560]: Removed session 22. Apr 24 01:06:29.680998 kubelet[2764]: E0424 01:06:29.680807 2764 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 24 01:06:33.677635 systemd[1]: Started sshd@22-10.0.0.5:22-10.0.0.1:38922.service - OpenSSH per-connection server daemon (10.0.0.1:38922). 
Apr 24 01:06:33.738606 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 38922 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:33.739810 sshd-session[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:33.745746 systemd-logind[1560]: New session 23 of user core. Apr 24 01:06:33.754269 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 24 01:06:33.874075 sshd[5629]: Connection closed by 10.0.0.1 port 38922 Apr 24 01:06:33.874398 sshd-session[5626]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:33.877977 systemd[1]: sshd@22-10.0.0.5:22-10.0.0.1:38922.service: Deactivated successfully. Apr 24 01:06:33.880186 systemd[1]: session-23.scope: Deactivated successfully. Apr 24 01:06:33.881431 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit. Apr 24 01:06:33.883507 systemd-logind[1560]: Removed session 23. Apr 24 01:06:38.888884 systemd[1]: Started sshd@23-10.0.0.5:22-10.0.0.1:48648.service - OpenSSH per-connection server daemon (10.0.0.1:48648). Apr 24 01:06:38.934776 sshd[5697]: Accepted publickey for core from 10.0.0.1 port 48648 ssh2: RSA SHA256:DM1SznRiDAOUZOZJtyobaoKOe1PzAMkOa49bF27zJ78 Apr 24 01:06:38.935746 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 01:06:38.939962 systemd-logind[1560]: New session 24 of user core. Apr 24 01:06:38.950964 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 24 01:06:39.027628 sshd[5700]: Connection closed by 10.0.0.1 port 48648 Apr 24 01:06:39.027976 sshd-session[5697]: pam_unix(sshd:session): session closed for user core Apr 24 01:06:39.030797 systemd[1]: sshd@23-10.0.0.5:22-10.0.0.1:48648.service: Deactivated successfully. Apr 24 01:06:39.032371 systemd[1]: session-24.scope: Deactivated successfully. Apr 24 01:06:39.033189 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit. 
Apr 24 01:06:39.034184 systemd-logind[1560]: Removed session 24.