Oct 13 05:44:35.907863 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Oct 12 22:37:12 -00 2025
Oct 13 05:44:35.907893 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:44:35.907905 kernel: BIOS-provided physical RAM map:
Oct 13 05:44:35.907912 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 13 05:44:35.907918 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 13 05:44:35.907924 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 13 05:44:35.907932 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 13 05:44:35.907939 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 13 05:44:35.907948 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 13 05:44:35.907957 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 13 05:44:35.907965 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Oct 13 05:44:35.907971 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 13 05:44:35.907978 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 13 05:44:35.907985 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 13 05:44:35.907993 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 13 05:44:35.908003 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 13 05:44:35.908013 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 13 05:44:35.908020 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 13 05:44:35.908027 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 13 05:44:35.908034 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 13 05:44:35.908041 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 13 05:44:35.908048 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 13 05:44:35.908055 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 13 05:44:35.908062 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 05:44:35.908069 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 13 05:44:35.908079 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 13 05:44:35.908086 kernel: NX (Execute Disable) protection: active
Oct 13 05:44:35.908093 kernel: APIC: Static calls initialized
Oct 13 05:44:35.908100 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Oct 13 05:44:35.908107 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Oct 13 05:44:35.908114 kernel: extended physical RAM map:
Oct 13 05:44:35.908121 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 13 05:44:35.908128 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 13 05:44:35.908136 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 13 05:44:35.908143 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 13 05:44:35.908150 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 13 05:44:35.908159 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 13 05:44:35.908166 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 13 05:44:35.908173 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Oct 13 05:44:35.908180 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Oct 13 05:44:35.908191 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Oct 13 05:44:35.908199 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Oct 13 05:44:35.908208 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Oct 13 05:44:35.908216 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 13 05:44:35.908223 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 13 05:44:35.908231 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 13 05:44:35.908238 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 13 05:44:35.908245 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 13 05:44:35.908253 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 13 05:44:35.908260 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 13 05:44:35.908268 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 13 05:44:35.908284 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 13 05:44:35.908293 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 13 05:44:35.908301 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 13 05:44:35.908316 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 13 05:44:35.908324 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 05:44:35.908331 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 13 05:44:35.908338 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 13 05:44:35.908363 kernel: efi: EFI v2.7 by EDK II
Oct 13 05:44:35.908371 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Oct 13 05:44:35.908392 kernel: random: crng init done
Oct 13 05:44:35.908403 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Oct 13 05:44:35.908411 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Oct 13 05:44:35.908424 kernel: secureboot: Secure boot disabled
Oct 13 05:44:35.908431 kernel: SMBIOS 2.8 present.
Oct 13 05:44:35.908439 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 13 05:44:35.908446 kernel: DMI: Memory slots populated: 1/1
Oct 13 05:44:35.908453 kernel: Hypervisor detected: KVM
Oct 13 05:44:35.908460 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 13 05:44:35.908468 kernel: kvm-clock: using sched offset of 5339309206 cycles
Oct 13 05:44:35.908476 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 05:44:35.908484 kernel: tsc: Detected 2794.750 MHz processor
Oct 13 05:44:35.908491 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:44:35.908509 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:44:35.908521 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 13 05:44:35.908545 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 13 05:44:35.908553 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:44:35.908561 kernel: Using GB pages for direct mapping
Oct 13 05:44:35.908569 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:44:35.908576 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 13 05:44:35.908584 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 13 05:44:35.908592 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:44:35.908599 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:44:35.908611 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 13 05:44:35.908618 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:44:35.908629 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:44:35.908638 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:44:35.908645 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:44:35.908653 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 13 05:44:35.908661 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 13 05:44:35.908668 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Oct 13 05:44:35.908679 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 13 05:44:35.908687 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 13 05:44:35.908694 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 13 05:44:35.908702 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 13 05:44:35.908709 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 13 05:44:35.908717 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 13 05:44:35.908724 kernel: No NUMA configuration found
Oct 13 05:44:35.908731 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Oct 13 05:44:35.908739 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Oct 13 05:44:35.908747 kernel: Zone ranges:
Oct 13 05:44:35.908757 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:44:35.908764 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Oct 13 05:44:35.908771 kernel: Normal empty
Oct 13 05:44:35.908779 kernel: Device empty
Oct 13 05:44:35.908786 kernel: Movable zone start for each node
Oct 13 05:44:35.908794 kernel: Early memory node ranges
Oct 13 05:44:35.908801 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 13 05:44:35.908810 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 13 05:44:35.908824 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 13 05:44:35.908837 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Oct 13 05:44:35.908847 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Oct 13 05:44:35.908857 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Oct 13 05:44:35.908867 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Oct 13 05:44:35.908876 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Oct 13 05:44:35.908885 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Oct 13 05:44:35.908895 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:44:35.908908 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 13 05:44:35.908931 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 13 05:44:35.908941 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:44:35.908952 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Oct 13 05:44:35.908962 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Oct 13 05:44:35.908976 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 13 05:44:35.908987 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 13 05:44:35.908997 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Oct 13 05:44:35.909006 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 13 05:44:35.909014 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 13 05:44:35.909025 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:44:35.909033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 13 05:44:35.909041 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 13 05:44:35.909048 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 05:44:35.909056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 13 05:44:35.909064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 13 05:44:35.909072 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:44:35.909080 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 13 05:44:35.909088 kernel: TSC deadline timer available
Oct 13 05:44:35.909098 kernel: CPU topo: Max. logical packages: 1
Oct 13 05:44:35.909106 kernel: CPU topo: Max. logical dies: 1
Oct 13 05:44:35.909114 kernel: CPU topo: Max. dies per package: 1
Oct 13 05:44:35.909122 kernel: CPU topo: Max. threads per core: 1
Oct 13 05:44:35.909129 kernel: CPU topo: Num. cores per package: 4
Oct 13 05:44:35.909138 kernel: CPU topo: Num. threads per package: 4
Oct 13 05:44:35.909149 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 13 05:44:35.909159 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 13 05:44:35.909170 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 13 05:44:35.909181 kernel: kvm-guest: setup PV sched yield
Oct 13 05:44:35.909195 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 13 05:44:35.909205 kernel: Booting paravirtualized kernel on KVM
Oct 13 05:44:35.909216 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:44:35.909224 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 13 05:44:35.909232 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 13 05:44:35.909240 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 13 05:44:35.909248 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 13 05:44:35.909255 kernel: kvm-guest: PV spinlocks enabled
Oct 13 05:44:35.909266 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 13 05:44:35.909275 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:44:35.909288 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:44:35.909296 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 05:44:35.909304 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 05:44:35.909320 kernel: Fallback order for Node 0: 0
Oct 13 05:44:35.909328 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Oct 13 05:44:35.909336 kernel: Policy zone: DMA32
Oct 13 05:44:35.909344 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:44:35.909432 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 13 05:44:35.909440 kernel: ftrace: allocating 40139 entries in 157 pages
Oct 13 05:44:35.909448 kernel: ftrace: allocated 157 pages with 5 groups
Oct 13 05:44:35.909467 kernel: Dynamic Preempt: voluntary
Oct 13 05:44:35.909483 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:44:35.909498 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:44:35.909506 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 13 05:44:35.909515 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:44:35.909523 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:44:35.909534 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:44:35.909545 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:44:35.909557 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 13 05:44:35.909565 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:44:35.909574 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:44:35.909582 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:44:35.909590 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 13 05:44:35.909598 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 05:44:35.909606 kernel: Console: colour dummy device 80x25
Oct 13 05:44:35.909617 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:44:35.909625 kernel: ACPI: Core revision 20240827
Oct 13 05:44:35.909633 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 13 05:44:35.909641 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:44:35.909648 kernel: x2apic enabled
Oct 13 05:44:35.909657 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:44:35.909664 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 13 05:44:35.909673 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 13 05:44:35.909680 kernel: kvm-guest: setup PV IPIs
Oct 13 05:44:35.909691 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 13 05:44:35.909699 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 13 05:44:35.909707 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Oct 13 05:44:35.909715 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 13 05:44:35.909723 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 13 05:44:35.909731 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 13 05:44:35.909739 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:44:35.909747 kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 05:44:35.909757 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 13 05:44:35.909765 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 13 05:44:35.909773 kernel: active return thunk: retbleed_return_thunk
Oct 13 05:44:35.909781 kernel: RETBleed: Mitigation: untrained return thunk
Oct 13 05:44:35.909792 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 05:44:35.909800 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 05:44:35.909808 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 13 05:44:35.909817 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 13 05:44:35.909825 kernel: active return thunk: srso_return_thunk
Oct 13 05:44:35.909836 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 13 05:44:35.909844 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:44:35.909853 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:44:35.909862 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:44:35.909871 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 13 05:44:35.909880 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 13 05:44:35.909888 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:44:35.909896 kernel: pid_max: default: 32768 minimum: 301
Oct 13 05:44:35.909904 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:44:35.909914 kernel: landlock: Up and running.
Oct 13 05:44:35.909922 kernel: SELinux: Initializing.
Oct 13 05:44:35.909930 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:44:35.909938 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:44:35.909947 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 13 05:44:35.909955 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 13 05:44:35.909962 kernel: ... version: 0
Oct 13 05:44:35.909970 kernel: ... bit width: 48
Oct 13 05:44:35.909978 kernel: ... generic registers: 6
Oct 13 05:44:35.909988 kernel: ... value mask: 0000ffffffffffff
Oct 13 05:44:35.909996 kernel: ... max period: 00007fffffffffff
Oct 13 05:44:35.910004 kernel: ... fixed-purpose events: 0
Oct 13 05:44:35.910011 kernel: ... event mask: 000000000000003f
Oct 13 05:44:35.910019 kernel: signal: max sigframe size: 1776
Oct 13 05:44:35.910027 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:44:35.910035 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 05:44:35.910046 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 05:44:35.910057 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:44:35.910070 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:44:35.910081 kernel: .... node #0, CPUs: #1 #2 #3
Oct 13 05:44:35.910091 kernel: smp: Brought up 1 node, 4 CPUs
Oct 13 05:44:35.910102 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Oct 13 05:44:35.910113 kernel: Memory: 2422668K/2565800K available (14336K kernel code, 2443K rwdata, 10000K rodata, 54096K init, 2852K bss, 137196K reserved, 0K cma-reserved)
Oct 13 05:44:35.910124 kernel: devtmpfs: initialized
Oct 13 05:44:35.910135 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:44:35.910146 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 13 05:44:35.910156 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 13 05:44:35.910172 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Oct 13 05:44:35.910182 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 13 05:44:35.910193 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Oct 13 05:44:35.910203 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 13 05:44:35.910211 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:44:35.910219 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 13 05:44:35.910227 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:44:35.910236 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:44:35.910246 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:44:35.910259 kernel: audit: type=2000 audit(1760334272.224:1): state=initialized audit_enabled=0 res=1
Oct 13 05:44:35.910269 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:44:35.910280 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:44:35.910291 kernel: cpuidle: using governor menu
Oct 13 05:44:35.910301 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:44:35.910321 kernel: dca service started, version 1.12.1
Oct 13 05:44:35.910329 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 13 05:44:35.910337 kernel: PCI: Using configuration type 1 for base access
Oct 13 05:44:35.910346 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:44:35.910373 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 05:44:35.910382 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 05:44:35.910389 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:44:35.910397 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:44:35.910407 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:44:35.910418 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:44:35.910429 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:44:35.910440 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:44:35.910454 kernel: ACPI: Interpreter enabled
Oct 13 05:44:35.910464 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 13 05:44:35.910474 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:44:35.910485 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:44:35.910495 kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 05:44:35.910506 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 13 05:44:35.910514 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 05:44:35.910748 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 05:44:35.910879 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 13 05:44:35.911001 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 13 05:44:35.911011 kernel: PCI host bridge to bus 0000:00
Oct 13 05:44:35.911146 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 13 05:44:35.911258 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 13 05:44:35.911398 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 05:44:35.911514 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 13 05:44:35.911650 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 13 05:44:35.911786 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 13 05:44:35.911898 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 05:44:35.912300 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 13 05:44:35.912497 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 13 05:44:35.912622 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 13 05:44:35.912743 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 13 05:44:35.912868 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 13 05:44:35.912988 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 05:44:35.913130 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 05:44:35.913285 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 13 05:44:35.913474 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 13 05:44:35.913623 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 13 05:44:35.913765 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 13 05:44:35.913894 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 13 05:44:35.914015 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 13 05:44:35.914135 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 13 05:44:35.914272 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 13 05:44:35.914427 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 13 05:44:35.914582 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 13 05:44:35.914745 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 13 05:44:35.914889 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 13 05:44:35.915035 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 13 05:44:35.915165 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 13 05:44:35.915305 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 13 05:44:35.915461 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 13 05:44:35.915582 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 13 05:44:35.915723 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 13 05:44:35.915845 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 13 05:44:35.915856 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 13 05:44:35.915865 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 13 05:44:35.915875 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 13 05:44:35.915884 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 13 05:44:35.915894 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 13 05:44:35.915902 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 13 05:44:35.915913 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 13 05:44:35.915921 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 13 05:44:35.915929 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 13 05:44:35.915937 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 13 05:44:35.915945 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 13 05:44:35.915954 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 13 05:44:35.915962 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 13 05:44:35.915970 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 13 05:44:35.915978 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 13 05:44:35.915988 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 13 05:44:35.915996 kernel: iommu: Default domain type: Translated
Oct 13 05:44:35.916004 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 13 05:44:35.916012 kernel: efivars: Registered efivars operations
Oct 13 05:44:35.916020 kernel: PCI: Using ACPI for IRQ routing
Oct 13 05:44:35.916028 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 13 05:44:35.916037 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 13 05:44:35.916047 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Oct 13 05:44:35.916058 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Oct 13 05:44:35.916072 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Oct 13 05:44:35.916083 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Oct 13 05:44:35.916094 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Oct 13 05:44:35.916104 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Oct 13 05:44:35.916115 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Oct 13 05:44:35.916269 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 13 05:44:35.916430 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 13 05:44:35.916560 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 13 05:44:35.916575 kernel: vgaarb: loaded
Oct 13 05:44:35.916583 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 13 05:44:35.916592 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 13 05:44:35.916599 kernel: clocksource: Switched to clocksource kvm-clock
Oct 13 05:44:35.916608 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 05:44:35.916616 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 05:44:35.916624 kernel: pnp: PnP ACPI init
Oct 13 05:44:35.916859 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 13 05:44:35.916878 kernel: pnp: PnP ACPI: found 6 devices
Oct 13 05:44:35.916887 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 05:44:35.916895 kernel: NET: Registered PF_INET protocol family
Oct 13 05:44:35.916904 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 05:44:35.916914 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 05:44:35.916923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 05:44:35.916932 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 05:44:35.916940 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 05:44:35.916948 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 05:44:35.916959 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:44:35.916967 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:44:35.916976 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 05:44:35.916984 kernel: NET: Registered PF_XDP protocol family
Oct 13 05:44:35.917107 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 13 05:44:35.917229 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 13 05:44:35.917367 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 13 05:44:35.917514 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 13 05:44:35.917636 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 13 05:44:35.917767 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 13 05:44:35.917883 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 13 05:44:35.917992 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 13 05:44:35.918004 kernel: PCI: CLS 0 bytes, default 64
Oct 13 05:44:35.918013 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 13 05:44:35.918022 kernel: Initialise system trusted keyrings
Oct 13 05:44:35.918035 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 05:44:35.918043 kernel: Key type asymmetric registered
Oct 13 05:44:35.918051 kernel: Asymmetric key parser 'x509' registered
Oct 13 05:44:35.918059 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 13 05:44:35.918068 kernel: io scheduler mq-deadline registered
Oct 13 05:44:35.918076 kernel: io scheduler kyber registered
Oct 13 05:44:35.918084 kernel: io scheduler bfq registered
Oct 13 05:44:35.918095 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:44:35.918104 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 13 05:44:35.918113 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 13 05:44:35.918121 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 13 05:44:35.918129 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:44:35.918138 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:44:35.918146 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 13 05:44:35.918154 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 05:44:35.918163 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 05:44:35.918299 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 13 05:44:35.918321 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 13 05:44:35.918455 kernel: rtc_cmos 00:04: registered as rtc0
Oct 13 05:44:35.918598 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T05:44:35 UTC (1760334275)
Oct 13 05:44:35.918729 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 13 05:44:35.918741 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 13 05:44:35.918749 kernel: efifb: probing for efifb
Oct 13 05:44:35.918758 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 13 05:44:35.918771 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 13 05:44:35.918780 kernel: efifb: scrolling: redraw
Oct 13 05:44:35.918788 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 13 05:44:35.918796 kernel: Console: switching to colour frame buffer device 160x50
Oct 13 05:44:35.918805 kernel: fb0: EFI VGA frame buffer device
Oct 13 05:44:35.918816 kernel: pstore: Using crash dump compression: deflate
Oct 13 05:44:35.918828 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 13 05:44:35.918839 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:44:35.918851 kernel: Segment Routing with IPv6
Oct 13 05:44:35.918866 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:44:35.918880 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:44:35.918890 kernel: Key type dns_resolver registered
Oct 13 05:44:35.918899 kernel: IPI shorthand broadcast: enabled
Oct 13 05:44:35.918908 kernel: sched_clock: Marking stable (3970003005, 289065468)->(4407865467, -148796994)
Oct 13 05:44:35.918916 kernel: registered taskstats version 1
Oct 13 05:44:35.918924 kernel: Loading compiled-in X.509 certificates
Oct 13 05:44:35.918933 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: d8dbf4abead15098249886d373d42a3af4f50ccd'
Oct 13 05:44:35.918941 kernel: Demotion targets for Node 0: null
Oct 13 05:44:35.918952 kernel: Key type .fscrypt registered
Oct 13 
05:44:35.918960 kernel: Key type fscrypt-provisioning registered Oct 13 05:44:35.918969 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 13 05:44:35.918977 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:44:35.918985 kernel: ima: No architecture policies found Oct 13 05:44:35.918993 kernel: clk: Disabling unused clocks Oct 13 05:44:35.919002 kernel: Warning: unable to open an initial console. Oct 13 05:44:35.919010 kernel: Freeing unused kernel image (initmem) memory: 54096K Oct 13 05:44:35.919018 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:44:35.919029 kernel: Freeing unused kernel image (rodata/data gap) memory: 240K Oct 13 05:44:35.919037 kernel: Run /init as init process Oct 13 05:44:35.919046 kernel: with arguments: Oct 13 05:44:35.919054 kernel: /init Oct 13 05:44:35.919062 kernel: with environment: Oct 13 05:44:35.919070 kernel: HOME=/ Oct 13 05:44:35.919078 kernel: TERM=linux Oct 13 05:44:35.919086 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:44:35.919101 systemd[1]: Successfully made /usr/ read-only. Oct 13 05:44:35.919116 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:44:35.919126 systemd[1]: Detected virtualization kvm. Oct 13 05:44:35.919134 systemd[1]: Detected architecture x86-64. Oct 13 05:44:35.919143 systemd[1]: Running in initrd. Oct 13 05:44:35.919151 systemd[1]: No hostname configured, using default hostname. Oct 13 05:44:35.919160 systemd[1]: Hostname set to . Oct 13 05:44:35.919168 systemd[1]: Initializing machine ID from VM UUID. Oct 13 05:44:35.919179 systemd[1]: Queued start job for default target initrd.target. 
Oct 13 05:44:35.919188 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:44:35.919196 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:44:35.919206 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:44:35.919214 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:44:35.919223 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:44:35.919232 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:44:35.919244 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 13 05:44:35.919253 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 13 05:44:35.919262 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:44:35.919271 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:44:35.919279 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:44:35.919288 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:44:35.919296 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:44:35.919305 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:44:35.919323 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:44:35.919332 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:44:35.919341 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:44:35.919369 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Oct 13 05:44:35.919378 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:44:35.919386 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:44:35.919395 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:44:35.919403 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:44:35.919412 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:44:35.919424 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:44:35.919432 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:44:35.919441 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:44:35.919450 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:44:35.919459 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:44:35.919467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:44:35.919476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:44:35.919485 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:44:35.919498 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:44:35.919538 systemd-journald[220]: Collecting audit messages is disabled. Oct 13 05:44:35.919563 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:44:35.919572 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:44:35.919582 systemd-journald[220]: Journal started Oct 13 05:44:35.919601 systemd-journald[220]: Runtime Journal (/run/log/journal/829652d8038d480f9377e54cd7f527c4) is 6M, max 48.4M, 42.4M free. 
Oct 13 05:44:35.907347 systemd-modules-load[222]: Inserted module 'overlay' Oct 13 05:44:35.928680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:44:35.933406 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:44:35.937957 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:44:35.943444 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:44:35.943469 kernel: Bridge firewalling registered Oct 13 05:44:35.942730 systemd-modules-load[222]: Inserted module 'br_netfilter' Oct 13 05:44:35.943675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:44:35.954542 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:44:35.956167 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:44:35.958735 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:44:35.965590 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:44:35.975331 systemd-tmpfiles[241]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:44:35.979104 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:44:35.982858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:44:35.985322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:44:35.993250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:44:35.996522 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 13 05:44:36.000780 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:44:36.026211 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039 Oct 13 05:44:36.045749 systemd-resolved[260]: Positive Trust Anchors: Oct 13 05:44:36.045775 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:44:36.045805 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:44:36.048880 systemd-resolved[260]: Defaulting to hostname 'linux'. Oct 13 05:44:36.050377 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:44:36.061200 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:44:36.183445 kernel: SCSI subsystem initialized Oct 13 05:44:36.193398 kernel: Loading iSCSI transport class v2.0-870. 
Oct 13 05:44:36.205399 kernel: iscsi: registered transport (tcp) Oct 13 05:44:36.233851 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:44:36.233952 kernel: QLogic iSCSI HBA Driver Oct 13 05:44:36.262480 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:44:36.283674 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:44:36.289270 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:44:36.387864 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:44:36.390543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:44:36.454400 kernel: raid6: avx2x4 gen() 30209 MB/s Oct 13 05:44:36.494390 kernel: raid6: avx2x2 gen() 31400 MB/s Oct 13 05:44:36.512085 kernel: raid6: avx2x1 gen() 25853 MB/s Oct 13 05:44:36.512111 kernel: raid6: using algorithm avx2x2 gen() 31400 MB/s Oct 13 05:44:36.530108 kernel: raid6: .... xor() 19895 MB/s, rmw enabled Oct 13 05:44:36.530144 kernel: raid6: using avx2x2 recovery algorithm Oct 13 05:44:36.553400 kernel: xor: automatically using best checksumming function avx Oct 13 05:44:36.827426 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:44:36.836713 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:44:36.842035 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:44:36.881420 systemd-udevd[473]: Using default interface naming scheme 'v255'. Oct 13 05:44:36.887462 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:44:36.893848 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:44:36.931418 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Oct 13 05:44:36.962435 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 13 05:44:36.964534 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:44:37.066330 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:44:37.077170 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:44:37.136131 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 13 05:44:37.136808 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:44:37.144422 kernel: libata version 3.00 loaded. Oct 13 05:44:37.150896 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 13 05:44:37.171938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:44:37.182007 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 13 05:44:37.182044 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 05:44:37.182060 kernel: GPT:9289727 != 19775487 Oct 13 05:44:37.182074 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 05:44:37.182088 kernel: GPT:9289727 != 19775487 Oct 13 05:44:37.182101 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 05:44:37.182115 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:44:37.182129 kernel: AES CTR mode by8 optimization enabled Oct 13 05:44:37.172134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:44:37.182453 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:44:37.184205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:44:37.191220 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 13 05:44:37.194981 kernel: ahci 0000:00:1f.2: version 3.0 Oct 13 05:44:37.195217 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 13 05:44:37.204585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 13 05:44:37.216284 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 13 05:44:37.216555 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 13 05:44:37.216743 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 13 05:44:37.204767 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:44:37.286477 kernel: scsi host0: ahci Oct 13 05:44:37.286791 kernel: scsi host1: ahci Oct 13 05:44:37.288216 kernel: scsi host2: ahci Oct 13 05:44:37.289833 kernel: scsi host3: ahci Oct 13 05:44:37.290850 kernel: scsi host4: ahci Oct 13 05:44:37.296662 kernel: scsi host5: ahci Oct 13 05:44:37.296867 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Oct 13 05:44:37.296880 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Oct 13 05:44:37.296892 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Oct 13 05:44:37.300059 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Oct 13 05:44:37.300128 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Oct 13 05:44:37.303335 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Oct 13 05:44:37.309071 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 05:44:37.319549 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 05:44:37.329901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:44:37.344484 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 05:44:37.345136 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Oct 13 05:44:37.350047 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:44:37.355145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:44:37.391393 disk-uuid[631]: Primary Header is updated. Oct 13 05:44:37.391393 disk-uuid[631]: Secondary Entries is updated. Oct 13 05:44:37.391393 disk-uuid[631]: Secondary Header is updated. Oct 13 05:44:37.397393 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:44:37.398544 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:44:37.404387 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:44:37.614778 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 13 05:44:37.614868 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 13 05:44:37.614884 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 13 05:44:37.616400 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 13 05:44:37.619386 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 13 05:44:37.619411 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 13 05:44:37.620387 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:44:37.622561 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 13 05:44:37.622577 kernel: ata3.00: applying bridge limits Oct 13 05:44:37.624742 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:44:37.624764 kernel: ata3.00: configured for UDMA/100 Oct 13 05:44:37.626397 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 13 05:44:37.679137 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 13 05:44:37.679554 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:44:37.697386 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 13 05:44:37.997964 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Oct 13 05:44:38.000568 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:44:38.003716 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:44:38.005671 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:44:38.010510 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:44:38.049786 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:44:38.406396 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:44:38.406470 disk-uuid[634]: The operation has completed successfully. Oct 13 05:44:38.435277 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:44:38.435469 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:44:38.479491 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 13 05:44:38.504040 sh[665]: Success Oct 13 05:44:38.522588 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:44:38.522623 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:44:38.524379 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:44:38.535382 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Oct 13 05:44:38.571562 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:44:38.577316 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 13 05:44:38.591532 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Oct 13 05:44:38.600857 kernel: BTRFS: device fsid c8746500-26f5-4ec1-9da8-aef51ec7db92 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (677) Oct 13 05:44:38.600888 kernel: BTRFS info (device dm-0): first mount of filesystem c8746500-26f5-4ec1-9da8-aef51ec7db92 Oct 13 05:44:38.600899 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:44:38.608333 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:44:38.608373 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:44:38.610015 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 13 05:44:38.612305 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:44:38.615165 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:44:38.616552 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:44:38.619751 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:44:38.645380 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Oct 13 05:44:38.648583 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:44:38.648619 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:44:38.652192 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:44:38.652220 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:44:38.657394 kernel: BTRFS info (device vda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:44:38.658530 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:44:38.660809 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 13 05:44:38.799697 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:44:38.808621 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:44:38.830537 ignition[759]: Ignition 2.22.0 Oct 13 05:44:38.830551 ignition[759]: Stage: fetch-offline Oct 13 05:44:38.830603 ignition[759]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:44:38.830614 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:44:38.830714 ignition[759]: parsed url from cmdline: "" Oct 13 05:44:38.830718 ignition[759]: no config URL provided Oct 13 05:44:38.830723 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:44:38.830733 ignition[759]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:44:38.830757 ignition[759]: op(1): [started] loading QEMU firmware config module Oct 13 05:44:38.830763 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 05:44:38.848638 ignition[759]: op(1): [finished] loading QEMU firmware config module Oct 13 05:44:38.875884 systemd-networkd[851]: lo: Link UP Oct 13 05:44:38.875894 systemd-networkd[851]: lo: Gained carrier Oct 13 05:44:38.878068 systemd-networkd[851]: Enumeration completed Oct 13 05:44:38.878215 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:44:38.878592 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:44:38.878602 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:44:38.879816 systemd-networkd[851]: eth0: Link UP Oct 13 05:44:38.880016 systemd-networkd[851]: eth0: Gained carrier Oct 13 05:44:38.880028 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 13 05:44:38.880990 systemd[1]: Reached target network.target - Network. Oct 13 05:44:38.906429 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:44:38.947389 systemd-resolved[260]: Detected conflict on linux IN A 10.0.0.69 Oct 13 05:44:38.947407 systemd-resolved[260]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Oct 13 05:44:38.957281 ignition[759]: parsing config with SHA512: bfad4e6f6d1d1aa43b9639c7ed16469dc15e1d5b89ae345b15cccc3a9f44f120e15f24b9b8a80c8248042d637d359a664b7cdae5ffc9760daf8cd9eb13d69d9f Oct 13 05:44:38.962851 unknown[759]: fetched base config from "system" Oct 13 05:44:38.962862 unknown[759]: fetched user config from "qemu" Oct 13 05:44:38.963301 ignition[759]: fetch-offline: fetch-offline passed Oct 13 05:44:38.963412 ignition[759]: Ignition finished successfully Oct 13 05:44:38.967211 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:44:38.970209 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 05:44:38.971856 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 05:44:39.021321 ignition[859]: Ignition 2.22.0 Oct 13 05:44:39.021334 ignition[859]: Stage: kargs Oct 13 05:44:39.021485 ignition[859]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:44:39.021497 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:44:39.022373 ignition[859]: kargs: kargs passed Oct 13 05:44:39.022426 ignition[859]: Ignition finished successfully Oct 13 05:44:39.028565 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:44:39.031304 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 13 05:44:39.063994 ignition[867]: Ignition 2.22.0 Oct 13 05:44:39.064006 ignition[867]: Stage: disks Oct 13 05:44:39.064133 ignition[867]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:44:39.064143 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:44:39.065126 ignition[867]: disks: disks passed Oct 13 05:44:39.069077 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:44:39.065191 ignition[867]: Ignition finished successfully Oct 13 05:44:39.072242 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:44:39.075336 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:44:39.078509 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:44:39.081860 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:44:39.085435 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:44:39.092005 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:44:39.115868 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Oct 13 05:44:39.123682 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:44:39.128212 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:44:39.239396 kernel: EXT4-fs (vda9): mounted filesystem 8b520359-9763-45f3-b7f7-db1e9fbc640d r/w with ordered data mode. Quota mode: none. Oct 13 05:44:39.240365 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:44:39.241697 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:44:39.245036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:44:39.248813 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:44:39.251117 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Oct 13 05:44:39.251161 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:44:39.251187 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:44:39.265091 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:44:39.269666 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Oct 13 05:44:39.269689 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:44:39.267169 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:44:39.276273 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:44:39.280013 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:44:39.280035 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:44:39.282249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:44:39.305373 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:44:39.309591 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:44:39.313518 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:44:39.317887 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:44:39.403460 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:44:39.407934 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:44:39.410046 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:44:39.431391 kernel: BTRFS info (device vda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:44:39.442527 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 13 05:44:39.460380 ignition[1000]: INFO : Ignition 2.22.0
Oct 13 05:44:39.460380 ignition[1000]: INFO : Stage: mount
Oct 13 05:44:39.463131 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:44:39.463131 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:44:39.463131 ignition[1000]: INFO : mount: mount passed
Oct 13 05:44:39.463131 ignition[1000]: INFO : Ignition finished successfully
Oct 13 05:44:39.464058 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 05:44:39.467711 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 05:44:39.598048 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 05:44:39.599626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:44:39.633382 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012)
Oct 13 05:44:39.636486 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:44:39.636508 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:44:39.640072 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:44:39.640098 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:44:39.641748 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 05:44:39.676714 ignition[1029]: INFO : Ignition 2.22.0
Oct 13 05:44:39.676714 ignition[1029]: INFO : Stage: files
Oct 13 05:44:39.679635 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:44:39.679635 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:44:39.679635 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 05:44:39.679635 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 05:44:39.679635 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 05:44:39.690383 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 05:44:39.692742 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 05:44:39.695467 unknown[1029]: wrote ssh authorized keys file for user: core
Oct 13 05:44:39.697348 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 05:44:39.700987 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 13 05:44:39.704473 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 13 05:44:39.763188 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 05:44:39.820479 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 13 05:44:39.823941 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 05:44:39.823941 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 05:44:39.823941 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:44:39.835557 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:44:39.835557 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:44:39.835557 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:44:39.835557 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:44:39.835557 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:44:39.835557 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:44:39.835557 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:44:39.835557 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:44:39.862436 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:44:39.862436 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:44:39.862436 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 13 05:44:40.124866 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 05:44:40.582569 systemd-networkd[851]: eth0: Gained IPv6LL
Oct 13 05:44:40.831767 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:44:40.831767 ignition[1029]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 13 05:44:40.837883 ignition[1029]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:44:40.841197 ignition[1029]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:44:40.841197 ignition[1029]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 13 05:44:40.841197 ignition[1029]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 13 05:44:40.841197 ignition[1029]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 05:44:40.841197 ignition[1029]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 05:44:40.841197 ignition[1029]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 13 05:44:40.841197 ignition[1029]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 13 05:44:40.861753 ignition[1029]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 05:44:40.868319 ignition[1029]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 05:44:40.870949 ignition[1029]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 13 05:44:40.870949 ignition[1029]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 13 05:44:40.870949 ignition[1029]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 13 05:44:40.870949 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:44:40.870949 ignition[1029]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:44:40.870949 ignition[1029]: INFO : files: files passed
Oct 13 05:44:40.870949 ignition[1029]: INFO : Ignition finished successfully
Oct 13 05:44:40.874753 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 05:44:40.878129 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 05:44:40.885510 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 05:44:40.901809 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 05:44:40.901939 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 05:44:40.908225 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 13 05:44:40.912511 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:44:40.912511 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:44:40.917654 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:44:40.915208 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:44:40.920113 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 05:44:40.921858 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 05:44:40.992251 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 05:44:40.992442 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 05:44:40.993603 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 05:44:40.994194 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 05:44:40.995377 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 05:44:40.996346 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 05:44:41.019634 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:44:41.022591 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 05:44:41.046757 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:44:41.050482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:44:41.051251 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 05:44:41.054923 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 05:44:41.055070 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:44:41.060510 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 05:44:41.061327 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 05:44:41.066722 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 05:44:41.067840 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:44:41.072397 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 05:44:41.075935 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:44:41.079491 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 05:44:41.083823 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:44:41.084964 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 05:44:41.089955 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 05:44:41.090874 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 05:44:41.091366 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 05:44:41.091518 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:44:41.100757 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:44:41.101875 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:44:41.106176 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 05:44:41.109216 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:44:41.110129 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 05:44:41.110257 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:44:41.118028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 05:44:41.118178 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:44:41.119024 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 05:44:41.123223 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 05:44:41.129441 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:44:41.130208 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 05:44:41.134431 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 05:44:41.137185 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 05:44:41.137276 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:44:41.140008 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 05:44:41.140104 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:44:41.142831 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 05:44:41.142947 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:44:41.145829 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 05:44:41.145932 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 05:44:41.153496 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 05:44:41.154179 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 05:44:41.154290 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:44:41.166581 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 05:44:41.168022 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 05:44:41.168149 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:44:41.171469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 05:44:41.171631 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:44:41.180856 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 05:44:41.180985 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 05:44:41.196185 ignition[1086]: INFO : Ignition 2.22.0
Oct 13 05:44:41.196185 ignition[1086]: INFO : Stage: umount
Oct 13 05:44:41.199106 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:44:41.199106 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:44:41.199106 ignition[1086]: INFO : umount: umount passed
Oct 13 05:44:41.199106 ignition[1086]: INFO : Ignition finished successfully
Oct 13 05:44:41.200513 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 05:44:41.201540 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 05:44:41.205889 systemd[1]: Stopped target network.target - Network.
Oct 13 05:44:41.207151 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 05:44:41.207221 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 05:44:41.211329 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 05:44:41.211397 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 05:44:41.212199 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 05:44:41.212254 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 05:44:41.217034 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 05:44:41.217081 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 05:44:41.220428 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 05:44:41.221131 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 05:44:41.229526 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 05:44:41.229658 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 05:44:41.236927 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 13 05:44:41.237590 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 05:44:41.237672 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:44:41.241736 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 05:44:41.244462 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 05:44:41.250889 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 05:44:41.251050 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 05:44:41.256726 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 13 05:44:41.256914 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 05:44:41.257817 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 05:44:41.257857 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:44:41.259028 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 05:44:41.264423 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 05:44:41.264504 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:44:41.264975 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 05:44:41.265028 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:44:41.273901 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 05:44:41.273951 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:44:41.274967 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:44:41.283250 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 05:44:41.296614 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 05:44:41.301556 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:44:41.304600 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 05:44:41.304675 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:44:41.306238 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 05:44:41.306287 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:44:41.312071 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 05:44:41.312124 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:44:41.316723 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 05:44:41.316782 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:44:41.321199 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 05:44:41.321259 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:44:41.333250 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 05:44:41.335024 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 05:44:41.335092 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:44:41.341653 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 05:44:41.341705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:44:41.347366 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:44:41.347422 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:44:41.353313 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 05:44:41.353457 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 05:44:41.355325 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 05:44:41.355448 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 05:44:41.636183 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 05:44:41.636340 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 05:44:41.637522 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 05:44:41.638067 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 05:44:41.638123 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 05:44:41.648796 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 05:44:41.673328 systemd[1]: Switching root.
Oct 13 05:44:41.714771 systemd-journald[220]: Journal stopped
Oct 13 05:44:43.151221 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Oct 13 05:44:43.151299 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 05:44:43.151313 kernel: SELinux: policy capability open_perms=1
Oct 13 05:44:43.151328 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 05:44:43.151339 kernel: SELinux: policy capability always_check_network=0
Oct 13 05:44:43.151367 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 05:44:43.151379 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 05:44:43.151398 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 05:44:43.151411 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 05:44:43.151431 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 05:44:43.151450 kernel: audit: type=1403 audit(1760334282.213:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 05:44:43.151471 systemd[1]: Successfully loaded SELinux policy in 62.364ms.
Oct 13 05:44:43.151489 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.819ms.
Oct 13 05:44:43.151504 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:44:43.151518 systemd[1]: Detected virtualization kvm.
Oct 13 05:44:43.151532 systemd[1]: Detected architecture x86-64.
Oct 13 05:44:43.151544 systemd[1]: Detected first boot.
Oct 13 05:44:43.151556 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 05:44:43.151568 zram_generator::config[1131]: No configuration found.
Oct 13 05:44:43.151581 kernel: Guest personality initialized and is inactive
Oct 13 05:44:43.151600 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 13 05:44:43.151611 kernel: Initialized host personality
Oct 13 05:44:43.151622 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 05:44:43.151634 systemd[1]: Populated /etc with preset unit settings.
Oct 13 05:44:43.151646 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 13 05:44:43.151658 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 05:44:43.151670 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 05:44:43.151682 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:44:43.151695 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 05:44:43.151709 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 05:44:43.151721 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 05:44:43.151832 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 05:44:43.151844 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 05:44:43.151856 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 05:44:43.151874 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 05:44:43.151886 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 05:44:43.151898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:44:43.151910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:44:43.151925 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 05:44:43.151937 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 05:44:43.151949 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 05:44:43.151962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:44:43.151973 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 13 05:44:43.151986 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:44:43.151998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:44:43.152012 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 05:44:43.152024 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 05:44:43.152035 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:44:43.152052 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 05:44:43.152063 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:44:43.152076 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:44:43.152088 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:44:43.152114 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:44:43.152127 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 05:44:43.152139 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 05:44:43.152155 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 05:44:43.152167 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:44:43.152179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:44:43.152191 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:44:43.152203 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 05:44:43.152215 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 05:44:43.152228 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 05:44:43.152240 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 05:44:43.152251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:44:43.152266 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 05:44:43.152278 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 05:44:43.152290 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 05:44:43.152302 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 05:44:43.152315 systemd[1]: Reached target machines.target - Containers.
Oct 13 05:44:43.152328 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 05:44:43.152340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:44:43.152383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:44:43.152399 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 05:44:43.152419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:44:43.152430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:44:43.152443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:44:43.152454 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 05:44:43.152466 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:44:43.152479 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 05:44:43.152491 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 05:44:43.152506 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 05:44:43.152518 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 05:44:43.152530 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 05:44:43.152543 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:44:43.152555 kernel: loop: module loaded
Oct 13 05:44:43.152566 kernel: fuse: init (API version 7.41)
Oct 13 05:44:43.152578 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:44:43.152591 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:44:43.152603 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:44:43.152617 kernel: ACPI: bus type drm_connector registered
Oct 13 05:44:43.152628 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 05:44:43.152641 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 05:44:43.152653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:44:43.152665 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 13 05:44:43.152687 systemd[1]: Stopped verity-setup.service.
Oct 13 05:44:43.152699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:44:43.152711 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 05:44:43.152724 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 05:44:43.152760 systemd-journald[1202]: Collecting audit messages is disabled.
Oct 13 05:44:43.152790 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 05:44:43.152803 systemd-journald[1202]: Journal started
Oct 13 05:44:43.152825 systemd-journald[1202]: Runtime Journal (/run/log/journal/829652d8038d480f9377e54cd7f527c4) is 6M, max 48.4M, 42.4M free.
Oct 13 05:44:42.782719 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 05:44:42.804791 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 13 05:44:42.805334 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 05:44:43.158461 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:44:43.161083 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 05:44:43.163416 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 13 05:44:43.165723 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:44:43.167981 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:44:43.170736 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:44:43.171037 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:44:43.173586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:44:43.173887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:44:43.176411 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:44:43.176779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:44:43.178879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:44:43.179156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:44:43.181481 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:44:43.181708 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:44:43.184028 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:44:43.184321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:44:43.186588 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:44:43.188990 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:44:43.191537 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:44:43.193958 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:44:43.196767 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 05:44:43.214879 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Oct 13 05:44:43.218346 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:44:43.221424 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:44:43.223443 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:44:43.223545 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:44:43.226257 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:44:43.230382 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 05:44:43.232450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:44:43.234975 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:44:43.238438 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:44:43.240457 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:44:43.241698 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:44:43.243441 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:44:43.246455 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:44:43.254559 systemd-journald[1202]: Time spent on flushing to /var/log/journal/829652d8038d480f9377e54cd7f527c4 is 14.166ms for 1068 entries. Oct 13 05:44:43.254559 systemd-journald[1202]: System Journal (/var/log/journal/829652d8038d480f9377e54cd7f527c4) is 8M, max 195.6M, 187.6M free. Oct 13 05:44:43.286650 systemd-journald[1202]: Received client request to flush runtime journal. 
Oct 13 05:44:43.286697 kernel: loop0: detected capacity change from 0 to 128016 Oct 13 05:44:43.251476 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:44:43.256185 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:44:43.260057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:44:43.263286 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:44:43.265642 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:44:43.273070 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:44:43.275602 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:44:43.281345 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:44:43.289364 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:44:43.292673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:44:43.305379 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:44:43.310613 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:44:43.314465 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:44:43.328716 kernel: loop1: detected capacity change from 0 to 110984 Oct 13 05:44:43.332562 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:44:43.352166 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Oct 13 05:44:43.352188 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Oct 13 05:44:43.354465 kernel: loop2: detected capacity change from 0 to 219144 Oct 13 05:44:43.357329 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 13 05:44:43.385396 kernel: loop3: detected capacity change from 0 to 128016 Oct 13 05:44:43.450384 kernel: loop4: detected capacity change from 0 to 110984 Oct 13 05:44:43.461419 kernel: loop5: detected capacity change from 0 to 219144 Oct 13 05:44:43.469864 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 13 05:44:43.470485 (sd-merge)[1272]: Merged extensions into '/usr'. Oct 13 05:44:43.477670 systemd[1]: Reload requested from client PID 1250 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:44:43.477689 systemd[1]: Reloading... Oct 13 05:44:43.648383 zram_generator::config[1301]: No configuration found. Oct 13 05:44:43.993449 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:44:43.994635 systemd[1]: Reloading finished in 516 ms. Oct 13 05:44:44.026023 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:44:44.033757 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:44:44.040749 systemd[1]: Starting ensure-sysext.service... Oct 13 05:44:44.043375 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:44:44.056968 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:44:44.063280 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:44:44.063296 systemd[1]: Reloading... Oct 13 05:44:44.087335 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:44:44.087390 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:44:44.087776 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Oct 13 05:44:44.088100 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:44:44.089177 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:44:44.089482 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Oct 13 05:44:44.089557 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Oct 13 05:44:44.094563 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:44:44.094576 systemd-tmpfiles[1335]: Skipping /boot Oct 13 05:44:44.111314 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:44:44.111476 systemd-tmpfiles[1335]: Skipping /boot Oct 13 05:44:44.137468 zram_generator::config[1372]: No configuration found. Oct 13 05:44:44.310400 systemd[1]: Reloading finished in 246 ms. Oct 13 05:44:44.334695 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:44:44.353104 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:44:44.362936 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:44:44.366117 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:44:44.385930 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:44:44.391963 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:44:44.396574 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:44:44.401217 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:44:44.407047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 13 05:44:44.407622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:44:44.409209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:44:44.412556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:44:44.417570 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:44:44.419546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:44:44.419652 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:44:44.426648 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:44:44.429157 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:44:44.430554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:44:44.431234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:44:44.435029 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Oct 13 05:44:44.435615 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:44:44.438306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:44:44.438663 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:44:44.441231 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:44:44.442630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Oct 13 05:44:44.454931 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:44:44.455162 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:44:44.457761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:44:44.461567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:44:44.467766 augenrules[1437]: No rules Oct 13 05:44:44.470894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:44:44.472693 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:44:44.472822 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:44:44.475632 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:44:44.477482 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:44:44.478600 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:44:44.481376 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:44:44.481646 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:44:44.484095 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:44:44.486884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:44:44.487131 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:44:44.489913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 13 05:44:44.490156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:44:44.493098 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:44:44.493319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:44:44.495688 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:44:44.517665 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:44:44.520632 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:44:44.536043 systemd[1]: Finished ensure-sysext.service. Oct 13 05:44:44.543763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:44:44.545146 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:44:44.546825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:44:44.548029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:44:44.558481 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:44:44.561969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:44:44.567528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:44:44.570581 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:44:44.570631 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:44:44.574124 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 13 05:44:44.579386 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:44:44.581420 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:44:44.581450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:44:44.582043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:44:44.589760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:44:44.594533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:44:44.597338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:44:44.606763 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:44:44.607746 augenrules[1485]: /sbin/augenrules: No change Oct 13 05:44:44.611726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:44:44.620291 augenrules[1515]: No rules Oct 13 05:44:44.621500 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:44:44.621942 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:44:44.624116 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:44:44.624444 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:44:44.630063 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:44:44.632773 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:44:44.632909 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 13 05:44:44.670958 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:44:44.676666 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:44:44.694462 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:44:44.706482 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:44:44.713431 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 13 05:44:44.720446 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:44:44.738493 systemd-resolved[1405]: Positive Trust Anchors: Oct 13 05:44:44.738509 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:44:44.738540 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:44:44.745250 systemd-resolved[1405]: Defaulting to hostname 'linux'. Oct 13 05:44:44.747299 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:44:44.749324 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 13 05:44:44.756988 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 13 05:44:44.757259 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 13 05:44:44.760378 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 13 05:44:44.843710 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:44:44.868827 systemd-networkd[1491]: lo: Link UP Oct 13 05:44:44.868841 systemd-networkd[1491]: lo: Gained carrier Oct 13 05:44:44.873130 systemd-networkd[1491]: Enumeration completed Oct 13 05:44:44.873278 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:44:44.873737 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:44:44.873752 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:44:44.874509 systemd-networkd[1491]: eth0: Link UP Oct 13 05:44:44.874737 systemd-networkd[1491]: eth0: Gained carrier Oct 13 05:44:44.874763 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:44:44.876116 systemd[1]: Reached target network.target - Network. Oct 13 05:44:44.917464 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:44:44.954443 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:44:44.958165 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:44:44.961529 systemd-networkd[1491]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:44:44.961728 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:44:44.962276 systemd-timesyncd[1495]: Network configuration changed, trying to establish connection. 
Oct 13 05:44:45.732598 systemd-timesyncd[1495]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 05:44:45.732647 systemd-timesyncd[1495]: Initial clock synchronization to Mon 2025-10-13 05:44:45.732516 UTC. Oct 13 05:44:45.734634 systemd-resolved[1405]: Clock change detected. Flushing caches. Oct 13 05:44:45.753073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:44:45.755426 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:44:45.757564 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:44:45.759914 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:44:45.763966 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:44:45.766495 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:44:45.768527 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:44:45.770951 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:44:45.773963 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:44:45.774012 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:44:45.777120 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:44:45.780339 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:44:45.786414 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:44:45.793524 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:44:45.798169 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Oct 13 05:44:45.800262 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:44:45.809918 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:44:45.813434 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:44:45.818050 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:44:45.820508 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:44:45.824864 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:44:45.826703 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:44:45.828554 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:44:45.828606 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:44:45.834122 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:44:45.837516 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:44:45.841815 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:44:45.844796 kernel: kvm_amd: TSC scaling supported Oct 13 05:44:45.844832 kernel: kvm_amd: Nested Virtualization enabled Oct 13 05:44:45.844846 kernel: kvm_amd: Nested Paging enabled Oct 13 05:44:45.844875 kernel: kvm_amd: LBR virtualization supported Oct 13 05:44:45.849769 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 13 05:44:45.849819 kernel: kvm_amd: Virtual GIF supported Oct 13 05:44:45.854864 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:44:45.859198 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Oct 13 05:44:45.861093 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:44:45.863510 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:44:45.866857 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:44:45.869506 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:44:45.872981 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:44:45.876227 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:44:45.884989 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:44:45.885276 jq[1563]: false Oct 13 05:44:45.887736 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:44:45.888331 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:44:45.889217 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:44:45.894868 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:44:45.897073 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache Oct 13 05:44:45.898192 oslogin_cache_refresh[1565]: Refreshing passwd entry cache Oct 13 05:44:45.946665 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting Oct 13 05:44:45.946665 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Oct 13 05:44:45.946665 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache Oct 13 05:44:45.946263 oslogin_cache_refresh[1565]: Failure getting users, quitting Oct 13 05:44:45.946283 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:44:45.946328 oslogin_cache_refresh[1565]: Refreshing group entry cache Oct 13 05:44:45.948661 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:44:45.952619 extend-filesystems[1564]: Found /dev/vda6 Oct 13 05:44:45.953012 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:44:45.954488 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:44:45.955083 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:44:45.955348 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:44:45.955581 jq[1576]: true Oct 13 05:44:45.957276 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting Oct 13 05:44:45.957269 oslogin_cache_refresh[1565]: Failure getting groups, quitting Oct 13 05:44:45.957336 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:44:45.957287 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:44:45.959095 extend-filesystems[1564]: Found /dev/vda9 Oct 13 05:44:45.960669 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:44:45.961513 extend-filesystems[1564]: Checking size of /dev/vda9 Oct 13 05:44:45.966218 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:44:45.969680 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Oct 13 05:44:45.969992 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:44:45.995880 update_engine[1575]: I20251013 05:44:45.990033 1575 main.cc:92] Flatcar Update Engine starting Oct 13 05:44:45.987101 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:44:45.996677 extend-filesystems[1564]: Resized partition /dev/vda9 Oct 13 05:44:45.999240 extend-filesystems[1603]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:44:46.001784 jq[1590]: true Oct 13 05:44:46.006772 kernel: EDAC MC: Ver: 3.0.0 Oct 13 05:44:46.011256 tar[1589]: linux-amd64/LICENSE Oct 13 05:44:46.011256 tar[1589]: linux-amd64/helm Oct 13 05:44:46.042775 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 13 05:44:46.055464 dbus-daemon[1560]: [system] SELinux support is enabled Oct 13 05:44:46.055700 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:44:46.063769 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:44:46.063799 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:44:46.068128 systemd-logind[1573]: New seat seat0. Oct 13 05:44:46.070992 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:44:46.071043 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:44:46.075491 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Oct 13 05:44:46.084194 update_engine[1575]: I20251013 05:44:46.079183 1575 update_check_scheduler.cc:74] Next update check in 6m12s Oct 13 05:44:46.075526 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:44:46.078647 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:44:46.081653 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:44:46.091087 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:44:46.119204 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 13 05:44:46.149325 extend-filesystems[1603]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 05:44:46.149325 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 05:44:46.149325 extend-filesystems[1603]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 13 05:44:46.155580 extend-filesystems[1564]: Resized filesystem in /dev/vda9 Oct 13 05:44:46.150505 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:44:46.154533 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:44:46.156850 bash[1623]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:44:46.161807 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:44:46.166335 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 05:44:46.180065 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:44:46.280096 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:44:46.388018 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:44:46.395104 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Oct 13 05:44:46.405589 containerd[1592]: time="2025-10-13T05:44:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:44:46.406300 containerd[1592]: time="2025-10-13T05:44:46.406264089Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:44:46.414042 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:44:46.414416 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:44:46.419102 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:44:46.421872 containerd[1592]: time="2025-10-13T05:44:46.421816500Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.855µs" Oct 13 05:44:46.421872 containerd[1592]: time="2025-10-13T05:44:46.421853028Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:44:46.421929 containerd[1592]: time="2025-10-13T05:44:46.421885369Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:44:46.422119 containerd[1592]: time="2025-10-13T05:44:46.422089331Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:44:46.422153 containerd[1592]: time="2025-10-13T05:44:46.422128214Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:44:46.422180 containerd[1592]: time="2025-10-13T05:44:46.422159543Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:44:46.422266 containerd[1592]: time="2025-10-13T05:44:46.422240866Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:44:46.422266 containerd[1592]: time="2025-10-13T05:44:46.422259360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:44:46.422605 containerd[1592]: time="2025-10-13T05:44:46.422576505Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:44:46.422605 containerd[1592]: time="2025-10-13T05:44:46.422598075Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:44:46.422664 containerd[1592]: time="2025-10-13T05:44:46.422610489Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:44:46.422664 containerd[1592]: time="2025-10-13T05:44:46.422618764Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:44:46.422770 containerd[1592]: time="2025-10-13T05:44:46.422733239Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:44:46.423068 containerd[1592]: time="2025-10-13T05:44:46.423040234Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:44:46.423099 containerd[1592]: time="2025-10-13T05:44:46.423084147Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:44:46.423099 containerd[1592]: time="2025-10-13T05:44:46.423094526Z" level=info msg="loading plugin" 
id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:44:46.423183 containerd[1592]: time="2025-10-13T05:44:46.423161111Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:44:46.423473 containerd[1592]: time="2025-10-13T05:44:46.423446266Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:44:46.423558 containerd[1592]: time="2025-10-13T05:44:46.423533509Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:44:46.429438 containerd[1592]: time="2025-10-13T05:44:46.429403506Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:44:46.429483 containerd[1592]: time="2025-10-13T05:44:46.429461574Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:44:46.429483 containerd[1592]: time="2025-10-13T05:44:46.429477655Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:44:46.429522 containerd[1592]: time="2025-10-13T05:44:46.429489387Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:44:46.429522 containerd[1592]: time="2025-10-13T05:44:46.429502331Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:44:46.429522 containerd[1592]: time="2025-10-13T05:44:46.429514704Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:44:46.429588 containerd[1592]: time="2025-10-13T05:44:46.429529482Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:44:46.429588 containerd[1592]: time="2025-10-13T05:44:46.429573855Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Oct 13 05:44:46.429588 containerd[1592]: time="2025-10-13T05:44:46.429586629Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:44:46.429641 containerd[1592]: time="2025-10-13T05:44:46.429600184Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:44:46.429641 containerd[1592]: time="2025-10-13T05:44:46.429609903Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:44:46.429641 containerd[1592]: time="2025-10-13T05:44:46.429622286Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:44:46.429773 containerd[1592]: time="2025-10-13T05:44:46.429738914Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:44:46.429800 containerd[1592]: time="2025-10-13T05:44:46.429783698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:44:46.429820 containerd[1592]: time="2025-10-13T05:44:46.429798997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 05:44:46.429820 containerd[1592]: time="2025-10-13T05:44:46.429810298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:44:46.429857 containerd[1592]: time="2025-10-13T05:44:46.429820327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:44:46.429857 containerd[1592]: time="2025-10-13T05:44:46.429831418Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:44:46.429857 containerd[1592]: time="2025-10-13T05:44:46.429842459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:44:46.429857 
containerd[1592]: time="2025-10-13T05:44:46.429851986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 05:44:46.429944 containerd[1592]: time="2025-10-13T05:44:46.429865321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:44:46.429944 containerd[1592]: time="2025-10-13T05:44:46.429875390Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:44:46.429944 containerd[1592]: time="2025-10-13T05:44:46.429887924Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:44:46.430003 containerd[1592]: time="2025-10-13T05:44:46.429973023Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:44:46.430003 containerd[1592]: time="2025-10-13T05:44:46.429999603Z" level=info msg="Start snapshots syncer" Oct 13 05:44:46.430067 containerd[1592]: time="2025-10-13T05:44:46.430048134Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:44:46.434541 containerd[1592]: time="2025-10-13T05:44:46.433958556Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:44:46.434541 containerd[1592]: time="2025-10-13T05:44:46.434051020Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434230115Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434571986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434595180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434604908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434614606Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434647398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434659661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434669499Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434740663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434780437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:44:46.434801 containerd[1592]: time="2025-10-13T05:44:46.434792891Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:44:46.434998 containerd[1592]: time="2025-10-13T05:44:46.434882819Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:44:46.434998 containerd[1592]: time="2025-10-13T05:44:46.434899861Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:44:46.434998 containerd[1592]: time="2025-10-13T05:44:46.434989559Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:44:46.435054 containerd[1592]: time="2025-10-13T05:44:46.435000670Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:44:46.435054 containerd[1592]: time="2025-10-13T05:44:46.435010990Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:44:46.435054 containerd[1592]: time="2025-10-13T05:44:46.435034814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:44:46.435113 containerd[1592]: time="2025-10-13T05:44:46.435065632Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:44:46.435113 containerd[1592]: time="2025-10-13T05:44:46.435091150Z" level=info msg="runtime interface created" Oct 13 05:44:46.435113 containerd[1592]: time="2025-10-13T05:44:46.435096670Z" level=info msg="created NRI interface" Oct 13 05:44:46.435113 containerd[1592]: time="2025-10-13T05:44:46.435105056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:44:46.435184 containerd[1592]: time="2025-10-13T05:44:46.435146063Z" level=info msg="Connect containerd service" Oct 13 05:44:46.435184 containerd[1592]: time="2025-10-13T05:44:46.435170909Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:44:46.436410 
containerd[1592]: time="2025-10-13T05:44:46.436375358Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:44:46.490215 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:44:46.494196 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:44:46.497202 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:44:46.499464 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:44:46.576664 tar[1589]: linux-amd64/README.md Oct 13 05:44:46.631165 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:44:46.742379 containerd[1592]: time="2025-10-13T05:44:46.742216189Z" level=info msg="Start subscribing containerd event" Oct 13 05:44:46.742500 containerd[1592]: time="2025-10-13T05:44:46.742337396Z" level=info msg="Start recovering state" Oct 13 05:44:46.742682 containerd[1592]: time="2025-10-13T05:44:46.742653159Z" level=info msg="Start event monitor" Oct 13 05:44:46.742808 containerd[1592]: time="2025-10-13T05:44:46.742722849Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:44:46.742834 containerd[1592]: time="2025-10-13T05:44:46.742818949Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:44:46.742988 containerd[1592]: time="2025-10-13T05:44:46.742966586Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:44:46.742988 containerd[1592]: time="2025-10-13T05:44:46.742987435Z" level=info msg="Start streaming server" Oct 13 05:44:46.743034 containerd[1592]: time="2025-10-13T05:44:46.743010358Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:44:46.743034 containerd[1592]: time="2025-10-13T05:44:46.743020587Z" level=info msg="runtime interface starting up..." 
Oct 13 05:44:46.743034 containerd[1592]: time="2025-10-13T05:44:46.743029995Z" level=info msg="starting plugins..." Oct 13 05:44:46.743094 containerd[1592]: time="2025-10-13T05:44:46.743054591Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:44:46.743463 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:44:46.745324 containerd[1592]: time="2025-10-13T05:44:46.744195921Z" level=info msg="containerd successfully booted in 0.339513s" Oct 13 05:44:47.495999 systemd-networkd[1491]: eth0: Gained IPv6LL Oct 13 05:44:47.499581 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:44:47.502593 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:44:47.506380 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 05:44:47.509820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:44:47.513215 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:44:47.545983 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:44:47.548489 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 05:44:47.548820 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 05:44:47.551645 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 05:44:48.503705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:44:48.506293 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:44:48.508214 systemd[1]: Startup finished in 4.043s (kernel) + 6.545s (initrd) + 5.586s (userspace) = 16.175s. 
Oct 13 05:44:48.517197 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:44:49.172984 kubelet[1695]: E1013 05:44:49.172879 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:44:49.177301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:44:49.177513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:44:49.177948 systemd[1]: kubelet.service: Consumed 1.465s CPU time, 257.9M memory peak. Oct 13 05:44:49.872347 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:44:49.873550 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:40114.service - OpenSSH per-connection server daemon (10.0.0.1:40114). Oct 13 05:44:49.943115 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 40114 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:44:49.944790 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:44:49.951543 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:44:49.952637 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:44:49.959885 systemd-logind[1573]: New session 1 of user core. Oct 13 05:44:49.976441 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:44:49.979722 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Oct 13 05:44:49.997139 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:44:49.999598 systemd-logind[1573]: New session c1 of user core. Oct 13 05:44:50.148029 systemd[1713]: Queued start job for default target default.target. Oct 13 05:44:50.160045 systemd[1713]: Created slice app.slice - User Application Slice. Oct 13 05:44:50.160071 systemd[1713]: Reached target paths.target - Paths. Oct 13 05:44:50.160115 systemd[1713]: Reached target timers.target - Timers. Oct 13 05:44:50.161731 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:44:50.173023 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:44:50.173150 systemd[1713]: Reached target sockets.target - Sockets. Oct 13 05:44:50.173191 systemd[1713]: Reached target basic.target - Basic System. Oct 13 05:44:50.173231 systemd[1713]: Reached target default.target - Main User Target. Oct 13 05:44:50.173262 systemd[1713]: Startup finished in 165ms. Oct 13 05:44:50.173479 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:44:50.175067 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:44:50.238358 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:40130.service - OpenSSH per-connection server daemon (10.0.0.1:40130). Oct 13 05:44:50.289729 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 40130 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:44:50.291406 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:44:50.295783 systemd-logind[1573]: New session 2 of user core. Oct 13 05:44:50.305898 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 13 05:44:50.359603 sshd[1727]: Connection closed by 10.0.0.1 port 40130 Oct 13 05:44:50.359927 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Oct 13 05:44:50.369167 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:40130.service: Deactivated successfully. Oct 13 05:44:50.370855 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:44:50.371669 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:44:50.374124 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:40134.service - OpenSSH per-connection server daemon (10.0.0.1:40134). Oct 13 05:44:50.374813 systemd-logind[1573]: Removed session 2. Oct 13 05:44:50.426615 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 40134 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:44:50.427868 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:44:50.432603 systemd-logind[1573]: New session 3 of user core. Oct 13 05:44:50.443906 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:44:50.494178 sshd[1736]: Connection closed by 10.0.0.1 port 40134 Oct 13 05:44:50.494628 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Oct 13 05:44:50.513674 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:40134.service: Deactivated successfully. Oct 13 05:44:50.515773 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:44:50.516682 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:44:50.519317 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:40146.service - OpenSSH per-connection server daemon (10.0.0.1:40146). Oct 13 05:44:50.519937 systemd-logind[1573]: Removed session 3. 
Oct 13 05:44:50.580982 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 40146 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:44:50.582538 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:44:50.586721 systemd-logind[1573]: New session 4 of user core. Oct 13 05:44:50.597881 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:44:50.652695 sshd[1745]: Connection closed by 10.0.0.1 port 40146 Oct 13 05:44:50.653090 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Oct 13 05:44:50.663833 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:40146.service: Deactivated successfully. Oct 13 05:44:50.666718 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:44:50.667861 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:44:50.672129 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:40162.service - OpenSSH per-connection server daemon (10.0.0.1:40162). Oct 13 05:44:50.672872 systemd-logind[1573]: Removed session 4. Oct 13 05:44:50.719687 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 40162 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:44:50.720948 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:44:50.725335 systemd-logind[1573]: New session 5 of user core. Oct 13 05:44:50.735884 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 05:44:50.806801 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:44:50.807107 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:44:50.827655 sudo[1755]: pam_unix(sudo:session): session closed for user root Oct 13 05:44:50.829156 sshd[1754]: Connection closed by 10.0.0.1 port 40162 Oct 13 05:44:50.829560 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Oct 13 05:44:50.842637 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:40162.service: Deactivated successfully. Oct 13 05:44:50.845222 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:44:50.846260 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:44:50.851264 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:40176.service - OpenSSH per-connection server daemon (10.0.0.1:40176). Oct 13 05:44:50.851976 systemd-logind[1573]: Removed session 5. Oct 13 05:44:50.917149 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 40176 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:44:50.918662 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:44:50.923945 systemd-logind[1573]: New session 6 of user core. Oct 13 05:44:50.937877 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 13 05:44:50.993270 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:44:50.993588 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:44:51.000999 sudo[1766]: pam_unix(sudo:session): session closed for user root Oct 13 05:44:51.007376 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:44:51.007682 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:44:51.018498 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:44:51.073248 augenrules[1788]: No rules Oct 13 05:44:51.074980 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:44:51.075326 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:44:51.076494 sudo[1765]: pam_unix(sudo:session): session closed for user root Oct 13 05:44:51.078348 sshd[1764]: Connection closed by 10.0.0.1 port 40176 Oct 13 05:44:51.078637 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Oct 13 05:44:51.087344 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:40176.service: Deactivated successfully. Oct 13 05:44:51.089251 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:44:51.090167 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:44:51.093106 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:40192.service - OpenSSH per-connection server daemon (10.0.0.1:40192). Oct 13 05:44:51.093857 systemd-logind[1573]: Removed session 6. Oct 13 05:44:51.156091 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 40192 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:44:51.157846 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:44:51.162363 systemd-logind[1573]: New session 7 of user core. 
Oct 13 05:44:51.175879 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 05:44:51.229588 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:44:51.229910 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:44:51.730608 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:44:51.753135 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:44:52.334402 dockerd[1821]: time="2025-10-13T05:44:52.334299564Z" level=info msg="Starting up" Oct 13 05:44:52.335386 dockerd[1821]: time="2025-10-13T05:44:52.335356185Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:44:52.360444 dockerd[1821]: time="2025-10-13T05:44:52.360379263Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:44:52.817691 dockerd[1821]: time="2025-10-13T05:44:52.817615010Z" level=info msg="Loading containers: start." Oct 13 05:44:52.830784 kernel: Initializing XFRM netlink socket Oct 13 05:44:53.407624 systemd-networkd[1491]: docker0: Link UP Oct 13 05:44:53.413427 dockerd[1821]: time="2025-10-13T05:44:53.413379148Z" level=info msg="Loading containers: done." 
Oct 13 05:44:53.433653 dockerd[1821]: time="2025-10-13T05:44:53.433584263Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:44:53.433898 dockerd[1821]: time="2025-10-13T05:44:53.433699278Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:44:53.433898 dockerd[1821]: time="2025-10-13T05:44:53.433863206Z" level=info msg="Initializing buildkit" Oct 13 05:44:53.467778 dockerd[1821]: time="2025-10-13T05:44:53.467707594Z" level=info msg="Completed buildkit initialization" Oct 13 05:44:53.474192 dockerd[1821]: time="2025-10-13T05:44:53.474142169Z" level=info msg="Daemon has completed initialization" Oct 13 05:44:53.474334 dockerd[1821]: time="2025-10-13T05:44:53.474272192Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:44:53.474454 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:44:54.137938 containerd[1592]: time="2025-10-13T05:44:54.137814043Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 13 05:44:55.121923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4148889436.mount: Deactivated successfully. 
Oct 13 05:44:56.325821 containerd[1592]: time="2025-10-13T05:44:56.325721920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:56.326458 containerd[1592]: time="2025-10-13T05:44:56.326406574Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 13 05:44:56.327957 containerd[1592]: time="2025-10-13T05:44:56.327904602Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:56.330569 containerd[1592]: time="2025-10-13T05:44:56.330521669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:56.331642 containerd[1592]: time="2025-10-13T05:44:56.331601434Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.193635215s" Oct 13 05:44:56.331642 containerd[1592]: time="2025-10-13T05:44:56.331640688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 13 05:44:56.332376 containerd[1592]: time="2025-10-13T05:44:56.332344207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 13 05:44:57.476295 containerd[1592]: time="2025-10-13T05:44:57.476218751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:57.476912 containerd[1592]: time="2025-10-13T05:44:57.476861236Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 13 05:44:57.478078 containerd[1592]: time="2025-10-13T05:44:57.478024096Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:57.481337 containerd[1592]: time="2025-10-13T05:44:57.481293436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:57.482204 containerd[1592]: time="2025-10-13T05:44:57.482177704Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.149797711s" Oct 13 05:44:57.482258 containerd[1592]: time="2025-10-13T05:44:57.482206899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 13 05:44:57.482861 containerd[1592]: time="2025-10-13T05:44:57.482832793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 13 05:44:58.616546 containerd[1592]: time="2025-10-13T05:44:58.616476584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:58.617301 containerd[1592]: time="2025-10-13T05:44:58.617229386Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 13 05:44:58.618646 containerd[1592]: time="2025-10-13T05:44:58.618596659Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:58.621720 containerd[1592]: time="2025-10-13T05:44:58.621663951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:44:58.622994 containerd[1592]: time="2025-10-13T05:44:58.622955202Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.140094177s" Oct 13 05:44:58.623041 containerd[1592]: time="2025-10-13T05:44:58.623004604Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 13 05:44:58.623593 containerd[1592]: time="2025-10-13T05:44:58.623563803Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 05:44:59.339339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:44:59.341308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:44:59.748315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 05:44:59.752848 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:45:00.015034 kubelet[2115]: E1013 05:45:00.014831 2115 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:45:00.021267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:45:00.021464 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:45:00.021892 systemd[1]: kubelet.service: Consumed 315ms CPU time, 110.5M memory peak. Oct 13 05:45:00.847044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount922848677.mount: Deactivated successfully. Oct 13 05:45:01.126135 containerd[1592]: time="2025-10-13T05:45:01.125986795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:01.126731 containerd[1592]: time="2025-10-13T05:45:01.126674064Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 13 05:45:01.128284 containerd[1592]: time="2025-10-13T05:45:01.128255068Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:01.130247 containerd[1592]: time="2025-10-13T05:45:01.130210044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:01.130764 containerd[1592]: time="2025-10-13T05:45:01.130707086Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.507111152s" Oct 13 05:45:01.130804 containerd[1592]: time="2025-10-13T05:45:01.130772148Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 13 05:45:01.131359 containerd[1592]: time="2025-10-13T05:45:01.131324003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 05:45:01.714593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607890248.mount: Deactivated successfully. Oct 13 05:45:03.324346 containerd[1592]: time="2025-10-13T05:45:03.324266579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:03.325138 containerd[1592]: time="2025-10-13T05:45:03.325058214Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 13 05:45:03.326300 containerd[1592]: time="2025-10-13T05:45:03.326248566Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:03.329735 containerd[1592]: time="2025-10-13T05:45:03.329680320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:03.330879 containerd[1592]: time="2025-10-13T05:45:03.330834795Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.199475816s" Oct 13 05:45:03.330964 containerd[1592]: time="2025-10-13T05:45:03.330878927Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 13 05:45:03.331721 containerd[1592]: time="2025-10-13T05:45:03.331501475Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 13 05:45:03.993483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161394006.mount: Deactivated successfully. Oct 13 05:45:03.999852 containerd[1592]: time="2025-10-13T05:45:03.999735283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:04.001210 containerd[1592]: time="2025-10-13T05:45:04.001009352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 13 05:45:04.002805 containerd[1592]: time="2025-10-13T05:45:04.002727734Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:04.005037 containerd[1592]: time="2025-10-13T05:45:04.004990056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:04.005532 containerd[1592]: time="2025-10-13T05:45:04.005489572Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag 
\"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 673.953002ms" Oct 13 05:45:04.005532 containerd[1592]: time="2025-10-13T05:45:04.005524758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 13 05:45:04.006088 containerd[1592]: time="2025-10-13T05:45:04.006069149Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 13 05:45:06.734137 containerd[1592]: time="2025-10-13T05:45:06.734022541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:06.734814 containerd[1592]: time="2025-10-13T05:45:06.734721652Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 13 05:45:06.735953 containerd[1592]: time="2025-10-13T05:45:06.735919438Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:06.738650 containerd[1592]: time="2025-10-13T05:45:06.738599362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:06.739848 containerd[1592]: time="2025-10-13T05:45:06.739797329Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.733702421s" Oct 13 05:45:06.739890 containerd[1592]: time="2025-10-13T05:45:06.739850228Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 13 05:45:09.769281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:45:09.769445 systemd[1]: kubelet.service: Consumed 315ms CPU time, 110.5M memory peak. Oct 13 05:45:09.771659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:45:09.801441 systemd[1]: Reload requested from client PID 2261 ('systemctl') (unit session-7.scope)... Oct 13 05:45:09.801472 systemd[1]: Reloading... Oct 13 05:45:09.915794 zram_generator::config[2307]: No configuration found. Oct 13 05:45:10.248497 systemd[1]: Reloading finished in 446 ms. Oct 13 05:45:10.313388 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:45:10.313488 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:45:10.313795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:45:10.313839 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.2M memory peak. Oct 13 05:45:10.315393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:45:10.503891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:45:10.514056 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:45:10.557547 kubelet[2352]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:45:10.557547 kubelet[2352]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:45:10.558045 kubelet[2352]: I1013 05:45:10.557593 2352 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:45:10.925383 kubelet[2352]: I1013 05:45:10.925340 2352 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:45:10.925383 kubelet[2352]: I1013 05:45:10.925367 2352 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:45:10.928396 kubelet[2352]: I1013 05:45:10.928366 2352 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:45:10.928396 kubelet[2352]: I1013 05:45:10.928387 2352 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:45:10.928656 kubelet[2352]: I1013 05:45:10.928629 2352 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:45:11.209479 kubelet[2352]: E1013 05:45:11.209358 2352 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:45:11.209593 kubelet[2352]: I1013 05:45:11.209494 2352 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:45:11.215768 kubelet[2352]: I1013 05:45:11.213362 2352 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:45:11.218923 kubelet[2352]: I1013 05:45:11.218898 2352 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:45:11.219907 kubelet[2352]: I1013 05:45:11.219870 2352 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:45:11.220046 kubelet[2352]: I1013 05:45:11.219893 2352 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:45:11.220150 kubelet[2352]: I1013 05:45:11.220056 2352 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:45:11.220150 
kubelet[2352]: I1013 05:45:11.220065 2352 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:45:11.220194 kubelet[2352]: I1013 05:45:11.220172 2352 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:45:11.223515 kubelet[2352]: I1013 05:45:11.223480 2352 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:45:11.223707 kubelet[2352]: I1013 05:45:11.223676 2352 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:45:11.223707 kubelet[2352]: I1013 05:45:11.223701 2352 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:45:11.223803 kubelet[2352]: I1013 05:45:11.223730 2352 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:45:11.223803 kubelet[2352]: I1013 05:45:11.223777 2352 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:45:11.224592 kubelet[2352]: E1013 05:45:11.224534 2352 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:45:11.224659 kubelet[2352]: E1013 05:45:11.224634 2352 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:45:11.227454 kubelet[2352]: I1013 05:45:11.227433 2352 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:45:11.232757 kubelet[2352]: I1013 05:45:11.231640 2352 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:45:11.232757 kubelet[2352]: I1013 05:45:11.231684 2352 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:45:11.232757 kubelet[2352]: W1013 05:45:11.231804 2352 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 05:45:11.236287 kubelet[2352]: I1013 05:45:11.236264 2352 server.go:1262] "Started kubelet" Oct 13 05:45:11.236798 kubelet[2352]: I1013 05:45:11.236739 2352 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:45:11.236868 kubelet[2352]: I1013 05:45:11.236817 2352 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:45:11.237159 kubelet[2352]: I1013 05:45:11.237135 2352 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:45:11.237229 kubelet[2352]: I1013 05:45:11.237201 2352 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:45:11.237482 kubelet[2352]: I1013 05:45:11.237459 2352 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:45:11.238049 kubelet[2352]: I1013 05:45:11.238031 2352 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:45:11.239396 kubelet[2352]: I1013 05:45:11.239379 2352 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:45:11.239767 kubelet[2352]: E1013 05:45:11.239725 2352 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:45:11.240270 kubelet[2352]: I1013 05:45:11.240247 2352 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:45:11.240788 
kubelet[2352]: I1013 05:45:11.240768 2352 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:45:11.240941 kubelet[2352]: I1013 05:45:11.240924 2352 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:45:11.241100 kubelet[2352]: E1013 05:45:11.241065 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="200ms" Oct 13 05:45:11.241500 kubelet[2352]: E1013 05:45:11.241472 2352 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:45:11.242038 kubelet[2352]: I1013 05:45:11.242004 2352 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:45:11.242136 kubelet[2352]: I1013 05:45:11.242110 2352 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:45:11.243292 kubelet[2352]: E1013 05:45:11.241625 2352 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df6babb2ac2db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:45:11.236223707 +0000 UTC 
m=+0.718631996,LastTimestamp:2025-10-13 05:45:11.236223707 +0000 UTC m=+0.718631996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:45:11.243467 kubelet[2352]: I1013 05:45:11.243446 2352 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:45:11.244484 kubelet[2352]: E1013 05:45:11.244456 2352 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:45:11.255502 kubelet[2352]: I1013 05:45:11.255473 2352 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:45:11.255502 kubelet[2352]: I1013 05:45:11.255495 2352 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:45:11.255502 kubelet[2352]: I1013 05:45:11.255513 2352 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:45:11.258763 kubelet[2352]: I1013 05:45:11.258726 2352 policy_none.go:49] "None policy: Start" Oct 13 05:45:11.258763 kubelet[2352]: I1013 05:45:11.258760 2352 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:45:11.258851 kubelet[2352]: I1013 05:45:11.258774 2352 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:45:11.262806 kubelet[2352]: I1013 05:45:11.262783 2352 policy_none.go:47] "Start" Oct 13 05:45:11.265393 kubelet[2352]: I1013 05:45:11.265180 2352 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:45:11.266832 kubelet[2352]: I1013 05:45:11.266806 2352 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:45:11.266893 kubelet[2352]: I1013 05:45:11.266845 2352 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:45:11.266925 kubelet[2352]: I1013 05:45:11.266904 2352 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:45:11.266961 kubelet[2352]: E1013 05:45:11.266940 2352 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:45:11.267685 kubelet[2352]: E1013 05:45:11.267648 2352 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:45:11.268413 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:45:11.278045 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:45:11.281678 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 13 05:45:11.300643 kubelet[2352]: E1013 05:45:11.300587 2352 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:45:11.300883 kubelet[2352]: I1013 05:45:11.300864 2352 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:45:11.300937 kubelet[2352]: I1013 05:45:11.300882 2352 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:45:11.301472 kubelet[2352]: I1013 05:45:11.301401 2352 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:45:11.302418 kubelet[2352]: E1013 05:45:11.302386 2352 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:45:11.302504 kubelet[2352]: E1013 05:45:11.302444 2352 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:45:11.382024 systemd[1]: Created slice kubepods-burstable-pode5bc73bf287cfcc629b403efbf5492a8.slice - libcontainer container kubepods-burstable-pode5bc73bf287cfcc629b403efbf5492a8.slice. Oct 13 05:45:11.394634 kubelet[2352]: E1013 05:45:11.394582 2352 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:45:11.396703 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 13 05:45:11.402626 kubelet[2352]: I1013 05:45:11.402594 2352 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:45:11.403046 kubelet[2352]: E1013 05:45:11.403020 2352 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Oct 13 05:45:11.410031 kubelet[2352]: E1013 05:45:11.410004 2352 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:45:11.412880 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 13 05:45:11.414726 kubelet[2352]: E1013 05:45:11.414698 2352 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:45:11.442279 kubelet[2352]: E1013 05:45:11.442241 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="400ms" Oct 13 05:45:11.542702 kubelet[2352]: I1013 05:45:11.542555 2352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5bc73bf287cfcc629b403efbf5492a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e5bc73bf287cfcc629b403efbf5492a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:11.542702 kubelet[2352]: I1013 05:45:11.542595 2352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:11.542702 kubelet[2352]: I1013 05:45:11.542616 2352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:11.542702 kubelet[2352]: I1013 05:45:11.542633 2352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:11.542702 kubelet[2352]: I1013 05:45:11.542652 2352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:11.544002 kubelet[2352]: I1013 05:45:11.542671 2352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:11.544002 kubelet[2352]: I1013 05:45:11.542702 2352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:11.544002 kubelet[2352]: I1013 05:45:11.542784 2352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5bc73bf287cfcc629b403efbf5492a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5bc73bf287cfcc629b403efbf5492a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:11.544002 kubelet[2352]: I1013 05:45:11.542818 2352 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5bc73bf287cfcc629b403efbf5492a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5bc73bf287cfcc629b403efbf5492a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:11.604834 kubelet[2352]: I1013 05:45:11.604798 2352 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:45:11.605199 kubelet[2352]: E1013 05:45:11.605117 2352 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Oct 13 05:45:11.699644 containerd[1592]: time="2025-10-13T05:45:11.699590580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e5bc73bf287cfcc629b403efbf5492a8,Namespace:kube-system,Attempt:0,}" Oct 13 05:45:11.713952 containerd[1592]: time="2025-10-13T05:45:11.713883729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 13 05:45:11.718056 containerd[1592]: time="2025-10-13T05:45:11.718006790Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 13 05:45:11.843657 kubelet[2352]: E1013 05:45:11.843624 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="800ms" Oct 13 05:45:12.007074 kubelet[2352]: I1013 05:45:12.007021 2352 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:45:12.007543 kubelet[2352]: E1013 05:45:12.007489 2352 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Oct 13 05:45:12.075783 kubelet[2352]: E1013 05:45:12.075707 2352 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:45:12.229457 kubelet[2352]: E1013 05:45:12.229322 2352 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:45:12.269977 kubelet[2352]: E1013 05:45:12.269914 2352 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Oct 13 05:45:12.409180 kubelet[2352]: E1013 05:45:12.409120 2352 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:45:12.644924 kubelet[2352]: E1013 05:45:12.644866 2352 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="1.6s" Oct 13 05:45:12.809599 kubelet[2352]: I1013 05:45:12.809561 2352 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:45:12.809940 kubelet[2352]: E1013 05:45:12.809895 2352 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Oct 13 05:45:13.000833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832618414.mount: Deactivated successfully. 
Oct 13 05:45:13.007121 containerd[1592]: time="2025-10-13T05:45:13.007066246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:45:13.007983 containerd[1592]: time="2025-10-13T05:45:13.007905119Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 13 05:45:13.011397 containerd[1592]: time="2025-10-13T05:45:13.011334870Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:45:13.014957 containerd[1592]: time="2025-10-13T05:45:13.014912117Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:45:13.017238 containerd[1592]: time="2025-10-13T05:45:13.017174569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:45:13.018311 containerd[1592]: time="2025-10-13T05:45:13.018282366Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:45:13.019449 containerd[1592]: time="2025-10-13T05:45:13.019408217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:45:13.020106 containerd[1592]: time="2025-10-13T05:45:13.020064728Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.317623324s" Oct 13 05:45:13.020389 containerd[1592]: time="2025-10-13T05:45:13.020352859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:45:13.023113 containerd[1592]: time="2025-10-13T05:45:13.023068350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.306753742s" Oct 13 05:45:13.025075 containerd[1592]: time="2025-10-13T05:45:13.025045587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.304505898s" Oct 13 05:45:13.054667 containerd[1592]: time="2025-10-13T05:45:13.054581918Z" level=info msg="connecting to shim 9496af2c2b14e127dabe5ce7b4f1c13cced009aedda1bf2759eb3f04dc29a22d" address="unix:///run/containerd/s/368558ead1c5d4a7fbe7a3ba0f287c274c2a3dc1151eac28921c1144ceec1517" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:13.069680 containerd[1592]: time="2025-10-13T05:45:13.069022935Z" level=info msg="connecting to shim ca8a366b4d75e3f5e5e15a203069d0ed2f5471570d088808d608dece30fb5f9c" address="unix:///run/containerd/s/42d588628084c6c73ddefb8ff7996d770b9a38efebf882ae5623f25da6074916" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:13.071384 containerd[1592]: time="2025-10-13T05:45:13.071337334Z" 
level=info msg="connecting to shim 45dd12a2e14c9d244d66edcfea107275c8f08575366b7558d53f1ddda526c4f9" address="unix:///run/containerd/s/5222e8c395bbc1ff2646e0cfdd3703d71d9cee49ba8a923cef545bfac17483af" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:13.101002 systemd[1]: Started cri-containerd-9496af2c2b14e127dabe5ce7b4f1c13cced009aedda1bf2759eb3f04dc29a22d.scope - libcontainer container 9496af2c2b14e127dabe5ce7b4f1c13cced009aedda1bf2759eb3f04dc29a22d. Oct 13 05:45:13.105411 systemd[1]: Started cri-containerd-45dd12a2e14c9d244d66edcfea107275c8f08575366b7558d53f1ddda526c4f9.scope - libcontainer container 45dd12a2e14c9d244d66edcfea107275c8f08575366b7558d53f1ddda526c4f9. Oct 13 05:45:13.110707 systemd[1]: Started cri-containerd-ca8a366b4d75e3f5e5e15a203069d0ed2f5471570d088808d608dece30fb5f9c.scope - libcontainer container ca8a366b4d75e3f5e5e15a203069d0ed2f5471570d088808d608dece30fb5f9c. Oct 13 05:45:13.179550 containerd[1592]: time="2025-10-13T05:45:13.179001124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e5bc73bf287cfcc629b403efbf5492a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca8a366b4d75e3f5e5e15a203069d0ed2f5471570d088808d608dece30fb5f9c\"" Oct 13 05:45:13.189607 containerd[1592]: time="2025-10-13T05:45:13.189549192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9496af2c2b14e127dabe5ce7b4f1c13cced009aedda1bf2759eb3f04dc29a22d\"" Oct 13 05:45:13.191496 containerd[1592]: time="2025-10-13T05:45:13.191430619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"45dd12a2e14c9d244d66edcfea107275c8f08575366b7558d53f1ddda526c4f9\"" Oct 13 05:45:13.192102 containerd[1592]: time="2025-10-13T05:45:13.192062364Z" level=info msg="CreateContainer within 
sandbox \"ca8a366b4d75e3f5e5e15a203069d0ed2f5471570d088808d608dece30fb5f9c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:45:13.196157 containerd[1592]: time="2025-10-13T05:45:13.196120002Z" level=info msg="CreateContainer within sandbox \"9496af2c2b14e127dabe5ce7b4f1c13cced009aedda1bf2759eb3f04dc29a22d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:45:13.206607 containerd[1592]: time="2025-10-13T05:45:13.206569725Z" level=info msg="Container dfaaed6d8eb2b8863ab97cfdd85d1cb3f50043b9ca0dd3f9bb36349da2908c0d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:13.211343 containerd[1592]: time="2025-10-13T05:45:13.211310183Z" level=info msg="CreateContainer within sandbox \"45dd12a2e14c9d244d66edcfea107275c8f08575366b7558d53f1ddda526c4f9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:45:13.215121 containerd[1592]: time="2025-10-13T05:45:13.215091443Z" level=info msg="Container b40b2b59858f90b569057abc879df1e4b97e16ce140eeaef7776b875ebcbefe4: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:13.220900 containerd[1592]: time="2025-10-13T05:45:13.220868475Z" level=info msg="CreateContainer within sandbox \"ca8a366b4d75e3f5e5e15a203069d0ed2f5471570d088808d608dece30fb5f9c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dfaaed6d8eb2b8863ab97cfdd85d1cb3f50043b9ca0dd3f9bb36349da2908c0d\"" Oct 13 05:45:13.221655 containerd[1592]: time="2025-10-13T05:45:13.221590820Z" level=info msg="StartContainer for \"dfaaed6d8eb2b8863ab97cfdd85d1cb3f50043b9ca0dd3f9bb36349da2908c0d\"" Oct 13 05:45:13.223169 containerd[1592]: time="2025-10-13T05:45:13.223109197Z" level=info msg="connecting to shim dfaaed6d8eb2b8863ab97cfdd85d1cb3f50043b9ca0dd3f9bb36349da2908c0d" address="unix:///run/containerd/s/42d588628084c6c73ddefb8ff7996d770b9a38efebf882ae5623f25da6074916" protocol=ttrpc version=3 Oct 13 05:45:13.225383 containerd[1592]: 
time="2025-10-13T05:45:13.225346191Z" level=info msg="CreateContainer within sandbox \"9496af2c2b14e127dabe5ce7b4f1c13cced009aedda1bf2759eb3f04dc29a22d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b40b2b59858f90b569057abc879df1e4b97e16ce140eeaef7776b875ebcbefe4\"" Oct 13 05:45:13.225868 containerd[1592]: time="2025-10-13T05:45:13.225838614Z" level=info msg="StartContainer for \"b40b2b59858f90b569057abc879df1e4b97e16ce140eeaef7776b875ebcbefe4\"" Oct 13 05:45:13.227016 containerd[1592]: time="2025-10-13T05:45:13.226993590Z" level=info msg="connecting to shim b40b2b59858f90b569057abc879df1e4b97e16ce140eeaef7776b875ebcbefe4" address="unix:///run/containerd/s/368558ead1c5d4a7fbe7a3ba0f287c274c2a3dc1151eac28921c1144ceec1517" protocol=ttrpc version=3 Oct 13 05:45:13.228613 containerd[1592]: time="2025-10-13T05:45:13.228049480Z" level=info msg="Container f88c1714f999ae3552c4367803daecc40d98f59716422cfaf057dd19000d85fb: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:13.228677 kubelet[2352]: E1013 05:45:13.228246 2352 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:45:13.239013 containerd[1592]: time="2025-10-13T05:45:13.238354061Z" level=info msg="CreateContainer within sandbox \"45dd12a2e14c9d244d66edcfea107275c8f08575366b7558d53f1ddda526c4f9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f88c1714f999ae3552c4367803daecc40d98f59716422cfaf057dd19000d85fb\"" Oct 13 05:45:13.239158 containerd[1592]: time="2025-10-13T05:45:13.239131819Z" level=info msg="StartContainer for \"f88c1714f999ae3552c4367803daecc40d98f59716422cfaf057dd19000d85fb\"" Oct 13 05:45:13.241625 containerd[1592]: 
time="2025-10-13T05:45:13.241597262Z" level=info msg="connecting to shim f88c1714f999ae3552c4367803daecc40d98f59716422cfaf057dd19000d85fb" address="unix:///run/containerd/s/5222e8c395bbc1ff2646e0cfdd3703d71d9cee49ba8a923cef545bfac17483af" protocol=ttrpc version=3 Oct 13 05:45:13.246934 systemd[1]: Started cri-containerd-dfaaed6d8eb2b8863ab97cfdd85d1cb3f50043b9ca0dd3f9bb36349da2908c0d.scope - libcontainer container dfaaed6d8eb2b8863ab97cfdd85d1cb3f50043b9ca0dd3f9bb36349da2908c0d. Oct 13 05:45:13.250846 systemd[1]: Started cri-containerd-b40b2b59858f90b569057abc879df1e4b97e16ce140eeaef7776b875ebcbefe4.scope - libcontainer container b40b2b59858f90b569057abc879df1e4b97e16ce140eeaef7776b875ebcbefe4. Oct 13 05:45:13.275960 systemd[1]: Started cri-containerd-f88c1714f999ae3552c4367803daecc40d98f59716422cfaf057dd19000d85fb.scope - libcontainer container f88c1714f999ae3552c4367803daecc40d98f59716422cfaf057dd19000d85fb. Oct 13 05:45:13.355785 containerd[1592]: time="2025-10-13T05:45:13.354417620Z" level=info msg="StartContainer for \"dfaaed6d8eb2b8863ab97cfdd85d1cb3f50043b9ca0dd3f9bb36349da2908c0d\" returns successfully" Oct 13 05:45:13.361530 containerd[1592]: time="2025-10-13T05:45:13.361473189Z" level=info msg="StartContainer for \"b40b2b59858f90b569057abc879df1e4b97e16ce140eeaef7776b875ebcbefe4\" returns successfully" Oct 13 05:45:13.404616 containerd[1592]: time="2025-10-13T05:45:13.404567611Z" level=info msg="StartContainer for \"f88c1714f999ae3552c4367803daecc40d98f59716422cfaf057dd19000d85fb\" returns successfully" Oct 13 05:45:14.299442 kubelet[2352]: E1013 05:45:14.299253 2352 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:45:14.304353 kubelet[2352]: E1013 05:45:14.304314 2352 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:45:14.307061 
kubelet[2352]: E1013 05:45:14.307046 2352 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:45:14.411614 kubelet[2352]: I1013 05:45:14.411560 2352 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:45:14.627846 kubelet[2352]: E1013 05:45:14.627798 2352 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:45:14.891907 kubelet[2352]: I1013 05:45:14.889827 2352 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:45:14.891907 kubelet[2352]: E1013 05:45:14.889899 2352 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 05:45:14.941486 kubelet[2352]: I1013 05:45:14.941409 2352 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:14.949734 kubelet[2352]: E1013 05:45:14.949663 2352 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:14.949734 kubelet[2352]: I1013 05:45:14.949702 2352 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:14.952115 kubelet[2352]: E1013 05:45:14.951859 2352 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:14.952115 kubelet[2352]: I1013 05:45:14.951895 2352 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:14.954422 kubelet[2352]: E1013 05:45:14.954037 2352 
kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:15.226282 kubelet[2352]: I1013 05:45:15.226139 2352 apiserver.go:52] "Watching apiserver" Oct 13 05:45:15.241568 kubelet[2352]: I1013 05:45:15.241510 2352 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:45:15.308385 kubelet[2352]: I1013 05:45:15.308347 2352 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:15.309507 kubelet[2352]: I1013 05:45:15.308476 2352 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:15.309507 kubelet[2352]: I1013 05:45:15.308556 2352 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:15.310885 kubelet[2352]: E1013 05:45:15.310855 2352 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:15.310948 kubelet[2352]: E1013 05:45:15.310860 2352 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:15.311176 kubelet[2352]: E1013 05:45:15.311147 2352 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:16.309706 kubelet[2352]: I1013 05:45:16.309672 2352 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:16.310344 kubelet[2352]: I1013 
05:45:16.310047 2352 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:16.964944 systemd[1]: Reload requested from client PID 2641 ('systemctl') (unit session-7.scope)... Oct 13 05:45:16.964959 systemd[1]: Reloading... Oct 13 05:45:17.045142 zram_generator::config[2685]: No configuration found. Oct 13 05:45:17.270249 systemd[1]: Reloading finished in 304 ms. Oct 13 05:45:17.303599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:45:17.326837 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:45:17.327124 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:45:17.327175 systemd[1]: kubelet.service: Consumed 939ms CPU time, 125.6M memory peak. Oct 13 05:45:17.329106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:45:17.563445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:45:17.567869 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:45:17.612753 kubelet[2730]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:45:17.612753 kubelet[2730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:45:17.613159 kubelet[2730]: I1013 05:45:17.612796 2730 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:45:17.619105 kubelet[2730]: I1013 05:45:17.619068 2730 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:45:17.619105 kubelet[2730]: I1013 05:45:17.619090 2730 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:45:17.619175 kubelet[2730]: I1013 05:45:17.619121 2730 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:45:17.619175 kubelet[2730]: I1013 05:45:17.619129 2730 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:45:17.619357 kubelet[2730]: I1013 05:45:17.619328 2730 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:45:17.621572 kubelet[2730]: I1013 05:45:17.621065 2730 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:45:17.623806 kubelet[2730]: I1013 05:45:17.623776 2730 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:45:17.626990 kubelet[2730]: I1013 05:45:17.626971 2730 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:45:17.631863 kubelet[2730]: I1013 05:45:17.631823 2730 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:45:17.632076 kubelet[2730]: I1013 05:45:17.632034 2730 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:45:17.632224 kubelet[2730]: I1013 05:45:17.632063 2730 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:45:17.632224 kubelet[2730]: I1013 05:45:17.632225 2730 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:45:17.632338 
kubelet[2730]: I1013 05:45:17.632234 2730 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:45:17.632338 kubelet[2730]: I1013 05:45:17.632256 2730 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:45:17.633052 kubelet[2730]: I1013 05:45:17.633022 2730 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:45:17.633207 kubelet[2730]: I1013 05:45:17.633185 2730 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:45:17.633236 kubelet[2730]: I1013 05:45:17.633209 2730 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:45:17.633264 kubelet[2730]: I1013 05:45:17.633255 2730 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:45:17.633298 kubelet[2730]: I1013 05:45:17.633277 2730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:45:17.634584 kubelet[2730]: I1013 05:45:17.634414 2730 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:45:17.668446 kubelet[2730]: I1013 05:45:17.668302 2730 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:45:17.668446 kubelet[2730]: I1013 05:45:17.668356 2730 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:45:17.674044 kubelet[2730]: I1013 05:45:17.672562 2730 server.go:1262] "Started kubelet" Oct 13 05:45:17.674044 kubelet[2730]: I1013 05:45:17.673968 2730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:45:17.675172 kubelet[2730]: I1013 05:45:17.675131 2730 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:45:17.675339 kubelet[2730]: I1013 05:45:17.675305 2730 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Oct 13 05:45:17.675396 kubelet[2730]: I1013 05:45:17.675375 2730 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:45:17.677699 kubelet[2730]: I1013 05:45:17.677680 2730 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:45:17.679740 kubelet[2730]: I1013 05:45:17.679710 2730 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:45:17.682461 kubelet[2730]: I1013 05:45:17.681984 2730 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:45:17.682461 kubelet[2730]: E1013 05:45:17.682239 2730 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:45:17.682859 kubelet[2730]: I1013 05:45:17.682831 2730 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:45:17.684426 kubelet[2730]: I1013 05:45:17.683907 2730 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:45:17.684426 kubelet[2730]: I1013 05:45:17.684168 2730 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:45:17.686537 kubelet[2730]: I1013 05:45:17.686515 2730 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:45:17.686767 kubelet[2730]: I1013 05:45:17.686724 2730 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:45:17.687767 kubelet[2730]: E1013 05:45:17.687720 2730 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:45:17.690918 kubelet[2730]: I1013 05:45:17.690884 2730 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:45:17.697824 kubelet[2730]: I1013 05:45:17.697780 2730 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:45:17.699079 kubelet[2730]: I1013 05:45:17.699048 2730 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 13 05:45:17.699079 kubelet[2730]: I1013 05:45:17.699074 2730 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:45:17.699147 kubelet[2730]: I1013 05:45:17.699103 2730 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:45:17.699178 kubelet[2730]: E1013 05:45:17.699143 2730 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:45:17.739053 kubelet[2730]: I1013 05:45:17.739018 2730 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:45:17.739053 kubelet[2730]: I1013 05:45:17.739036 2730 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:45:17.739053 kubelet[2730]: I1013 05:45:17.739056 2730 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:45:17.739253 kubelet[2730]: I1013 05:45:17.739187 2730 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:45:17.739253 kubelet[2730]: I1013 05:45:17.739197 2730 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:45:17.739253 kubelet[2730]: I1013 05:45:17.739217 2730 policy_none.go:49] "None policy: Start" Oct 13 05:45:17.739253 kubelet[2730]: I1013 05:45:17.739230 2730 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:45:17.739253 kubelet[2730]: I1013 05:45:17.739240 2730 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state 
checkpoint" Oct 13 05:45:17.739367 kubelet[2730]: I1013 05:45:17.739335 2730 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 13 05:45:17.739367 kubelet[2730]: I1013 05:45:17.739344 2730 policy_none.go:47] "Start" Oct 13 05:45:17.744006 kubelet[2730]: E1013 05:45:17.743972 2730 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:45:17.744402 kubelet[2730]: I1013 05:45:17.744183 2730 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:45:17.744402 kubelet[2730]: I1013 05:45:17.744199 2730 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:45:17.744488 kubelet[2730]: I1013 05:45:17.744415 2730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:45:17.745248 kubelet[2730]: E1013 05:45:17.745213 2730 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 05:45:17.800138 kubelet[2730]: I1013 05:45:17.800102 2730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:17.800138 kubelet[2730]: I1013 05:45:17.800148 2730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:17.801158 kubelet[2730]: I1013 05:45:17.800427 2730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:17.806519 kubelet[2730]: E1013 05:45:17.806477 2730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:17.806619 kubelet[2730]: E1013 05:45:17.806541 2730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:17.846146 kubelet[2730]: I1013 05:45:17.846116 2730 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:45:17.852381 kubelet[2730]: I1013 05:45:17.852352 2730 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:45:17.852498 kubelet[2730]: I1013 05:45:17.852471 2730 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:45:17.985530 kubelet[2730]: I1013 05:45:17.985486 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:17.985530 kubelet[2730]: I1013 05:45:17.985523 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/e5bc73bf287cfcc629b403efbf5492a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5bc73bf287cfcc629b403efbf5492a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:17.985530 kubelet[2730]: I1013 05:45:17.985540 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5bc73bf287cfcc629b403efbf5492a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e5bc73bf287cfcc629b403efbf5492a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:17.985718 kubelet[2730]: I1013 05:45:17.985556 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5bc73bf287cfcc629b403efbf5492a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e5bc73bf287cfcc629b403efbf5492a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:17.985718 kubelet[2730]: I1013 05:45:17.985572 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:17.985718 kubelet[2730]: I1013 05:45:17.985586 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:17.985821 kubelet[2730]: I1013 05:45:17.985696 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:17.985821 kubelet[2730]: I1013 05:45:17.985766 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:17.985821 kubelet[2730]: I1013 05:45:17.985790 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:45:18.635357 kubelet[2730]: I1013 05:45:18.635296 2730 apiserver.go:52] "Watching apiserver" Oct 13 05:45:18.684409 kubelet[2730]: I1013 05:45:18.684361 2730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:45:18.715598 kubelet[2730]: I1013 05:45:18.715323 2730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:18.716840 kubelet[2730]: I1013 05:45:18.716794 2730 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:18.726774 kubelet[2730]: E1013 05:45:18.725666 2730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:45:18.726774 kubelet[2730]: E1013 05:45:18.725987 2730 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" 
already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:45:18.749093 kubelet[2730]: I1013 05:45:18.749019 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.748991432 podStartE2EDuration="2.748991432s" podCreationTimestamp="2025-10-13 05:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:45:18.737975749 +0000 UTC m=+1.165584110" watchObservedRunningTime="2025-10-13 05:45:18.748991432 +0000 UTC m=+1.176599793" Oct 13 05:45:18.749264 kubelet[2730]: I1013 05:45:18.749149 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7491446370000001 podStartE2EDuration="1.749144637s" podCreationTimestamp="2025-10-13 05:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:45:18.749134137 +0000 UTC m=+1.176742488" watchObservedRunningTime="2025-10-13 05:45:18.749144637 +0000 UTC m=+1.176752998" Oct 13 05:45:18.757104 kubelet[2730]: I1013 05:45:18.757038 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.757006967 podStartE2EDuration="2.757006967s" podCreationTimestamp="2025-10-13 05:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:45:18.756743811 +0000 UTC m=+1.184352172" watchObservedRunningTime="2025-10-13 05:45:18.757006967 +0000 UTC m=+1.184615328" Oct 13 05:45:24.112196 kubelet[2730]: I1013 05:45:24.112147 2730 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:45:24.113283 kubelet[2730]: I1013 05:45:24.112607 2730 
kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:45:24.113539 containerd[1592]: time="2025-10-13T05:45:24.112442431Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:45:25.083712 systemd[1]: Created slice kubepods-besteffort-pod72f4756f_7f2a_4e5e_881f_56e212b994ab.slice - libcontainer container kubepods-besteffort-pod72f4756f_7f2a_4e5e_881f_56e212b994ab.slice. Oct 13 05:45:25.126277 kubelet[2730]: I1013 05:45:25.126227 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72f4756f-7f2a-4e5e-881f-56e212b994ab-kube-proxy\") pod \"kube-proxy-c4j7q\" (UID: \"72f4756f-7f2a-4e5e-881f-56e212b994ab\") " pod="kube-system/kube-proxy-c4j7q" Oct 13 05:45:25.126277 kubelet[2730]: I1013 05:45:25.126274 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72f4756f-7f2a-4e5e-881f-56e212b994ab-lib-modules\") pod \"kube-proxy-c4j7q\" (UID: \"72f4756f-7f2a-4e5e-881f-56e212b994ab\") " pod="kube-system/kube-proxy-c4j7q" Oct 13 05:45:25.126277 kubelet[2730]: I1013 05:45:25.126294 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72f4756f-7f2a-4e5e-881f-56e212b994ab-xtables-lock\") pod \"kube-proxy-c4j7q\" (UID: \"72f4756f-7f2a-4e5e-881f-56e212b994ab\") " pod="kube-system/kube-proxy-c4j7q" Oct 13 05:45:25.126743 kubelet[2730]: I1013 05:45:25.126310 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx755\" (UniqueName: \"kubernetes.io/projected/72f4756f-7f2a-4e5e-881f-56e212b994ab-kube-api-access-lx755\") pod \"kube-proxy-c4j7q\" (UID: \"72f4756f-7f2a-4e5e-881f-56e212b994ab\") " 
pod="kube-system/kube-proxy-c4j7q" Oct 13 05:45:25.235675 systemd[1]: Created slice kubepods-besteffort-podbab8350f_5321_44e7_8406_afc51d3fffef.slice - libcontainer container kubepods-besteffort-podbab8350f_5321_44e7_8406_afc51d3fffef.slice. Oct 13 05:45:25.327324 kubelet[2730]: I1013 05:45:25.327277 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc886\" (UniqueName: \"kubernetes.io/projected/bab8350f-5321-44e7-8406-afc51d3fffef-kube-api-access-lc886\") pod \"tigera-operator-db78d5bd4-hrxth\" (UID: \"bab8350f-5321-44e7-8406-afc51d3fffef\") " pod="tigera-operator/tigera-operator-db78d5bd4-hrxth" Oct 13 05:45:25.327324 kubelet[2730]: I1013 05:45:25.327317 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bab8350f-5321-44e7-8406-afc51d3fffef-var-lib-calico\") pod \"tigera-operator-db78d5bd4-hrxth\" (UID: \"bab8350f-5321-44e7-8406-afc51d3fffef\") " pod="tigera-operator/tigera-operator-db78d5bd4-hrxth" Oct 13 05:45:25.397629 containerd[1592]: time="2025-10-13T05:45:25.397556025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4j7q,Uid:72f4756f-7f2a-4e5e-881f-56e212b994ab,Namespace:kube-system,Attempt:0,}" Oct 13 05:45:25.418807 containerd[1592]: time="2025-10-13T05:45:25.418374621Z" level=info msg="connecting to shim a1a482c6770f72291ce51833e36e1d70940f1324fd19df1ff83966881a2906c3" address="unix:///run/containerd/s/5006f7aa252e4274ec130fde9eb3231993814c90590122e3d4e44dbfb3016cf9" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:25.450883 systemd[1]: Started cri-containerd-a1a482c6770f72291ce51833e36e1d70940f1324fd19df1ff83966881a2906c3.scope - libcontainer container a1a482c6770f72291ce51833e36e1d70940f1324fd19df1ff83966881a2906c3. 
Oct 13 05:45:25.476071 containerd[1592]: time="2025-10-13T05:45:25.476018265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c4j7q,Uid:72f4756f-7f2a-4e5e-881f-56e212b994ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1a482c6770f72291ce51833e36e1d70940f1324fd19df1ff83966881a2906c3\"" Oct 13 05:45:25.482424 containerd[1592]: time="2025-10-13T05:45:25.482387563Z" level=info msg="CreateContainer within sandbox \"a1a482c6770f72291ce51833e36e1d70940f1324fd19df1ff83966881a2906c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:45:25.492690 containerd[1592]: time="2025-10-13T05:45:25.492657951Z" level=info msg="Container 77f9be3221a35818cee1fd1cde82d39de769feb0fec62c98a392298237dcc173: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:25.500377 containerd[1592]: time="2025-10-13T05:45:25.500345754Z" level=info msg="CreateContainer within sandbox \"a1a482c6770f72291ce51833e36e1d70940f1324fd19df1ff83966881a2906c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"77f9be3221a35818cee1fd1cde82d39de769feb0fec62c98a392298237dcc173\"" Oct 13 05:45:25.500864 containerd[1592]: time="2025-10-13T05:45:25.500842952Z" level=info msg="StartContainer for \"77f9be3221a35818cee1fd1cde82d39de769feb0fec62c98a392298237dcc173\"" Oct 13 05:45:25.502128 containerd[1592]: time="2025-10-13T05:45:25.502108136Z" level=info msg="connecting to shim 77f9be3221a35818cee1fd1cde82d39de769feb0fec62c98a392298237dcc173" address="unix:///run/containerd/s/5006f7aa252e4274ec130fde9eb3231993814c90590122e3d4e44dbfb3016cf9" protocol=ttrpc version=3 Oct 13 05:45:25.524965 systemd[1]: Started cri-containerd-77f9be3221a35818cee1fd1cde82d39de769feb0fec62c98a392298237dcc173.scope - libcontainer container 77f9be3221a35818cee1fd1cde82d39de769feb0fec62c98a392298237dcc173. 
Oct 13 05:45:25.549520 containerd[1592]: time="2025-10-13T05:45:25.549482713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-hrxth,Uid:bab8350f-5321-44e7-8406-afc51d3fffef,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:45:25.567194 containerd[1592]: time="2025-10-13T05:45:25.567150440Z" level=info msg="StartContainer for \"77f9be3221a35818cee1fd1cde82d39de769feb0fec62c98a392298237dcc173\" returns successfully" Oct 13 05:45:25.573549 containerd[1592]: time="2025-10-13T05:45:25.573487016Z" level=info msg="connecting to shim e9fb415f55a7dee463dbb63332f6a1725bcaa1ecfd6a99ab1b431fddb6af88f4" address="unix:///run/containerd/s/00a6509adcf3c6b1214861c00eaa3684aba361a22149986a4ee1f5a26b1f25ea" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:25.601022 systemd[1]: Started cri-containerd-e9fb415f55a7dee463dbb63332f6a1725bcaa1ecfd6a99ab1b431fddb6af88f4.scope - libcontainer container e9fb415f55a7dee463dbb63332f6a1725bcaa1ecfd6a99ab1b431fddb6af88f4. Oct 13 05:45:25.648496 containerd[1592]: time="2025-10-13T05:45:25.648375600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-hrxth,Uid:bab8350f-5321-44e7-8406-afc51d3fffef,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e9fb415f55a7dee463dbb63332f6a1725bcaa1ecfd6a99ab1b431fddb6af88f4\"" Oct 13 05:45:25.649942 containerd[1592]: time="2025-10-13T05:45:25.649839472Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:45:25.746369 kubelet[2730]: I1013 05:45:25.746297 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c4j7q" podStartSLOduration=0.746278479 podStartE2EDuration="746.278479ms" podCreationTimestamp="2025-10-13 05:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:45:25.745847537 +0000 UTC m=+8.173455898" watchObservedRunningTime="2025-10-13 
05:45:25.746278479 +0000 UTC m=+8.173886830" Oct 13 05:45:26.242233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714658109.mount: Deactivated successfully. Oct 13 05:45:26.769709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060478900.mount: Deactivated successfully. Oct 13 05:45:28.321214 containerd[1592]: time="2025-10-13T05:45:28.321152206Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:28.321846 containerd[1592]: time="2025-10-13T05:45:28.321823032Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:45:28.322904 containerd[1592]: time="2025-10-13T05:45:28.322868710Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:28.324732 containerd[1592]: time="2025-10-13T05:45:28.324695294Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:28.325256 containerd[1592]: time="2025-10-13T05:45:28.325227847Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.675360171s" Oct 13 05:45:28.325289 containerd[1592]: time="2025-10-13T05:45:28.325254267Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:45:28.338142 containerd[1592]: time="2025-10-13T05:45:28.338090887Z" level=info 
msg="CreateContainer within sandbox \"e9fb415f55a7dee463dbb63332f6a1725bcaa1ecfd6a99ab1b431fddb6af88f4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:45:28.346216 containerd[1592]: time="2025-10-13T05:45:28.346175310Z" level=info msg="Container 3689977bb6782512da71109e7fd30b08c3bb6b0a38332d2616a5fd1c5787b879: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:28.353070 containerd[1592]: time="2025-10-13T05:45:28.353017492Z" level=info msg="CreateContainer within sandbox \"e9fb415f55a7dee463dbb63332f6a1725bcaa1ecfd6a99ab1b431fddb6af88f4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3689977bb6782512da71109e7fd30b08c3bb6b0a38332d2616a5fd1c5787b879\"" Oct 13 05:45:28.353460 containerd[1592]: time="2025-10-13T05:45:28.353421530Z" level=info msg="StartContainer for \"3689977bb6782512da71109e7fd30b08c3bb6b0a38332d2616a5fd1c5787b879\"" Oct 13 05:45:28.354168 containerd[1592]: time="2025-10-13T05:45:28.354142862Z" level=info msg="connecting to shim 3689977bb6782512da71109e7fd30b08c3bb6b0a38332d2616a5fd1c5787b879" address="unix:///run/containerd/s/00a6509adcf3c6b1214861c00eaa3684aba361a22149986a4ee1f5a26b1f25ea" protocol=ttrpc version=3 Oct 13 05:45:28.413911 systemd[1]: Started cri-containerd-3689977bb6782512da71109e7fd30b08c3bb6b0a38332d2616a5fd1c5787b879.scope - libcontainer container 3689977bb6782512da71109e7fd30b08c3bb6b0a38332d2616a5fd1c5787b879. 
Oct 13 05:45:28.444329 containerd[1592]: time="2025-10-13T05:45:28.444289846Z" level=info msg="StartContainer for \"3689977bb6782512da71109e7fd30b08c3bb6b0a38332d2616a5fd1c5787b879\" returns successfully" Oct 13 05:45:28.764309 kubelet[2730]: I1013 05:45:28.764034 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-hrxth" podStartSLOduration=1.08250015 podStartE2EDuration="3.764016419s" podCreationTimestamp="2025-10-13 05:45:25 +0000 UTC" firstStartedPulling="2025-10-13 05:45:25.649518781 +0000 UTC m=+8.077127142" lastFinishedPulling="2025-10-13 05:45:28.33103505 +0000 UTC m=+10.758643411" observedRunningTime="2025-10-13 05:45:28.763969701 +0000 UTC m=+11.191578062" watchObservedRunningTime="2025-10-13 05:45:28.764016419 +0000 UTC m=+11.191624780" Oct 13 05:45:31.335109 update_engine[1575]: I20251013 05:45:31.334908 1575 update_attempter.cc:509] Updating boot flags... Oct 13 05:45:35.124608 sudo[1801]: pam_unix(sudo:session): session closed for user root Oct 13 05:45:35.127203 sshd[1800]: Connection closed by 10.0.0.1 port 40192 Oct 13 05:45:35.130554 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Oct 13 05:45:35.136683 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:40192.service: Deactivated successfully. Oct 13 05:45:35.141856 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:45:35.142127 systemd[1]: session-7.scope: Consumed 5.675s CPU time, 233.5M memory peak. Oct 13 05:45:35.144319 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:45:35.146222 systemd-logind[1573]: Removed session 7. Oct 13 05:45:38.045693 systemd[1]: Created slice kubepods-besteffort-podaa9d4323_2829_4950_bf3c_801a94101f5a.slice - libcontainer container kubepods-besteffort-podaa9d4323_2829_4950_bf3c_801a94101f5a.slice. 
Oct 13 05:45:38.132387 kubelet[2730]: I1013 05:45:38.132330 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa9d4323-2829-4950-bf3c-801a94101f5a-tigera-ca-bundle\") pod \"calico-typha-55c8b8855b-z8tfl\" (UID: \"aa9d4323-2829-4950-bf3c-801a94101f5a\") " pod="calico-system/calico-typha-55c8b8855b-z8tfl" Oct 13 05:45:38.132387 kubelet[2730]: I1013 05:45:38.132396 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvfmj\" (UniqueName: \"kubernetes.io/projected/aa9d4323-2829-4950-bf3c-801a94101f5a-kube-api-access-wvfmj\") pod \"calico-typha-55c8b8855b-z8tfl\" (UID: \"aa9d4323-2829-4950-bf3c-801a94101f5a\") " pod="calico-system/calico-typha-55c8b8855b-z8tfl" Oct 13 05:45:38.133075 kubelet[2730]: I1013 05:45:38.132418 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aa9d4323-2829-4950-bf3c-801a94101f5a-typha-certs\") pod \"calico-typha-55c8b8855b-z8tfl\" (UID: \"aa9d4323-2829-4950-bf3c-801a94101f5a\") " pod="calico-system/calico-typha-55c8b8855b-z8tfl" Oct 13 05:45:38.482565 containerd[1592]: time="2025-10-13T05:45:38.482477894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c8b8855b-z8tfl,Uid:aa9d4323-2829-4950-bf3c-801a94101f5a,Namespace:calico-system,Attempt:0,}" Oct 13 05:45:38.719003 systemd[1]: Created slice kubepods-besteffort-podcedb7ac7_bccf_4314_adbc_086ba0cd00d1.slice - libcontainer container kubepods-besteffort-podcedb7ac7_bccf_4314_adbc_086ba0cd00d1.slice. 
Oct 13 05:45:38.740100 containerd[1592]: time="2025-10-13T05:45:38.738927756Z" level=info msg="connecting to shim a88ab238c1ed275e5523b5c02c8912a5d60ce03a90fe0855b28f9dbbec0bff77" address="unix:///run/containerd/s/ff5da72deaa828e92996850696766ab7437065be0f13321b2471d8f3ac05c951" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:38.786193 systemd[1]: Started cri-containerd-a88ab238c1ed275e5523b5c02c8912a5d60ce03a90fe0855b28f9dbbec0bff77.scope - libcontainer container a88ab238c1ed275e5523b5c02c8912a5d60ce03a90fe0855b28f9dbbec0bff77. Oct 13 05:45:38.786803 kubelet[2730]: E1013 05:45:38.786733 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llxgm" podUID="b3e721b8-9665-4f46-9b9b-bf2346733bde" Oct 13 05:45:38.837688 kubelet[2730]: I1013 05:45:38.837640 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-xtables-lock\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.837688 kubelet[2730]: I1013 05:45:38.837685 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-cni-log-dir\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838002 kubelet[2730]: I1013 05:45:38.837854 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-lib-modules\") pod \"calico-node-p65nb\" (UID: 
\"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838002 kubelet[2730]: I1013 05:45:38.837875 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-cni-bin-dir\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838002 kubelet[2730]: I1013 05:45:38.837889 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-var-run-calico\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838002 kubelet[2730]: I1013 05:45:38.837943 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7lg\" (UniqueName: \"kubernetes.io/projected/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-kube-api-access-th7lg\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838002 kubelet[2730]: I1013 05:45:38.837967 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-tigera-ca-bundle\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838116 kubelet[2730]: I1013 05:45:38.838086 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-flexvol-driver-host\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") 
" pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838116 kubelet[2730]: I1013 05:45:38.838105 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-policysync\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838161 kubelet[2730]: I1013 05:45:38.838119 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-var-lib-calico\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838260 kubelet[2730]: I1013 05:45:38.838171 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-cni-net-dir\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.838260 kubelet[2730]: I1013 05:45:38.838193 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cedb7ac7-bccf-4314-adbc-086ba0cd00d1-node-certs\") pod \"calico-node-p65nb\" (UID: \"cedb7ac7-bccf-4314-adbc-086ba0cd00d1\") " pod="calico-system/calico-node-p65nb" Oct 13 05:45:38.841767 containerd[1592]: time="2025-10-13T05:45:38.841681823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c8b8855b-z8tfl,Uid:aa9d4323-2829-4950-bf3c-801a94101f5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"a88ab238c1ed275e5523b5c02c8912a5d60ce03a90fe0855b28f9dbbec0bff77\"" Oct 13 05:45:38.844271 containerd[1592]: time="2025-10-13T05:45:38.844153232Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:45:38.939342 kubelet[2730]: I1013 05:45:38.939288 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b3e721b8-9665-4f46-9b9b-bf2346733bde-kubelet-dir\") pod \"csi-node-driver-llxgm\" (UID: \"b3e721b8-9665-4f46-9b9b-bf2346733bde\") " pod="calico-system/csi-node-driver-llxgm" Oct 13 05:45:38.939559 kubelet[2730]: I1013 05:45:38.939368 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcfhf\" (UniqueName: \"kubernetes.io/projected/b3e721b8-9665-4f46-9b9b-bf2346733bde-kube-api-access-kcfhf\") pod \"csi-node-driver-llxgm\" (UID: \"b3e721b8-9665-4f46-9b9b-bf2346733bde\") " pod="calico-system/csi-node-driver-llxgm" Oct 13 05:45:38.939559 kubelet[2730]: I1013 05:45:38.939402 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b3e721b8-9665-4f46-9b9b-bf2346733bde-varrun\") pod \"csi-node-driver-llxgm\" (UID: \"b3e721b8-9665-4f46-9b9b-bf2346733bde\") " pod="calico-system/csi-node-driver-llxgm" Oct 13 05:45:38.939559 kubelet[2730]: I1013 05:45:38.939438 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b3e721b8-9665-4f46-9b9b-bf2346733bde-registration-dir\") pod \"csi-node-driver-llxgm\" (UID: \"b3e721b8-9665-4f46-9b9b-bf2346733bde\") " pod="calico-system/csi-node-driver-llxgm" Oct 13 05:45:38.939559 kubelet[2730]: I1013 05:45:38.939452 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b3e721b8-9665-4f46-9b9b-bf2346733bde-socket-dir\") pod \"csi-node-driver-llxgm\" (UID: \"b3e721b8-9665-4f46-9b9b-bf2346733bde\") " 
pod="calico-system/csi-node-driver-llxgm" Oct 13 05:45:38.940671 kubelet[2730]: E1013 05:45:38.940612 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:38.940953 kubelet[2730]: W1013 05:45:38.940674 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:38.940953 kubelet[2730]: E1013 05:45:38.940792 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:38.942788 kubelet[2730]: E1013 05:45:38.942139 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:38.942788 kubelet[2730]: W1013 05:45:38.942190 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:38.942788 kubelet[2730]: E1013 05:45:38.942201 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:38.942788 kubelet[2730]: E1013 05:45:38.942541 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:38.942788 kubelet[2730]: W1013 05:45:38.942558 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:38.942788 kubelet[2730]: E1013 05:45:38.942582 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:38.943364 kubelet[2730]: E1013 05:45:38.942972 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:38.943364 kubelet[2730]: W1013 05:45:38.942992 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:38.943364 kubelet[2730]: E1013 05:45:38.943014 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:38.943581 kubelet[2730]: E1013 05:45:38.943529 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:38.943581 kubelet[2730]: W1013 05:45:38.943539 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:38.943581 kubelet[2730]: E1013 05:45:38.943548 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:38.944213 kubelet[2730]: E1013 05:45:38.944193 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:38.944213 kubelet[2730]: W1013 05:45:38.944209 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:38.944378 kubelet[2730]: E1013 05:45:38.944288 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:38.949811 kubelet[2730]: E1013 05:45:38.949730 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:38.949942 kubelet[2730]: W1013 05:45:38.949810 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:38.949942 kubelet[2730]: E1013 05:45:38.949854 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:38.952692 kubelet[2730]: E1013 05:45:38.952658 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:38.952692 kubelet[2730]: W1013 05:45:38.952675 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:38.952921 kubelet[2730]: E1013 05:45:38.952703 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.026286 containerd[1592]: time="2025-10-13T05:45:39.026180481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p65nb,Uid:cedb7ac7-bccf-4314-adbc-086ba0cd00d1,Namespace:calico-system,Attempt:0,}" Oct 13 05:45:39.040999 kubelet[2730]: E1013 05:45:39.040960 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.040999 kubelet[2730]: W1013 05:45:39.040990 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.041173 kubelet[2730]: E1013 05:45:39.041014 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.041354 kubelet[2730]: E1013 05:45:39.041333 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.041354 kubelet[2730]: W1013 05:45:39.041351 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.041412 kubelet[2730]: E1013 05:45:39.041362 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.041665 kubelet[2730]: E1013 05:45:39.041627 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.041665 kubelet[2730]: W1013 05:45:39.041641 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.041874 kubelet[2730]: E1013 05:45:39.041675 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.042144 kubelet[2730]: E1013 05:45:39.042111 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.042144 kubelet[2730]: W1013 05:45:39.042129 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.042144 kubelet[2730]: E1013 05:45:39.042142 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.042401 kubelet[2730]: E1013 05:45:39.042379 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.042401 kubelet[2730]: W1013 05:45:39.042396 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.042575 kubelet[2730]: E1013 05:45:39.042409 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.042892 kubelet[2730]: E1013 05:45:39.042865 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.042892 kubelet[2730]: W1013 05:45:39.042887 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.042991 kubelet[2730]: E1013 05:45:39.042901 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.043164 kubelet[2730]: E1013 05:45:39.043141 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.043164 kubelet[2730]: W1013 05:45:39.043161 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.043226 kubelet[2730]: E1013 05:45:39.043173 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.043406 kubelet[2730]: E1013 05:45:39.043380 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.043406 kubelet[2730]: W1013 05:45:39.043391 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.043406 kubelet[2730]: E1013 05:45:39.043402 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.043629 kubelet[2730]: E1013 05:45:39.043610 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.043629 kubelet[2730]: W1013 05:45:39.043624 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.043683 kubelet[2730]: E1013 05:45:39.043634 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.043879 kubelet[2730]: E1013 05:45:39.043860 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.043879 kubelet[2730]: W1013 05:45:39.043872 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.043928 kubelet[2730]: E1013 05:45:39.043882 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.044114 kubelet[2730]: E1013 05:45:39.044093 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.044114 kubelet[2730]: W1013 05:45:39.044107 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.044187 kubelet[2730]: E1013 05:45:39.044117 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.044326 kubelet[2730]: E1013 05:45:39.044309 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.044326 kubelet[2730]: W1013 05:45:39.044321 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.044379 kubelet[2730]: E1013 05:45:39.044330 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.044582 kubelet[2730]: E1013 05:45:39.044564 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.044582 kubelet[2730]: W1013 05:45:39.044576 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.044651 kubelet[2730]: E1013 05:45:39.044595 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.044895 kubelet[2730]: E1013 05:45:39.044868 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.044895 kubelet[2730]: W1013 05:45:39.044891 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.044988 kubelet[2730]: E1013 05:45:39.044910 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.045166 kubelet[2730]: E1013 05:45:39.045147 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.045166 kubelet[2730]: W1013 05:45:39.045160 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.045262 kubelet[2730]: E1013 05:45:39.045170 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.045441 kubelet[2730]: E1013 05:45:39.045405 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.045441 kubelet[2730]: W1013 05:45:39.045428 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.045604 kubelet[2730]: E1013 05:45:39.045454 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.045726 kubelet[2730]: E1013 05:45:39.045706 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.045726 kubelet[2730]: W1013 05:45:39.045717 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.045726 kubelet[2730]: E1013 05:45:39.045725 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.045973 kubelet[2730]: E1013 05:45:39.045954 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.045973 kubelet[2730]: W1013 05:45:39.045965 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.045973 kubelet[2730]: E1013 05:45:39.045974 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.046335 kubelet[2730]: E1013 05:45:39.046312 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.046335 kubelet[2730]: W1013 05:45:39.046328 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.046449 kubelet[2730]: E1013 05:45:39.046340 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.046644 kubelet[2730]: E1013 05:45:39.046613 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.046644 kubelet[2730]: W1013 05:45:39.046629 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.046729 kubelet[2730]: E1013 05:45:39.046648 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.046897 kubelet[2730]: E1013 05:45:39.046876 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.046897 kubelet[2730]: W1013 05:45:39.046889 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.046983 kubelet[2730]: E1013 05:45:39.046900 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.047174 kubelet[2730]: E1013 05:45:39.047145 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.047174 kubelet[2730]: W1013 05:45:39.047170 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.047303 kubelet[2730]: E1013 05:45:39.047181 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.047452 kubelet[2730]: E1013 05:45:39.047435 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.047452 kubelet[2730]: W1013 05:45:39.047446 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.047532 kubelet[2730]: E1013 05:45:39.047455 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.047735 kubelet[2730]: E1013 05:45:39.047700 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.047735 kubelet[2730]: W1013 05:45:39.047721 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.047735 kubelet[2730]: E1013 05:45:39.047778 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:39.048283 kubelet[2730]: E1013 05:45:39.048265 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.048283 kubelet[2730]: W1013 05:45:39.048279 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.048393 kubelet[2730]: E1013 05:45:39.048299 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.055612 containerd[1592]: time="2025-10-13T05:45:39.055037054Z" level=info msg="connecting to shim 20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579" address="unix:///run/containerd/s/01a44e71072210c48cf3edf413934a79bb531f419c8c4339ea0957b78ac534b5" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:39.063523 kubelet[2730]: E1013 05:45:39.063496 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:39.063523 kubelet[2730]: W1013 05:45:39.063516 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:39.063646 kubelet[2730]: E1013 05:45:39.063536 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:39.086939 systemd[1]: Started cri-containerd-20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579.scope - libcontainer container 20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579. 
Oct 13 05:45:39.114081 containerd[1592]: time="2025-10-13T05:45:39.114035117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p65nb,Uid:cedb7ac7-bccf-4314-adbc-086ba0cd00d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579\"" Oct 13 05:45:40.112457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1851652124.mount: Deactivated successfully. Oct 13 05:45:40.699733 kubelet[2730]: E1013 05:45:40.699657 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llxgm" podUID="b3e721b8-9665-4f46-9b9b-bf2346733bde" Oct 13 05:45:41.318772 containerd[1592]: time="2025-10-13T05:45:41.318705857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:41.319634 containerd[1592]: time="2025-10-13T05:45:41.319604463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:45:41.321007 containerd[1592]: time="2025-10-13T05:45:41.320914704Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:41.323498 containerd[1592]: time="2025-10-13T05:45:41.323440659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:41.324428 containerd[1592]: time="2025-10-13T05:45:41.324400410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.47984083s" Oct 13 05:45:41.324493 containerd[1592]: time="2025-10-13T05:45:41.324471063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:45:41.325908 containerd[1592]: time="2025-10-13T05:45:41.325888567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:45:41.339195 containerd[1592]: time="2025-10-13T05:45:41.339143994Z" level=info msg="CreateContainer within sandbox \"a88ab238c1ed275e5523b5c02c8912a5d60ce03a90fe0855b28f9dbbec0bff77\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:45:41.347210 containerd[1592]: time="2025-10-13T05:45:41.347167179Z" level=info msg="Container cb5d27c7d961cc6dc9e6e787b27ea10eabab5b011bd6b954360d3fe028028de4: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:41.357628 containerd[1592]: time="2025-10-13T05:45:41.357587086Z" level=info msg="CreateContainer within sandbox \"a88ab238c1ed275e5523b5c02c8912a5d60ce03a90fe0855b28f9dbbec0bff77\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cb5d27c7d961cc6dc9e6e787b27ea10eabab5b011bd6b954360d3fe028028de4\"" Oct 13 05:45:41.358138 containerd[1592]: time="2025-10-13T05:45:41.358111586Z" level=info msg="StartContainer for \"cb5d27c7d961cc6dc9e6e787b27ea10eabab5b011bd6b954360d3fe028028de4\"" Oct 13 05:45:41.363727 containerd[1592]: time="2025-10-13T05:45:41.363680100Z" level=info msg="connecting to shim cb5d27c7d961cc6dc9e6e787b27ea10eabab5b011bd6b954360d3fe028028de4" address="unix:///run/containerd/s/ff5da72deaa828e92996850696766ab7437065be0f13321b2471d8f3ac05c951" protocol=ttrpc version=3 Oct 13 
05:45:41.385941 systemd[1]: Started cri-containerd-cb5d27c7d961cc6dc9e6e787b27ea10eabab5b011bd6b954360d3fe028028de4.scope - libcontainer container cb5d27c7d961cc6dc9e6e787b27ea10eabab5b011bd6b954360d3fe028028de4. Oct 13 05:45:41.507268 containerd[1592]: time="2025-10-13T05:45:41.507224357Z" level=info msg="StartContainer for \"cb5d27c7d961cc6dc9e6e787b27ea10eabab5b011bd6b954360d3fe028028de4\" returns successfully" Oct 13 05:45:41.868332 kubelet[2730]: I1013 05:45:41.868227 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55c8b8855b-z8tfl" podStartSLOduration=2.386059904 podStartE2EDuration="4.868181261s" podCreationTimestamp="2025-10-13 05:45:37 +0000 UTC" firstStartedPulling="2025-10-13 05:45:38.843458649 +0000 UTC m=+21.271067010" lastFinishedPulling="2025-10-13 05:45:41.325580006 +0000 UTC m=+23.753188367" observedRunningTime="2025-10-13 05:45:41.866953164 +0000 UTC m=+24.294561525" watchObservedRunningTime="2025-10-13 05:45:41.868181261 +0000 UTC m=+24.295789612" Oct 13 05:45:41.869422 kubelet[2730]: E1013 05:45:41.869312 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:41.869422 kubelet[2730]: W1013 05:45:41.869350 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:41.869422 kubelet[2730]: E1013 05:45:41.869369 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:41.869809 kubelet[2730]: E1013 05:45:41.869601 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:41.869809 kubelet[2730]: W1013 05:45:41.869616 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:41.869809 kubelet[2730]: E1013 05:45:41.869625 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:41.870007 kubelet[2730]: E1013 05:45:41.869925 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:41.870007 kubelet[2730]: W1013 05:45:41.869935 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:41.870007 kubelet[2730]: E1013 05:45:41.869963 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:41.870311 kubelet[2730]: E1013 05:45:41.870205 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:41.870311 kubelet[2730]: W1013 05:45:41.870220 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:41.870311 kubelet[2730]: E1013 05:45:41.870229 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:45:41.870813 kubelet[2730]: E1013 05:45:41.870688 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:45:41.870813 kubelet[2730]: W1013 05:45:41.870713 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:45:41.870813 kubelet[2730]: E1013 05:45:41.870739 2730 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:45:42.700172 kubelet[2730]: E1013 05:45:42.700104 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llxgm" podUID="b3e721b8-9665-4f46-9b9b-bf2346733bde" Oct 13 05:45:42.762146 containerd[1592]: time="2025-10-13T05:45:42.762082316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:42.762909 containerd[1592]: time="2025-10-13T05:45:42.762878378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:45:42.764070 containerd[1592]: time="2025-10-13T05:45:42.764037072Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:42.766081 containerd[1592]: time="2025-10-13T05:45:42.766044658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:42.766565 containerd[1592]: time="2025-10-13T05:45:42.766522149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.440610328s" Oct 13 05:45:42.766565 containerd[1592]: time="2025-10-13T05:45:42.766563226Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:45:42.774315 containerd[1592]: time="2025-10-13T05:45:42.774280079Z" level=info msg="CreateContainer within sandbox \"20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:45:42.782447 containerd[1592]: time="2025-10-13T05:45:42.782382397Z" level=info msg="Container 0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:42.793779 containerd[1592]: time="2025-10-13T05:45:42.793712664Z" level=info msg="CreateContainer within sandbox \"20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a\"" Oct 13 05:45:42.794249 containerd[1592]: time="2025-10-13T05:45:42.794193731Z" level=info msg="StartContainer for \"0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a\"" Oct 13 05:45:42.796117 containerd[1592]: time="2025-10-13T05:45:42.796082232Z" level=info msg="connecting to shim 0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a" address="unix:///run/containerd/s/01a44e71072210c48cf3edf413934a79bb531f419c8c4339ea0957b78ac534b5" protocol=ttrpc version=3 Oct 13 05:45:42.796791 kubelet[2730]: I1013 05:45:42.796733 2730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:45:42.818907 systemd[1]: Started cri-containerd-0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a.scope - libcontainer container 0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a. 
Oct 13 05:45:42.868270 containerd[1592]: time="2025-10-13T05:45:42.868227783Z" level=info msg="StartContainer for \"0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a\" returns successfully" Oct 13 05:45:42.879063 systemd[1]: cri-containerd-0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a.scope: Deactivated successfully. Oct 13 05:45:42.884045 containerd[1592]: time="2025-10-13T05:45:42.883972464Z" level=info msg="received exit event container_id:\"0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a\" id:\"0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a\" pid:3394 exited_at:{seconds:1760334342 nanos:883391518}" Oct 13 05:45:42.884286 containerd[1592]: time="2025-10-13T05:45:42.884235109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a\" id:\"0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a\" pid:3394 exited_at:{seconds:1760334342 nanos:883391518}" Oct 13 05:45:42.910169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a816fdb80473e7f5bc1c07cb8f891dd028c36e2df633bc02ae4a29b0d70224a-rootfs.mount: Deactivated successfully. 
Oct 13 05:45:43.801955 containerd[1592]: time="2025-10-13T05:45:43.801900784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:45:44.700455 kubelet[2730]: E1013 05:45:44.700372 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llxgm" podUID="b3e721b8-9665-4f46-9b9b-bf2346733bde" Oct 13 05:45:46.439558 containerd[1592]: time="2025-10-13T05:45:46.439474853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:46.440293 containerd[1592]: time="2025-10-13T05:45:46.440212572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:45:46.441446 containerd[1592]: time="2025-10-13T05:45:46.441409496Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:46.443445 containerd[1592]: time="2025-10-13T05:45:46.443397180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:46.443990 containerd[1592]: time="2025-10-13T05:45:46.443963327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.642019922s" Oct 13 05:45:46.443990 containerd[1592]: time="2025-10-13T05:45:46.443989997Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:45:46.449369 containerd[1592]: time="2025-10-13T05:45:46.449308543Z" level=info msg="CreateContainer within sandbox \"20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:45:46.458314 containerd[1592]: time="2025-10-13T05:45:46.458277214Z" level=info msg="Container 79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:46.505668 containerd[1592]: time="2025-10-13T05:45:46.505620136Z" level=info msg="CreateContainer within sandbox \"20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50\"" Oct 13 05:45:46.506326 containerd[1592]: time="2025-10-13T05:45:46.506288385Z" level=info msg="StartContainer for \"79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50\"" Oct 13 05:45:46.508107 containerd[1592]: time="2025-10-13T05:45:46.508070562Z" level=info msg="connecting to shim 79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50" address="unix:///run/containerd/s/01a44e71072210c48cf3edf413934a79bb531f419c8c4339ea0957b78ac534b5" protocol=ttrpc version=3 Oct 13 05:45:46.535050 systemd[1]: Started cri-containerd-79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50.scope - libcontainer container 79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50. 
Oct 13 05:45:46.581693 containerd[1592]: time="2025-10-13T05:45:46.581636430Z" level=info msg="StartContainer for \"79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50\" returns successfully" Oct 13 05:45:46.700035 kubelet[2730]: E1013 05:45:46.699878 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-llxgm" podUID="b3e721b8-9665-4f46-9b9b-bf2346733bde" Oct 13 05:45:47.786164 containerd[1592]: time="2025-10-13T05:45:47.786100110Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:45:47.789945 systemd[1]: cri-containerd-79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50.scope: Deactivated successfully. Oct 13 05:45:47.790556 systemd[1]: cri-containerd-79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50.scope: Consumed 691ms CPU time, 180.6M memory peak, 3.1M read from disk, 171.3M written to disk. 
Oct 13 05:45:47.791488 containerd[1592]: time="2025-10-13T05:45:47.790775071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50\" id:\"79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50\" pid:3455 exited_at:{seconds:1760334347 nanos:790461000}" Oct 13 05:45:47.791488 containerd[1592]: time="2025-10-13T05:45:47.790907400Z" level=info msg="received exit event container_id:\"79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50\" id:\"79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50\" pid:3455 exited_at:{seconds:1760334347 nanos:790461000}" Oct 13 05:45:47.819216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79404886342afe6cca9d0085f4b079c6806bbe10d7f9f469fa8831c8cee71e50-rootfs.mount: Deactivated successfully. Oct 13 05:45:47.858591 kubelet[2730]: I1013 05:45:47.858498 2730 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 05:45:48.088053 systemd[1]: Created slice kubepods-burstable-pod93ec89cc_f558_4cf0_9863_2a1f49fa3d89.slice - libcontainer container kubepods-burstable-pod93ec89cc_f558_4cf0_9863_2a1f49fa3d89.slice. Oct 13 05:45:48.097211 systemd[1]: Created slice kubepods-besteffort-pod9d02daa2_baf6_4f84_868d_89d282edaf4a.slice - libcontainer container kubepods-besteffort-pod9d02daa2_baf6_4f84_868d_89d282edaf4a.slice. Oct 13 05:45:48.103620 systemd[1]: Created slice kubepods-besteffort-pod73e0b789_381c_4120_a5ae_3e793494b509.slice - libcontainer container kubepods-besteffort-pod73e0b789_381c_4120_a5ae_3e793494b509.slice. Oct 13 05:45:48.111374 systemd[1]: Created slice kubepods-burstable-pod29c98089_d455_4b83_980b_4b84e28d91dd.slice - libcontainer container kubepods-burstable-pod29c98089_d455_4b83_980b_4b84e28d91dd.slice. 
Oct 13 05:45:48.117762 systemd[1]: Created slice kubepods-besteffort-podce5ceb41_c5e9_40a3_b801_571d5a57bbde.slice - libcontainer container kubepods-besteffort-podce5ceb41_c5e9_40a3_b801_571d5a57bbde.slice. Oct 13 05:45:48.124735 systemd[1]: Created slice kubepods-besteffort-poda0dd8ef9_23cf_414b_8260_a256db3959dd.slice - libcontainer container kubepods-besteffort-poda0dd8ef9_23cf_414b_8260_a256db3959dd.slice. Oct 13 05:45:48.130888 systemd[1]: Created slice kubepods-besteffort-podd09be59c_7039_4e4e_8090_419990d9dff5.slice - libcontainer container kubepods-besteffort-podd09be59c_7039_4e4e_8090_419990d9dff5.slice. Oct 13 05:45:48.211604 kubelet[2730]: I1013 05:45:48.211520 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d09be59c-7039-4e4e-8090-419990d9dff5-goldmane-key-pair\") pod \"goldmane-854f97d977-hpt9p\" (UID: \"d09be59c-7039-4e4e-8090-419990d9dff5\") " pod="calico-system/goldmane-854f97d977-hpt9p" Oct 13 05:45:48.211604 kubelet[2730]: I1013 05:45:48.211576 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73e0b789-381c-4120-a5ae-3e793494b509-whisker-backend-key-pair\") pod \"whisker-bb9f76b44-rgvj4\" (UID: \"73e0b789-381c-4120-a5ae-3e793494b509\") " pod="calico-system/whisker-bb9f76b44-rgvj4" Oct 13 05:45:48.211604 kubelet[2730]: I1013 05:45:48.211597 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29c98089-d455-4b83-980b-4b84e28d91dd-config-volume\") pod \"coredns-66bc5c9577-wc44b\" (UID: \"29c98089-d455-4b83-980b-4b84e28d91dd\") " pod="kube-system/coredns-66bc5c9577-wc44b" Oct 13 05:45:48.211918 kubelet[2730]: I1013 05:45:48.211669 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lwg9v\" (UniqueName: \"kubernetes.io/projected/29c98089-d455-4b83-980b-4b84e28d91dd-kube-api-access-lwg9v\") pod \"coredns-66bc5c9577-wc44b\" (UID: \"29c98089-d455-4b83-980b-4b84e28d91dd\") " pod="kube-system/coredns-66bc5c9577-wc44b" Oct 13 05:45:48.211918 kubelet[2730]: I1013 05:45:48.211688 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e0b789-381c-4120-a5ae-3e793494b509-whisker-ca-bundle\") pod \"whisker-bb9f76b44-rgvj4\" (UID: \"73e0b789-381c-4120-a5ae-3e793494b509\") " pod="calico-system/whisker-bb9f76b44-rgvj4" Oct 13 05:45:48.211918 kubelet[2730]: I1013 05:45:48.211702 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nslq6\" (UniqueName: \"kubernetes.io/projected/73e0b789-381c-4120-a5ae-3e793494b509-kube-api-access-nslq6\") pod \"whisker-bb9f76b44-rgvj4\" (UID: \"73e0b789-381c-4120-a5ae-3e793494b509\") " pod="calico-system/whisker-bb9f76b44-rgvj4" Oct 13 05:45:48.211918 kubelet[2730]: I1013 05:45:48.211782 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n4dk\" (UniqueName: \"kubernetes.io/projected/a0dd8ef9-23cf-414b-8260-a256db3959dd-kube-api-access-2n4dk\") pod \"calico-apiserver-5588688947-z4flx\" (UID: \"a0dd8ef9-23cf-414b-8260-a256db3959dd\") " pod="calico-apiserver/calico-apiserver-5588688947-z4flx" Oct 13 05:45:48.211918 kubelet[2730]: I1013 05:45:48.211802 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d09be59c-7039-4e4e-8090-419990d9dff5-goldmane-ca-bundle\") pod \"goldmane-854f97d977-hpt9p\" (UID: \"d09be59c-7039-4e4e-8090-419990d9dff5\") " pod="calico-system/goldmane-854f97d977-hpt9p" Oct 13 05:45:48.212114 kubelet[2730]: I1013 05:45:48.211822 2730 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l44g\" (UniqueName: \"kubernetes.io/projected/d09be59c-7039-4e4e-8090-419990d9dff5-kube-api-access-4l44g\") pod \"goldmane-854f97d977-hpt9p\" (UID: \"d09be59c-7039-4e4e-8090-419990d9dff5\") " pod="calico-system/goldmane-854f97d977-hpt9p" Oct 13 05:45:48.212114 kubelet[2730]: I1013 05:45:48.211836 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8gwm\" (UniqueName: \"kubernetes.io/projected/93ec89cc-f558-4cf0-9863-2a1f49fa3d89-kube-api-access-c8gwm\") pod \"coredns-66bc5c9577-dmvc6\" (UID: \"93ec89cc-f558-4cf0-9863-2a1f49fa3d89\") " pod="kube-system/coredns-66bc5c9577-dmvc6" Oct 13 05:45:48.212114 kubelet[2730]: I1013 05:45:48.211891 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce5ceb41-c5e9-40a3-b801-571d5a57bbde-calico-apiserver-certs\") pod \"calico-apiserver-5588688947-xbkn2\" (UID: \"ce5ceb41-c5e9-40a3-b801-571d5a57bbde\") " pod="calico-apiserver/calico-apiserver-5588688947-xbkn2" Oct 13 05:45:48.212114 kubelet[2730]: I1013 05:45:48.211933 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56lfz\" (UniqueName: \"kubernetes.io/projected/ce5ceb41-c5e9-40a3-b801-571d5a57bbde-kube-api-access-56lfz\") pod \"calico-apiserver-5588688947-xbkn2\" (UID: \"ce5ceb41-c5e9-40a3-b801-571d5a57bbde\") " pod="calico-apiserver/calico-apiserver-5588688947-xbkn2" Oct 13 05:45:48.212114 kubelet[2730]: I1013 05:45:48.211952 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a0dd8ef9-23cf-414b-8260-a256db3959dd-calico-apiserver-certs\") pod \"calico-apiserver-5588688947-z4flx\" (UID: 
\"a0dd8ef9-23cf-414b-8260-a256db3959dd\") " pod="calico-apiserver/calico-apiserver-5588688947-z4flx" Oct 13 05:45:48.212279 kubelet[2730]: I1013 05:45:48.211971 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs2pj\" (UniqueName: \"kubernetes.io/projected/9d02daa2-baf6-4f84-868d-89d282edaf4a-kube-api-access-vs2pj\") pod \"calico-kube-controllers-5df6956d4d-wxn6z\" (UID: \"9d02daa2-baf6-4f84-868d-89d282edaf4a\") " pod="calico-system/calico-kube-controllers-5df6956d4d-wxn6z" Oct 13 05:45:48.212279 kubelet[2730]: I1013 05:45:48.211991 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d09be59c-7039-4e4e-8090-419990d9dff5-config\") pod \"goldmane-854f97d977-hpt9p\" (UID: \"d09be59c-7039-4e4e-8090-419990d9dff5\") " pod="calico-system/goldmane-854f97d977-hpt9p" Oct 13 05:45:48.212279 kubelet[2730]: I1013 05:45:48.212009 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/93ec89cc-f558-4cf0-9863-2a1f49fa3d89-config-volume\") pod \"coredns-66bc5c9577-dmvc6\" (UID: \"93ec89cc-f558-4cf0-9863-2a1f49fa3d89\") " pod="kube-system/coredns-66bc5c9577-dmvc6" Oct 13 05:45:48.212279 kubelet[2730]: I1013 05:45:48.212028 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d02daa2-baf6-4f84-868d-89d282edaf4a-tigera-ca-bundle\") pod \"calico-kube-controllers-5df6956d4d-wxn6z\" (UID: \"9d02daa2-baf6-4f84-868d-89d282edaf4a\") " pod="calico-system/calico-kube-controllers-5df6956d4d-wxn6z" Oct 13 05:45:48.399158 containerd[1592]: time="2025-10-13T05:45:48.399034622Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-dmvc6,Uid:93ec89cc-f558-4cf0-9863-2a1f49fa3d89,Namespace:kube-system,Attempt:0,}" Oct 13 05:45:48.403446 containerd[1592]: time="2025-10-13T05:45:48.403416079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df6956d4d-wxn6z,Uid:9d02daa2-baf6-4f84-868d-89d282edaf4a,Namespace:calico-system,Attempt:0,}" Oct 13 05:45:48.411022 containerd[1592]: time="2025-10-13T05:45:48.410939320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb9f76b44-rgvj4,Uid:73e0b789-381c-4120-a5ae-3e793494b509,Namespace:calico-system,Attempt:0,}" Oct 13 05:45:48.424333 containerd[1592]: time="2025-10-13T05:45:48.424269843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wc44b,Uid:29c98089-d455-4b83-980b-4b84e28d91dd,Namespace:kube-system,Attempt:0,}" Oct 13 05:45:48.427316 containerd[1592]: time="2025-10-13T05:45:48.427278857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5588688947-xbkn2,Uid:ce5ceb41-c5e9-40a3-b801-571d5a57bbde,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:45:48.433488 containerd[1592]: time="2025-10-13T05:45:48.433453600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5588688947-z4flx,Uid:a0dd8ef9-23cf-414b-8260-a256db3959dd,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:45:48.443427 containerd[1592]: time="2025-10-13T05:45:48.440898954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hpt9p,Uid:d09be59c-7039-4e4e-8090-419990d9dff5,Namespace:calico-system,Attempt:0,}" Oct 13 05:45:48.505552 containerd[1592]: time="2025-10-13T05:45:48.505484535Z" level=error msg="Failed to destroy network for sandbox \"bb09dcfaa6baeefe61e3c3a508206fda95eac2343a2ac484b4692a817af81239\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 13 05:45:48.513083 containerd[1592]: time="2025-10-13T05:45:48.513001564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb9f76b44-rgvj4,Uid:73e0b789-381c-4120-a5ae-3e793494b509,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb09dcfaa6baeefe61e3c3a508206fda95eac2343a2ac484b4692a817af81239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.513487 kubelet[2730]: E1013 05:45:48.513395 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb09dcfaa6baeefe61e3c3a508206fda95eac2343a2ac484b4692a817af81239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.513554 kubelet[2730]: E1013 05:45:48.513539 2730 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb09dcfaa6baeefe61e3c3a508206fda95eac2343a2ac484b4692a817af81239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bb9f76b44-rgvj4" Oct 13 05:45:48.513582 kubelet[2730]: E1013 05:45:48.513565 2730 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb09dcfaa6baeefe61e3c3a508206fda95eac2343a2ac484b4692a817af81239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-bb9f76b44-rgvj4" Oct 13 05:45:48.513661 kubelet[2730]: E1013 05:45:48.513628 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bb9f76b44-rgvj4_calico-system(73e0b789-381c-4120-a5ae-3e793494b509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bb9f76b44-rgvj4_calico-system(73e0b789-381c-4120-a5ae-3e793494b509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb09dcfaa6baeefe61e3c3a508206fda95eac2343a2ac484b4692a817af81239\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bb9f76b44-rgvj4" podUID="73e0b789-381c-4120-a5ae-3e793494b509" Oct 13 05:45:48.531507 containerd[1592]: time="2025-10-13T05:45:48.531443117Z" level=error msg="Failed to destroy network for sandbox \"f0fa3c05d63dd893b48e2f5a1a60fdce6733beb6e351792ef551ec8c9e9d637d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.533270 containerd[1592]: time="2025-10-13T05:45:48.533230562Z" level=error msg="Failed to destroy network for sandbox \"cd8ca45302f3985fd773b63b798724461e95addae60fabc8595a7df10aa33856\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.533564 containerd[1592]: time="2025-10-13T05:45:48.533538130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df6956d4d-wxn6z,Uid:9d02daa2-baf6-4f84-868d-89d282edaf4a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f0fa3c05d63dd893b48e2f5a1a60fdce6733beb6e351792ef551ec8c9e9d637d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.533969 kubelet[2730]: E1013 05:45:48.533930 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0fa3c05d63dd893b48e2f5a1a60fdce6733beb6e351792ef551ec8c9e9d637d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.534080 kubelet[2730]: E1013 05:45:48.534064 2730 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0fa3c05d63dd893b48e2f5a1a60fdce6733beb6e351792ef551ec8c9e9d637d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5df6956d4d-wxn6z" Oct 13 05:45:48.534140 kubelet[2730]: E1013 05:45:48.534127 2730 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0fa3c05d63dd893b48e2f5a1a60fdce6733beb6e351792ef551ec8c9e9d637d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5df6956d4d-wxn6z" Oct 13 05:45:48.534271 kubelet[2730]: E1013 05:45:48.534245 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5df6956d4d-wxn6z_calico-system(9d02daa2-baf6-4f84-868d-89d282edaf4a)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-5df6956d4d-wxn6z_calico-system(9d02daa2-baf6-4f84-868d-89d282edaf4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0fa3c05d63dd893b48e2f5a1a60fdce6733beb6e351792ef551ec8c9e9d637d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5df6956d4d-wxn6z" podUID="9d02daa2-baf6-4f84-868d-89d282edaf4a" Oct 13 05:45:48.535630 containerd[1592]: time="2025-10-13T05:45:48.535599290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dmvc6,Uid:93ec89cc-f558-4cf0-9863-2a1f49fa3d89,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8ca45302f3985fd773b63b798724461e95addae60fabc8595a7df10aa33856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.536265 kubelet[2730]: E1013 05:45:48.536198 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8ca45302f3985fd773b63b798724461e95addae60fabc8595a7df10aa33856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.536322 kubelet[2730]: E1013 05:45:48.536261 2730 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8ca45302f3985fd773b63b798724461e95addae60fabc8595a7df10aa33856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dmvc6" Oct 13 05:45:48.536322 kubelet[2730]: E1013 05:45:48.536286 2730 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8ca45302f3985fd773b63b798724461e95addae60fabc8595a7df10aa33856\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dmvc6" Oct 13 05:45:48.536393 kubelet[2730]: E1013 05:45:48.536354 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dmvc6_kube-system(93ec89cc-f558-4cf0-9863-2a1f49fa3d89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dmvc6_kube-system(93ec89cc-f558-4cf0-9863-2a1f49fa3d89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd8ca45302f3985fd773b63b798724461e95addae60fabc8595a7df10aa33856\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dmvc6" podUID="93ec89cc-f558-4cf0-9863-2a1f49fa3d89" Oct 13 05:45:48.544383 containerd[1592]: time="2025-10-13T05:45:48.544326217Z" level=error msg="Failed to destroy network for sandbox \"8afe8045d5252af4971aeb839d055af0fde16d6c27d975425ec9ffd572202841\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.544712 containerd[1592]: time="2025-10-13T05:45:48.544671297Z" level=error msg="Failed to destroy network for sandbox \"cc93614e4a98b72c1839de658aca2828b5ad625018e27b7ca1f04986155271a3\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.545735 containerd[1592]: time="2025-10-13T05:45:48.545691056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5588688947-z4flx,Uid:a0dd8ef9-23cf-414b-8260-a256db3959dd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8afe8045d5252af4971aeb839d055af0fde16d6c27d975425ec9ffd572202841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.546050 kubelet[2730]: E1013 05:45:48.546011 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8afe8045d5252af4971aeb839d055af0fde16d6c27d975425ec9ffd572202841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.546122 kubelet[2730]: E1013 05:45:48.546075 2730 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8afe8045d5252af4971aeb839d055af0fde16d6c27d975425ec9ffd572202841\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5588688947-z4flx" Oct 13 05:45:48.546122 kubelet[2730]: E1013 05:45:48.546097 2730 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8afe8045d5252af4971aeb839d055af0fde16d6c27d975425ec9ffd572202841\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5588688947-z4flx" Oct 13 05:45:48.546195 kubelet[2730]: E1013 05:45:48.546160 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5588688947-z4flx_calico-apiserver(a0dd8ef9-23cf-414b-8260-a256db3959dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5588688947-z4flx_calico-apiserver(a0dd8ef9-23cf-414b-8260-a256db3959dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8afe8045d5252af4971aeb839d055af0fde16d6c27d975425ec9ffd572202841\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5588688947-z4flx" podUID="a0dd8ef9-23cf-414b-8260-a256db3959dd" Oct 13 05:45:48.547139 containerd[1592]: time="2025-10-13T05:45:48.547094097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hpt9p,Uid:d09be59c-7039-4e4e-8090-419990d9dff5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc93614e4a98b72c1839de658aca2828b5ad625018e27b7ca1f04986155271a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.547906 kubelet[2730]: E1013 05:45:48.547857 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc93614e4a98b72c1839de658aca2828b5ad625018e27b7ca1f04986155271a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.548116 kubelet[2730]: E1013 05:45:48.547917 2730 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc93614e4a98b72c1839de658aca2828b5ad625018e27b7ca1f04986155271a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-hpt9p" Oct 13 05:45:48.548154 kubelet[2730]: E1013 05:45:48.548119 2730 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc93614e4a98b72c1839de658aca2828b5ad625018e27b7ca1f04986155271a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-hpt9p" Oct 13 05:45:48.548209 kubelet[2730]: E1013 05:45:48.548189 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-hpt9p_calico-system(d09be59c-7039-4e4e-8090-419990d9dff5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-hpt9p_calico-system(d09be59c-7039-4e4e-8090-419990d9dff5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc93614e4a98b72c1839de658aca2828b5ad625018e27b7ca1f04986155271a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-hpt9p" podUID="d09be59c-7039-4e4e-8090-419990d9dff5" Oct 13 05:45:48.552484 containerd[1592]: time="2025-10-13T05:45:48.552424882Z" level=error msg="Failed to destroy network for sandbox 
\"63b7cb3952d1d503c87bb7f1d142e915275902923b9fbaac621a84e5eed6be1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.554392 containerd[1592]: time="2025-10-13T05:45:48.554341910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wc44b,Uid:29c98089-d455-4b83-980b-4b84e28d91dd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63b7cb3952d1d503c87bb7f1d142e915275902923b9fbaac621a84e5eed6be1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.554973 kubelet[2730]: E1013 05:45:48.554935 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63b7cb3952d1d503c87bb7f1d142e915275902923b9fbaac621a84e5eed6be1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.555045 kubelet[2730]: E1013 05:45:48.554997 2730 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63b7cb3952d1d503c87bb7f1d142e915275902923b9fbaac621a84e5eed6be1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wc44b" Oct 13 05:45:48.555045 kubelet[2730]: E1013 05:45:48.555017 2730 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"63b7cb3952d1d503c87bb7f1d142e915275902923b9fbaac621a84e5eed6be1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wc44b" Oct 13 05:45:48.555108 kubelet[2730]: E1013 05:45:48.555082 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wc44b_kube-system(29c98089-d455-4b83-980b-4b84e28d91dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wc44b_kube-system(29c98089-d455-4b83-980b-4b84e28d91dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63b7cb3952d1d503c87bb7f1d142e915275902923b9fbaac621a84e5eed6be1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wc44b" podUID="29c98089-d455-4b83-980b-4b84e28d91dd" Oct 13 05:45:48.565820 containerd[1592]: time="2025-10-13T05:45:48.565772346Z" level=error msg="Failed to destroy network for sandbox \"61c1b904d8fcb26448517b7ff092798c2da3fe702dcb72deee5eff87752dc177\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.566988 containerd[1592]: time="2025-10-13T05:45:48.566957867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5588688947-xbkn2,Uid:ce5ceb41-c5e9-40a3-b801-571d5a57bbde,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c1b904d8fcb26448517b7ff092798c2da3fe702dcb72deee5eff87752dc177\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.567235 kubelet[2730]: E1013 05:45:48.567191 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c1b904d8fcb26448517b7ff092798c2da3fe702dcb72deee5eff87752dc177\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.567303 kubelet[2730]: E1013 05:45:48.567252 2730 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c1b904d8fcb26448517b7ff092798c2da3fe702dcb72deee5eff87752dc177\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5588688947-xbkn2" Oct 13 05:45:48.567303 kubelet[2730]: E1013 05:45:48.567275 2730 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c1b904d8fcb26448517b7ff092798c2da3fe702dcb72deee5eff87752dc177\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5588688947-xbkn2" Oct 13 05:45:48.567377 kubelet[2730]: E1013 05:45:48.567342 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5588688947-xbkn2_calico-apiserver(ce5ceb41-c5e9-40a3-b801-571d5a57bbde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5588688947-xbkn2_calico-apiserver(ce5ceb41-c5e9-40a3-b801-571d5a57bbde)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"61c1b904d8fcb26448517b7ff092798c2da3fe702dcb72deee5eff87752dc177\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5588688947-xbkn2" podUID="ce5ceb41-c5e9-40a3-b801-571d5a57bbde" Oct 13 05:45:48.705818 systemd[1]: Created slice kubepods-besteffort-podb3e721b8_9665_4f46_9b9b_bf2346733bde.slice - libcontainer container kubepods-besteffort-podb3e721b8_9665_4f46_9b9b_bf2346733bde.slice. Oct 13 05:45:48.711103 containerd[1592]: time="2025-10-13T05:45:48.711067006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-llxgm,Uid:b3e721b8-9665-4f46-9b9b-bf2346733bde,Namespace:calico-system,Attempt:0,}" Oct 13 05:45:48.765042 containerd[1592]: time="2025-10-13T05:45:48.764984817Z" level=error msg="Failed to destroy network for sandbox \"e3307d5d0cfa5ae8b541487f086f78b7ffde0be285d3bc0a1905dcb1477bf9ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.766432 containerd[1592]: time="2025-10-13T05:45:48.766394591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-llxgm,Uid:b3e721b8-9665-4f46-9b9b-bf2346733bde,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3307d5d0cfa5ae8b541487f086f78b7ffde0be285d3bc0a1905dcb1477bf9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.766693 kubelet[2730]: E1013 05:45:48.766646 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e3307d5d0cfa5ae8b541487f086f78b7ffde0be285d3bc0a1905dcb1477bf9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:45:48.766765 kubelet[2730]: E1013 05:45:48.766716 2730 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3307d5d0cfa5ae8b541487f086f78b7ffde0be285d3bc0a1905dcb1477bf9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-llxgm" Oct 13 05:45:48.766765 kubelet[2730]: E1013 05:45:48.766735 2730 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3307d5d0cfa5ae8b541487f086f78b7ffde0be285d3bc0a1905dcb1477bf9ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-llxgm" Oct 13 05:45:48.766833 kubelet[2730]: E1013 05:45:48.766811 2730 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-llxgm_calico-system(b3e721b8-9665-4f46-9b9b-bf2346733bde)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-llxgm_calico-system(b3e721b8-9665-4f46-9b9b-bf2346733bde)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3307d5d0cfa5ae8b541487f086f78b7ffde0be285d3bc0a1905dcb1477bf9ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-llxgm" 
podUID="b3e721b8-9665-4f46-9b9b-bf2346733bde" Oct 13 05:45:48.825642 containerd[1592]: time="2025-10-13T05:45:48.825448859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:45:53.213803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2966523461.mount: Deactivated successfully. Oct 13 05:45:53.555199 kubelet[2730]: I1013 05:45:53.555033 2730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:45:54.301662 containerd[1592]: time="2025-10-13T05:45:54.301597342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:54.304772 containerd[1592]: time="2025-10-13T05:45:54.303052397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:45:54.304772 containerd[1592]: time="2025-10-13T05:45:54.304140592Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:54.309727 containerd[1592]: time="2025-10-13T05:45:54.309502062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:54.311318 containerd[1592]: time="2025-10-13T05:45:54.310793641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.485203725s" Oct 13 05:45:54.311318 containerd[1592]: time="2025-10-13T05:45:54.311149709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns 
image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:45:54.344393 containerd[1592]: time="2025-10-13T05:45:54.344328434Z" level=info msg="CreateContainer within sandbox \"20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:45:54.364481 containerd[1592]: time="2025-10-13T05:45:54.364419875Z" level=info msg="Container fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:54.390183 containerd[1592]: time="2025-10-13T05:45:54.390104101Z" level=info msg="CreateContainer within sandbox \"20a11f5fb4932195245a4ee3104b5c83e6b187293571bac126e2f0db13a5d579\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89\"" Oct 13 05:45:54.391441 containerd[1592]: time="2025-10-13T05:45:54.391345124Z" level=info msg="StartContainer for \"fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89\"" Oct 13 05:45:54.393794 containerd[1592]: time="2025-10-13T05:45:54.393739976Z" level=info msg="connecting to shim fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89" address="unix:///run/containerd/s/01a44e71072210c48cf3edf413934a79bb531f419c8c4339ea0957b78ac534b5" protocol=ttrpc version=3 Oct 13 05:45:54.421885 systemd[1]: Started cri-containerd-fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89.scope - libcontainer container fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89. Oct 13 05:45:54.478157 containerd[1592]: time="2025-10-13T05:45:54.478104870Z" level=info msg="StartContainer for \"fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89\" returns successfully" Oct 13 05:45:54.558141 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:45:54.558297 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld . All Rights Reserved. Oct 13 05:45:54.753481 kubelet[2730]: I1013 05:45:54.753410 2730 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e0b789-381c-4120-a5ae-3e793494b509-whisker-ca-bundle\") pod \"73e0b789-381c-4120-a5ae-3e793494b509\" (UID: \"73e0b789-381c-4120-a5ae-3e793494b509\") " Oct 13 05:45:54.753481 kubelet[2730]: I1013 05:45:54.753487 2730 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73e0b789-381c-4120-a5ae-3e793494b509-whisker-backend-key-pair\") pod \"73e0b789-381c-4120-a5ae-3e793494b509\" (UID: \"73e0b789-381c-4120-a5ae-3e793494b509\") " Oct 13 05:45:54.755270 kubelet[2730]: I1013 05:45:54.753520 2730 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nslq6\" (UniqueName: \"kubernetes.io/projected/73e0b789-381c-4120-a5ae-3e793494b509-kube-api-access-nslq6\") pod \"73e0b789-381c-4120-a5ae-3e793494b509\" (UID: \"73e0b789-381c-4120-a5ae-3e793494b509\") " Oct 13 05:45:54.755270 kubelet[2730]: I1013 05:45:54.754811 2730 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73e0b789-381c-4120-a5ae-3e793494b509-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "73e0b789-381c-4120-a5ae-3e793494b509" (UID: "73e0b789-381c-4120-a5ae-3e793494b509"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:45:54.758314 kubelet[2730]: I1013 05:45:54.758250 2730 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e0b789-381c-4120-a5ae-3e793494b509-kube-api-access-nslq6" (OuterVolumeSpecName: "kube-api-access-nslq6") pod "73e0b789-381c-4120-a5ae-3e793494b509" (UID: "73e0b789-381c-4120-a5ae-3e793494b509"). InnerVolumeSpecName "kube-api-access-nslq6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:45:54.759341 kubelet[2730]: I1013 05:45:54.759022 2730 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e0b789-381c-4120-a5ae-3e793494b509-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "73e0b789-381c-4120-a5ae-3e793494b509" (UID: "73e0b789-381c-4120-a5ae-3e793494b509"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:45:54.846361 systemd[1]: Removed slice kubepods-besteffort-pod73e0b789_381c_4120_a5ae_3e793494b509.slice - libcontainer container kubepods-besteffort-pod73e0b789_381c_4120_a5ae_3e793494b509.slice. Oct 13 05:45:54.854611 kubelet[2730]: I1013 05:45:54.854584 2730 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nslq6\" (UniqueName: \"kubernetes.io/projected/73e0b789-381c-4120-a5ae-3e793494b509-kube-api-access-nslq6\") on node \"localhost\" DevicePath \"\"" Oct 13 05:45:54.854726 kubelet[2730]: I1013 05:45:54.854713 2730 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e0b789-381c-4120-a5ae-3e793494b509-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 05:45:54.854812 kubelet[2730]: I1013 05:45:54.854793 2730 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/73e0b789-381c-4120-a5ae-3e793494b509-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 05:45:54.866683 kubelet[2730]: I1013 05:45:54.866349 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p65nb" podStartSLOduration=1.668158982 podStartE2EDuration="16.86633498s" podCreationTimestamp="2025-10-13 05:45:38 +0000 UTC" firstStartedPulling="2025-10-13 05:45:39.115253408 +0000 UTC m=+21.542861769" lastFinishedPulling="2025-10-13 05:45:54.313429406 
+0000 UTC m=+36.741037767" observedRunningTime="2025-10-13 05:45:54.856617913 +0000 UTC m=+37.284226264" watchObservedRunningTime="2025-10-13 05:45:54.86633498 +0000 UTC m=+37.293943331" Oct 13 05:45:54.904615 systemd[1]: Created slice kubepods-besteffort-podc53a088e_b4ba_45b2_aed8_b5c3458378af.slice - libcontainer container kubepods-besteffort-podc53a088e_b4ba_45b2_aed8_b5c3458378af.slice. Oct 13 05:45:55.056331 kubelet[2730]: I1013 05:45:55.056146 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c53a088e-b4ba-45b2-aed8-b5c3458378af-whisker-ca-bundle\") pod \"whisker-fd575fcdc-2kzwh\" (UID: \"c53a088e-b4ba-45b2-aed8-b5c3458378af\") " pod="calico-system/whisker-fd575fcdc-2kzwh" Oct 13 05:45:55.056331 kubelet[2730]: I1013 05:45:55.056211 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmlx\" (UniqueName: \"kubernetes.io/projected/c53a088e-b4ba-45b2-aed8-b5c3458378af-kube-api-access-8gmlx\") pod \"whisker-fd575fcdc-2kzwh\" (UID: \"c53a088e-b4ba-45b2-aed8-b5c3458378af\") " pod="calico-system/whisker-fd575fcdc-2kzwh" Oct 13 05:45:55.056331 kubelet[2730]: I1013 05:45:55.056236 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c53a088e-b4ba-45b2-aed8-b5c3458378af-whisker-backend-key-pair\") pod \"whisker-fd575fcdc-2kzwh\" (UID: \"c53a088e-b4ba-45b2-aed8-b5c3458378af\") " pod="calico-system/whisker-fd575fcdc-2kzwh" Oct 13 05:45:55.213579 containerd[1592]: time="2025-10-13T05:45:55.213512187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd575fcdc-2kzwh,Uid:c53a088e-b4ba-45b2-aed8-b5c3458378af,Namespace:calico-system,Attempt:0,}" Oct 13 05:45:55.327522 systemd[1]: 
var-lib-kubelet-pods-73e0b789\x2d381c\x2d4120\x2da5ae\x2d3e793494b509-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnslq6.mount: Deactivated successfully. Oct 13 05:45:55.327639 systemd[1]: var-lib-kubelet-pods-73e0b789\x2d381c\x2d4120\x2da5ae\x2d3e793494b509-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:45:55.355105 systemd-networkd[1491]: calic0eb6d4795d: Link UP Oct 13 05:45:55.355871 systemd-networkd[1491]: calic0eb6d4795d: Gained carrier Oct 13 05:45:55.370840 containerd[1592]: 2025-10-13 05:45:55.237 [INFO][3836] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:45:55.370840 containerd[1592]: 2025-10-13 05:45:55.254 [INFO][3836] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--fd575fcdc--2kzwh-eth0 whisker-fd575fcdc- calico-system c53a088e-b4ba-45b2-aed8-b5c3458378af 900 0 2025-10-13 05:45:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fd575fcdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-fd575fcdc-2kzwh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic0eb6d4795d [] [] }} ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Namespace="calico-system" Pod="whisker-fd575fcdc-2kzwh" WorkloadEndpoint="localhost-k8s-whisker--fd575fcdc--2kzwh-" Oct 13 05:45:55.370840 containerd[1592]: 2025-10-13 05:45:55.254 [INFO][3836] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Namespace="calico-system" Pod="whisker-fd575fcdc-2kzwh" WorkloadEndpoint="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" Oct 13 05:45:55.370840 containerd[1592]: 2025-10-13 05:45:55.313 [INFO][3852] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" HandleID="k8s-pod-network.30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Workload="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.314 [INFO][3852] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" HandleID="k8s-pod-network.30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Workload="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ee20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-fd575fcdc-2kzwh", "timestamp":"2025-10-13 05:45:55.313836452 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.314 [INFO][3852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.315 [INFO][3852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.315 [INFO][3852] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.321 [INFO][3852] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" host="localhost" Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.327 [INFO][3852] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.330 [INFO][3852] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.332 [INFO][3852] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.334 [INFO][3852] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:45:55.371331 containerd[1592]: 2025-10-13 05:45:55.334 [INFO][3852] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" host="localhost" Oct 13 05:45:55.371549 containerd[1592]: 2025-10-13 05:45:55.335 [INFO][3852] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce Oct 13 05:45:55.371549 containerd[1592]: 2025-10-13 05:45:55.339 [INFO][3852] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" host="localhost" Oct 13 05:45:55.371549 containerd[1592]: 2025-10-13 05:45:55.343 [INFO][3852] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" host="localhost" Oct 13 05:45:55.371549 containerd[1592]: 2025-10-13 05:45:55.343 [INFO][3852] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" host="localhost" Oct 13 05:45:55.371549 containerd[1592]: 2025-10-13 05:45:55.343 [INFO][3852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:45:55.371549 containerd[1592]: 2025-10-13 05:45:55.343 [INFO][3852] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" HandleID="k8s-pod-network.30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Workload="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" Oct 13 05:45:55.371671 containerd[1592]: 2025-10-13 05:45:55.347 [INFO][3836] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Namespace="calico-system" Pod="whisker-fd575fcdc-2kzwh" WorkloadEndpoint="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fd575fcdc--2kzwh-eth0", GenerateName:"whisker-fd575fcdc-", Namespace:"calico-system", SelfLink:"", UID:"c53a088e-b4ba-45b2-aed8-b5c3458378af", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fd575fcdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-fd575fcdc-2kzwh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic0eb6d4795d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:45:55.371671 containerd[1592]: 2025-10-13 05:45:55.347 [INFO][3836] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Namespace="calico-system" Pod="whisker-fd575fcdc-2kzwh" WorkloadEndpoint="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" Oct 13 05:45:55.371789 containerd[1592]: 2025-10-13 05:45:55.347 [INFO][3836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0eb6d4795d ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Namespace="calico-system" Pod="whisker-fd575fcdc-2kzwh" WorkloadEndpoint="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" Oct 13 05:45:55.371789 containerd[1592]: 2025-10-13 05:45:55.356 [INFO][3836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Namespace="calico-system" Pod="whisker-fd575fcdc-2kzwh" WorkloadEndpoint="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" Oct 13 05:45:55.371838 containerd[1592]: 2025-10-13 05:45:55.357 [INFO][3836] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Namespace="calico-system" Pod="whisker-fd575fcdc-2kzwh" 
WorkloadEndpoint="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--fd575fcdc--2kzwh-eth0", GenerateName:"whisker-fd575fcdc-", Namespace:"calico-system", SelfLink:"", UID:"c53a088e-b4ba-45b2-aed8-b5c3458378af", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fd575fcdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce", Pod:"whisker-fd575fcdc-2kzwh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic0eb6d4795d", MAC:"a2:cd:95:2f:d4:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:45:55.371887 containerd[1592]: 2025-10-13 05:45:55.367 [INFO][3836] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" Namespace="calico-system" Pod="whisker-fd575fcdc-2kzwh" WorkloadEndpoint="localhost-k8s-whisker--fd575fcdc--2kzwh-eth0" Oct 13 05:45:55.440273 containerd[1592]: time="2025-10-13T05:45:55.440204029Z" level=info msg="connecting to shim 
30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce" address="unix:///run/containerd/s/98ecaa35ad50785b7361765659ea83d7a50052e55d071f0d5b6bda68c5473e5c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:55.468024 systemd[1]: Started cri-containerd-30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce.scope - libcontainer container 30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce. Oct 13 05:45:55.482044 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:45:55.511805 containerd[1592]: time="2025-10-13T05:45:55.511725496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd575fcdc-2kzwh,Uid:c53a088e-b4ba-45b2-aed8-b5c3458378af,Namespace:calico-system,Attempt:0,} returns sandbox id \"30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce\"" Oct 13 05:45:55.513761 containerd[1592]: time="2025-10-13T05:45:55.513699676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:45:55.702632 kubelet[2730]: I1013 05:45:55.702585 2730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e0b789-381c-4120-a5ae-3e793494b509" path="/var/lib/kubelet/pods/73e0b789-381c-4120-a5ae-3e793494b509/volumes" Oct 13 05:45:56.352729 systemd-networkd[1491]: vxlan.calico: Link UP Oct 13 05:45:56.352740 systemd-networkd[1491]: vxlan.calico: Gained carrier Oct 13 05:45:56.939098 containerd[1592]: time="2025-10-13T05:45:56.939044194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:56.939793 containerd[1592]: time="2025-10-13T05:45:56.939769006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:45:56.940847 containerd[1592]: time="2025-10-13T05:45:56.940816625Z" level=info msg="ImageCreate event 
name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:56.942799 containerd[1592]: time="2025-10-13T05:45:56.942771318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:56.943347 containerd[1592]: time="2025-10-13T05:45:56.943321623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.42957104s" Oct 13 05:45:56.943391 containerd[1592]: time="2025-10-13T05:45:56.943347942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:45:56.948719 containerd[1592]: time="2025-10-13T05:45:56.948667479Z" level=info msg="CreateContainer within sandbox \"30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:45:56.959429 containerd[1592]: time="2025-10-13T05:45:56.959379652Z" level=info msg="Container b89b74898e703c9e7e3d0481cfe95989e1cf85f33cba6efa5af3b3c5c0eb159e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:56.967267 containerd[1592]: time="2025-10-13T05:45:56.967213314Z" level=info msg="CreateContainer within sandbox \"30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b89b74898e703c9e7e3d0481cfe95989e1cf85f33cba6efa5af3b3c5c0eb159e\"" Oct 13 05:45:56.967763 containerd[1592]: 
time="2025-10-13T05:45:56.967722671Z" level=info msg="StartContainer for \"b89b74898e703c9e7e3d0481cfe95989e1cf85f33cba6efa5af3b3c5c0eb159e\"" Oct 13 05:45:56.968787 containerd[1592]: time="2025-10-13T05:45:56.968763487Z" level=info msg="connecting to shim b89b74898e703c9e7e3d0481cfe95989e1cf85f33cba6efa5af3b3c5c0eb159e" address="unix:///run/containerd/s/98ecaa35ad50785b7361765659ea83d7a50052e55d071f0d5b6bda68c5473e5c" protocol=ttrpc version=3 Oct 13 05:45:56.989874 systemd[1]: Started cri-containerd-b89b74898e703c9e7e3d0481cfe95989e1cf85f33cba6efa5af3b3c5c0eb159e.scope - libcontainer container b89b74898e703c9e7e3d0481cfe95989e1cf85f33cba6efa5af3b3c5c0eb159e. Oct 13 05:45:57.037165 containerd[1592]: time="2025-10-13T05:45:57.037123781Z" level=info msg="StartContainer for \"b89b74898e703c9e7e3d0481cfe95989e1cf85f33cba6efa5af3b3c5c0eb159e\" returns successfully" Oct 13 05:45:57.039285 containerd[1592]: time="2025-10-13T05:45:57.039262801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:45:57.255929 systemd-networkd[1491]: calic0eb6d4795d: Gained IPv6LL Oct 13 05:45:58.280508 systemd-networkd[1491]: vxlan.calico: Gained IPv6LL Oct 13 05:45:58.427415 kubelet[2730]: I1013 05:45:58.427333 2730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:45:58.704599 containerd[1592]: time="2025-10-13T05:45:58.704525286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5588688947-z4flx,Uid:a0dd8ef9-23cf-414b-8260-a256db3959dd,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:45:58.709295 containerd[1592]: time="2025-10-13T05:45:58.709258318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5588688947-xbkn2,Uid:ce5ceb41-c5e9-40a3-b801-571d5a57bbde,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:45:58.737716 containerd[1592]: time="2025-10-13T05:45:58.737654791Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89\" id:\"d1f2732d84008ed2834977353ba30e3e26171843cf80cdd735ba1e73fb0cd9eb\" pid:4172 exited_at:{seconds:1760334358 nanos:736940149}" Oct 13 05:45:58.874737 systemd-networkd[1491]: cali9d71eabfd49: Link UP Oct 13 05:45:58.875782 systemd-networkd[1491]: cali9d71eabfd49: Gained carrier Oct 13 05:45:58.893384 containerd[1592]: 2025-10-13 05:45:58.779 [INFO][4185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5588688947--z4flx-eth0 calico-apiserver-5588688947- calico-apiserver a0dd8ef9-23cf-414b-8260-a256db3959dd 833 0 2025-10-13 05:45:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5588688947 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5588688947-z4flx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d71eabfd49 [] [] }} ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-z4flx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--z4flx-" Oct 13 05:45:58.893384 containerd[1592]: 2025-10-13 05:45:58.779 [INFO][4185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-z4flx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" Oct 13 05:45:58.893384 containerd[1592]: 2025-10-13 05:45:58.818 [INFO][4231] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" 
HandleID="k8s-pod-network.6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Workload="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.818 [INFO][4231] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" HandleID="k8s-pod-network.6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Workload="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5588688947-z4flx", "timestamp":"2025-10-13 05:45:58.818559999 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.818 [INFO][4231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.818 [INFO][4231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.818 [INFO][4231] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.827 [INFO][4231] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" host="localhost" Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.834 [INFO][4231] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.840 [INFO][4231] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.843 [INFO][4231] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.846 [INFO][4231] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:45:58.893900 containerd[1592]: 2025-10-13 05:45:58.846 [INFO][4231] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" host="localhost" Oct 13 05:45:58.894160 containerd[1592]: 2025-10-13 05:45:58.849 [INFO][4231] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12 Oct 13 05:45:58.894160 containerd[1592]: 2025-10-13 05:45:58.853 [INFO][4231] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" host="localhost" Oct 13 05:45:58.894160 containerd[1592]: 2025-10-13 05:45:58.861 [INFO][4231] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" host="localhost" Oct 13 05:45:58.894160 containerd[1592]: 2025-10-13 05:45:58.862 [INFO][4231] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" host="localhost" Oct 13 05:45:58.894160 containerd[1592]: 2025-10-13 05:45:58.862 [INFO][4231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:45:58.894160 containerd[1592]: 2025-10-13 05:45:58.862 [INFO][4231] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" HandleID="k8s-pod-network.6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Workload="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" Oct 13 05:45:58.894277 containerd[1592]: 2025-10-13 05:45:58.867 [INFO][4185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-z4flx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5588688947--z4flx-eth0", GenerateName:"calico-apiserver-5588688947-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0dd8ef9-23cf-414b-8260-a256db3959dd", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5588688947", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5588688947-z4flx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d71eabfd49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:45:58.894345 containerd[1592]: 2025-10-13 05:45:58.867 [INFO][4185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-z4flx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" Oct 13 05:45:58.894345 containerd[1592]: 2025-10-13 05:45:58.867 [INFO][4185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d71eabfd49 ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-z4flx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" Oct 13 05:45:58.894345 containerd[1592]: 2025-10-13 05:45:58.876 [INFO][4185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-z4flx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" Oct 13 05:45:58.894504 containerd[1592]: 2025-10-13 05:45:58.876 [INFO][4185] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-z4flx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5588688947--z4flx-eth0", GenerateName:"calico-apiserver-5588688947-", Namespace:"calico-apiserver", SelfLink:"", UID:"a0dd8ef9-23cf-414b-8260-a256db3959dd", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5588688947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12", Pod:"calico-apiserver-5588688947-z4flx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d71eabfd49", MAC:"f2:31:7e:e7:b8:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:45:58.894570 containerd[1592]: 2025-10-13 05:45:58.888 [INFO][4185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-z4flx" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--z4flx-eth0" Oct 13 05:45:58.905403 containerd[1592]: time="2025-10-13T05:45:58.905355313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89\" id:\"1d8f765d7c2670437c65c464d5882ec5be5d03ab548aba662c4ff5710e8ec31e\" pid:4223 exited_at:{seconds:1760334358 nanos:903958248}" Oct 13 05:45:58.974427 systemd-networkd[1491]: calic2417136eeb: Link UP Oct 13 05:45:58.975131 systemd-networkd[1491]: calic2417136eeb: Gained carrier Oct 13 05:45:58.997231 containerd[1592]: time="2025-10-13T05:45:58.997165750Z" level=info msg="connecting to shim 6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12" address="unix:///run/containerd/s/8d57e58ac6eb09550edbb88bd7d4d9339380b49cc6d6b92fa772037e340e600a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:59.005370 containerd[1592]: 2025-10-13 05:45:58.798 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0 calico-apiserver-5588688947- calico-apiserver ce5ceb41-c5e9-40a3-b801-571d5a57bbde 832 0 2025-10-13 05:45:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5588688947 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5588688947-xbkn2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic2417136eeb [] [] }} ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-xbkn2" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--xbkn2-" Oct 13 05:45:59.005370 containerd[1592]: 2025-10-13 05:45:58.799 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-xbkn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" Oct 13 05:45:59.005370 containerd[1592]: 2025-10-13 05:45:58.832 [INFO][4246] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" HandleID="k8s-pod-network.bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Workload="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.832 [INFO][4246] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" HandleID="k8s-pod-network.bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Workload="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5588688947-xbkn2", "timestamp":"2025-10-13 05:45:58.832677684 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.833 [INFO][4246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.862 [INFO][4246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.863 [INFO][4246] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.927 [INFO][4246] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" host="localhost" Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.935 [INFO][4246] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.941 [INFO][4246] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.944 [INFO][4246] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.946 [INFO][4246] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:45:59.005580 containerd[1592]: 2025-10-13 05:45:58.946 [INFO][4246] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" host="localhost" Oct 13 05:45:59.005836 containerd[1592]: 2025-10-13 05:45:58.948 [INFO][4246] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329 Oct 13 05:45:59.005836 containerd[1592]: 2025-10-13 05:45:58.954 [INFO][4246] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" host="localhost" Oct 13 05:45:59.005836 containerd[1592]: 2025-10-13 05:45:58.962 [INFO][4246] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" host="localhost" Oct 13 05:45:59.005836 containerd[1592]: 2025-10-13 05:45:58.962 [INFO][4246] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" host="localhost" Oct 13 05:45:59.005836 containerd[1592]: 2025-10-13 05:45:58.962 [INFO][4246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:45:59.005836 containerd[1592]: 2025-10-13 05:45:58.962 [INFO][4246] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" HandleID="k8s-pod-network.bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Workload="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" Oct 13 05:45:59.005957 containerd[1592]: 2025-10-13 05:45:58.967 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-xbkn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0", GenerateName:"calico-apiserver-5588688947-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce5ceb41-c5e9-40a3-b801-571d5a57bbde", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5588688947", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5588688947-xbkn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2417136eeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:45:59.006010 containerd[1592]: 2025-10-13 05:45:58.967 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-xbkn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" Oct 13 05:45:59.006010 containerd[1592]: 2025-10-13 05:45:58.967 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2417136eeb ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-xbkn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" Oct 13 05:45:59.006010 containerd[1592]: 2025-10-13 05:45:58.977 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-xbkn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" Oct 13 05:45:59.006089 containerd[1592]: 2025-10-13 05:45:58.979 [INFO][4186] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-xbkn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0", GenerateName:"calico-apiserver-5588688947-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce5ceb41-c5e9-40a3-b801-571d5a57bbde", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5588688947", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329", Pod:"calico-apiserver-5588688947-xbkn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic2417136eeb", MAC:"42:10:59:6d:54:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:45:59.006139 containerd[1592]: 2025-10-13 05:45:58.996 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" Namespace="calico-apiserver" Pod="calico-apiserver-5588688947-xbkn2" WorkloadEndpoint="localhost-k8s-calico--apiserver--5588688947--xbkn2-eth0" Oct 13 05:45:59.041152 systemd[1]: Started cri-containerd-6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12.scope - libcontainer container 6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12. Oct 13 05:45:59.050741 containerd[1592]: time="2025-10-13T05:45:59.050696053Z" level=info msg="connecting to shim bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329" address="unix:///run/containerd/s/d11441c30b23f40016f898707d6a6b37aeb0bc1d85a43d1fea5747b4b8d42ad8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:45:59.065517 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:45:59.087947 systemd[1]: Started cri-containerd-bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329.scope - libcontainer container bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329. 
Oct 13 05:45:59.109120 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:45:59.131958 containerd[1592]: time="2025-10-13T05:45:59.131902788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5588688947-z4flx,Uid:a0dd8ef9-23cf-414b-8260-a256db3959dd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12\"" Oct 13 05:45:59.217185 containerd[1592]: time="2025-10-13T05:45:59.217135455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5588688947-xbkn2,Uid:ce5ceb41-c5e9-40a3-b801-571d5a57bbde,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329\"" Oct 13 05:45:59.375480 containerd[1592]: time="2025-10-13T05:45:59.375409309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:59.376181 containerd[1592]: time="2025-10-13T05:45:59.376128821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:45:59.377479 containerd[1592]: time="2025-10-13T05:45:59.377426258Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:59.379556 containerd[1592]: time="2025-10-13T05:45:59.379520783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:45:59.380172 containerd[1592]: time="2025-10-13T05:45:59.380136309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id 
\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.340846206s" Oct 13 05:45:59.380219 containerd[1592]: time="2025-10-13T05:45:59.380174601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:45:59.381922 containerd[1592]: time="2025-10-13T05:45:59.381883721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:45:59.386224 containerd[1592]: time="2025-10-13T05:45:59.386187036Z" level=info msg="CreateContainer within sandbox \"30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:45:59.395311 containerd[1592]: time="2025-10-13T05:45:59.395256745Z" level=info msg="Container c0cd6bb51ad80d160a3941491f79a54617a11178651c3bd5e5d77f7116b4a789: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:45:59.405466 containerd[1592]: time="2025-10-13T05:45:59.405407353Z" level=info msg="CreateContainer within sandbox \"30b7fb6c1c7d860078266403186f0c28629813af182093730d91659b5d1c5dce\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c0cd6bb51ad80d160a3941491f79a54617a11178651c3bd5e5d77f7116b4a789\"" Oct 13 05:45:59.405977 containerd[1592]: time="2025-10-13T05:45:59.405948690Z" level=info msg="StartContainer for \"c0cd6bb51ad80d160a3941491f79a54617a11178651c3bd5e5d77f7116b4a789\"" Oct 13 05:45:59.408104 containerd[1592]: time="2025-10-13T05:45:59.408079493Z" level=info msg="connecting to shim c0cd6bb51ad80d160a3941491f79a54617a11178651c3bd5e5d77f7116b4a789" address="unix:///run/containerd/s/98ecaa35ad50785b7361765659ea83d7a50052e55d071f0d5b6bda68c5473e5c" 
protocol=ttrpc version=3 Oct 13 05:45:59.434108 systemd[1]: Started cri-containerd-c0cd6bb51ad80d160a3941491f79a54617a11178651c3bd5e5d77f7116b4a789.scope - libcontainer container c0cd6bb51ad80d160a3941491f79a54617a11178651c3bd5e5d77f7116b4a789. Oct 13 05:45:59.496108 containerd[1592]: time="2025-10-13T05:45:59.496059081Z" level=info msg="StartContainer for \"c0cd6bb51ad80d160a3941491f79a54617a11178651c3bd5e5d77f7116b4a789\" returns successfully" Oct 13 05:45:59.638469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1345018067.mount: Deactivated successfully. Oct 13 05:45:59.704783 containerd[1592]: time="2025-10-13T05:45:59.704720021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dmvc6,Uid:93ec89cc-f558-4cf0-9863-2a1f49fa3d89,Namespace:kube-system,Attempt:0,}" Oct 13 05:45:59.944997 systemd-networkd[1491]: cali9d71eabfd49: Gained IPv6LL Oct 13 05:45:59.989505 systemd-networkd[1491]: cali2f80b4e6847: Link UP Oct 13 05:45:59.989823 systemd-networkd[1491]: cali2f80b4e6847: Gained carrier Oct 13 05:46:00.010092 kubelet[2730]: I1013 05:46:00.010004 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-fd575fcdc-2kzwh" podStartSLOduration=2.142165178 podStartE2EDuration="6.009951777s" podCreationTimestamp="2025-10-13 05:45:54 +0000 UTC" firstStartedPulling="2025-10-13 05:45:55.513398981 +0000 UTC m=+37.941007342" lastFinishedPulling="2025-10-13 05:45:59.38118558 +0000 UTC m=+41.808793941" observedRunningTime="2025-10-13 05:46:00.007904682 +0000 UTC m=+42.435513043" watchObservedRunningTime="2025-10-13 05:46:00.009951777 +0000 UTC m=+42.437560139" Oct 13 05:46:00.022089 containerd[1592]: 2025-10-13 05:45:59.751 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--dmvc6-eth0 coredns-66bc5c9577- kube-system 93ec89cc-f558-4cf0-9863-2a1f49fa3d89 824 0 2025-10-13 05:45:25 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-dmvc6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f80b4e6847 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Namespace="kube-system" Pod="coredns-66bc5c9577-dmvc6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmvc6-" Oct 13 05:46:00.022089 containerd[1592]: 2025-10-13 05:45:59.754 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Namespace="kube-system" Pod="coredns-66bc5c9577-dmvc6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" Oct 13 05:46:00.022089 containerd[1592]: 2025-10-13 05:45:59.787 [INFO][4417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" HandleID="k8s-pod-network.20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Workload="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.817 [INFO][4417] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" HandleID="k8s-pod-network.20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Workload="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050cb60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-dmvc6", "timestamp":"2025-10-13 05:45:59.787681783 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.817 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.817 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.817 [INFO][4417] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.825 [INFO][4417] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" host="localhost" Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.830 [INFO][4417] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.838 [INFO][4417] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.840 [INFO][4417] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.844 [INFO][4417] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:00.022342 containerd[1592]: 2025-10-13 05:45:59.844 [INFO][4417] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" host="localhost" Oct 13 05:46:00.022572 containerd[1592]: 2025-10-13 05:45:59.845 [INFO][4417] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7 Oct 13 05:46:00.022572 containerd[1592]: 2025-10-13 
05:45:59.885 [INFO][4417] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" host="localhost" Oct 13 05:46:00.022572 containerd[1592]: 2025-10-13 05:45:59.983 [INFO][4417] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" host="localhost" Oct 13 05:46:00.022572 containerd[1592]: 2025-10-13 05:45:59.983 [INFO][4417] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" host="localhost" Oct 13 05:46:00.022572 containerd[1592]: 2025-10-13 05:45:59.983 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:46:00.022572 containerd[1592]: 2025-10-13 05:45:59.983 [INFO][4417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" HandleID="k8s-pod-network.20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Workload="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" Oct 13 05:46:00.022693 containerd[1592]: 2025-10-13 05:45:59.986 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Namespace="kube-system" Pod="coredns-66bc5c9577-dmvc6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dmvc6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"93ec89cc-f558-4cf0-9863-2a1f49fa3d89", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 
25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-dmvc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f80b4e6847", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:00.022693 containerd[1592]: 2025-10-13 05:45:59.986 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Namespace="kube-system" Pod="coredns-66bc5c9577-dmvc6" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" Oct 13 05:46:00.022693 containerd[1592]: 2025-10-13 05:45:59.986 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f80b4e6847 ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Namespace="kube-system" Pod="coredns-66bc5c9577-dmvc6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" Oct 13 05:46:00.022693 containerd[1592]: 2025-10-13 05:45:59.990 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Namespace="kube-system" Pod="coredns-66bc5c9577-dmvc6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" Oct 13 05:46:00.022693 containerd[1592]: 2025-10-13 05:45:59.990 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Namespace="kube-system" Pod="coredns-66bc5c9577-dmvc6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dmvc6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"93ec89cc-f558-4cf0-9863-2a1f49fa3d89", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7", Pod:"coredns-66bc5c9577-dmvc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f80b4e6847", MAC:"56:fc:9d:7d:65:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:00.022693 containerd[1592]: 2025-10-13 05:46:00.009 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" Namespace="kube-system" Pod="coredns-66bc5c9577-dmvc6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dmvc6-eth0" Oct 13 05:46:00.073495 containerd[1592]: time="2025-10-13T05:46:00.073407230Z" level=info msg="connecting to shim 20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7" address="unix:///run/containerd/s/194ccbc294a6979e0606444181e86a7e5e88ec0b206326dcb109660964042ad2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:46:00.128339 systemd[1]: Started 
cri-containerd-20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7.scope - libcontainer container 20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7. Oct 13 05:46:00.136921 systemd-networkd[1491]: calic2417136eeb: Gained IPv6LL Oct 13 05:46:00.151486 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:46:00.200185 containerd[1592]: time="2025-10-13T05:46:00.199912264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dmvc6,Uid:93ec89cc-f558-4cf0-9863-2a1f49fa3d89,Namespace:kube-system,Attempt:0,} returns sandbox id \"20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7\"" Oct 13 05:46:00.208566 containerd[1592]: time="2025-10-13T05:46:00.208513681Z" level=info msg="CreateContainer within sandbox \"20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:46:00.227647 containerd[1592]: time="2025-10-13T05:46:00.227259903Z" level=info msg="Container c2ef1c308eb107257f380182861817c86de376e94dda8909798c878b406ccdef: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:46:00.235075 containerd[1592]: time="2025-10-13T05:46:00.235023345Z" level=info msg="CreateContainer within sandbox \"20b5bcd92afb6d9e09a9820413422dff52a59a2137e3f88601e42e266b400eb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c2ef1c308eb107257f380182861817c86de376e94dda8909798c878b406ccdef\"" Oct 13 05:46:00.235718 containerd[1592]: time="2025-10-13T05:46:00.235696180Z" level=info msg="StartContainer for \"c2ef1c308eb107257f380182861817c86de376e94dda8909798c878b406ccdef\"" Oct 13 05:46:00.236632 containerd[1592]: time="2025-10-13T05:46:00.236611058Z" level=info msg="connecting to shim c2ef1c308eb107257f380182861817c86de376e94dda8909798c878b406ccdef" address="unix:///run/containerd/s/194ccbc294a6979e0606444181e86a7e5e88ec0b206326dcb109660964042ad2" 
protocol=ttrpc version=3 Oct 13 05:46:00.264041 systemd[1]: Started cri-containerd-c2ef1c308eb107257f380182861817c86de376e94dda8909798c878b406ccdef.scope - libcontainer container c2ef1c308eb107257f380182861817c86de376e94dda8909798c878b406ccdef. Oct 13 05:46:00.311333 containerd[1592]: time="2025-10-13T05:46:00.311241963Z" level=info msg="StartContainer for \"c2ef1c308eb107257f380182861817c86de376e94dda8909798c878b406ccdef\" returns successfully" Oct 13 05:46:00.633648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2091885867.mount: Deactivated successfully. Oct 13 05:46:01.006875 kubelet[2730]: I1013 05:46:01.006096 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dmvc6" podStartSLOduration=36.006073765 podStartE2EDuration="36.006073765s" podCreationTimestamp="2025-10-13 05:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:46:01.004818918 +0000 UTC m=+43.432427279" watchObservedRunningTime="2025-10-13 05:46:01.006073765 +0000 UTC m=+43.433682126" Oct 13 05:46:01.545814 systemd-networkd[1491]: cali2f80b4e6847: Gained IPv6LL Oct 13 05:46:01.826273 containerd[1592]: time="2025-10-13T05:46:01.826038121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wc44b,Uid:29c98089-d455-4b83-980b-4b84e28d91dd,Namespace:kube-system,Attempt:0,}" Oct 13 05:46:01.831312 containerd[1592]: time="2025-10-13T05:46:01.831263465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df6956d4d-wxn6z,Uid:9d02daa2-baf6-4f84-868d-89d282edaf4a,Namespace:calico-system,Attempt:0,}" Oct 13 05:46:01.872778 containerd[1592]: time="2025-10-13T05:46:01.872551521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:01.873329 containerd[1592]: 
time="2025-10-13T05:46:01.873309675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:46:01.874691 containerd[1592]: time="2025-10-13T05:46:01.874644020Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:01.876724 containerd[1592]: time="2025-10-13T05:46:01.876693210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:01.877141 containerd[1592]: time="2025-10-13T05:46:01.877102398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.495187468s" Oct 13 05:46:01.877141 containerd[1592]: time="2025-10-13T05:46:01.877131843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:46:01.880770 containerd[1592]: time="2025-10-13T05:46:01.880555844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:46:01.885217 containerd[1592]: time="2025-10-13T05:46:01.885187584Z" level=info msg="CreateContainer within sandbox \"6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:46:01.898572 containerd[1592]: time="2025-10-13T05:46:01.898530854Z" level=info msg="Container 19d86f713940e899b0e2ab9de739e085accf8ee9de79121448c2329c2ce77ed6: CDI devices 
from CRI Config.CDIDevices: []" Oct 13 05:46:01.912526 containerd[1592]: time="2025-10-13T05:46:01.912443404Z" level=info msg="CreateContainer within sandbox \"6d65d73cda7244dcf97f3de0e7086b7f2bab5f260fd59284a3081ef732f27c12\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"19d86f713940e899b0e2ab9de739e085accf8ee9de79121448c2329c2ce77ed6\"" Oct 13 05:46:01.913850 containerd[1592]: time="2025-10-13T05:46:01.913808458Z" level=info msg="StartContainer for \"19d86f713940e899b0e2ab9de739e085accf8ee9de79121448c2329c2ce77ed6\"" Oct 13 05:46:01.917882 containerd[1592]: time="2025-10-13T05:46:01.917586233Z" level=info msg="connecting to shim 19d86f713940e899b0e2ab9de739e085accf8ee9de79121448c2329c2ce77ed6" address="unix:///run/containerd/s/8d57e58ac6eb09550edbb88bd7d4d9339380b49cc6d6b92fa772037e340e600a" protocol=ttrpc version=3 Oct 13 05:46:02.002611 systemd-networkd[1491]: cali8cc66e08086: Link UP Oct 13 05:46:02.004898 systemd-networkd[1491]: cali8cc66e08086: Gained carrier Oct 13 05:46:02.019915 systemd[1]: Started cri-containerd-19d86f713940e899b0e2ab9de739e085accf8ee9de79121448c2329c2ce77ed6.scope - libcontainer container 19d86f713940e899b0e2ab9de739e085accf8ee9de79121448c2329c2ce77ed6. 
Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.892 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0 calico-kube-controllers-5df6956d4d- calico-system 9d02daa2-baf6-4f84-868d-89d282edaf4a 829 0 2025-10-13 05:45:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5df6956d4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5df6956d4d-wxn6z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8cc66e08086 [] [] }} ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Namespace="calico-system" Pod="calico-kube-controllers-5df6956d4d-wxn6z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.892 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Namespace="calico-system" Pod="calico-kube-controllers-5df6956d4d-wxn6z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.943 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" HandleID="k8s-pod-network.7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Workload="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.943 [INFO][4560] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" HandleID="k8s-pod-network.7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Workload="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000428220), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5df6956d4d-wxn6z", "timestamp":"2025-10-13 05:46:01.943045169 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.944 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.944 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.944 [INFO][4560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.951 [INFO][4560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" host="localhost" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.956 [INFO][4560] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.960 [INFO][4560] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.970 [INFO][4560] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.974 [INFO][4560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.975 [INFO][4560] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" host="localhost" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.978 [INFO][4560] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5 Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.987 [INFO][4560] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" host="localhost" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.991 [INFO][4560] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" host="localhost" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.992 [INFO][4560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" host="localhost" Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.992 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:46:02.053036 containerd[1592]: 2025-10-13 05:46:01.993 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" HandleID="k8s-pod-network.7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Workload="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" Oct 13 05:46:02.053634 containerd[1592]: 2025-10-13 05:46:01.997 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Namespace="calico-system" Pod="calico-kube-controllers-5df6956d4d-wxn6z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0", GenerateName:"calico-kube-controllers-5df6956d4d-", Namespace:"calico-system", SelfLink:"", UID:"9d02daa2-baf6-4f84-868d-89d282edaf4a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5df6956d4d", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5df6956d4d-wxn6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8cc66e08086", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:02.053634 containerd[1592]: 2025-10-13 05:46:01.997 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Namespace="calico-system" Pod="calico-kube-controllers-5df6956d4d-wxn6z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" Oct 13 05:46:02.053634 containerd[1592]: 2025-10-13 05:46:01.997 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cc66e08086 ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Namespace="calico-system" Pod="calico-kube-controllers-5df6956d4d-wxn6z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" Oct 13 05:46:02.053634 containerd[1592]: 2025-10-13 05:46:02.012 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Namespace="calico-system" Pod="calico-kube-controllers-5df6956d4d-wxn6z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" Oct 13 05:46:02.053634 containerd[1592]: 
2025-10-13 05:46:02.017 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Namespace="calico-system" Pod="calico-kube-controllers-5df6956d4d-wxn6z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0", GenerateName:"calico-kube-controllers-5df6956d4d-", Namespace:"calico-system", SelfLink:"", UID:"9d02daa2-baf6-4f84-868d-89d282edaf4a", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5df6956d4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5", Pod:"calico-kube-controllers-5df6956d4d-wxn6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8cc66e08086", MAC:"ce:f5:9a:ed:32:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:02.053634 containerd[1592]: 
2025-10-13 05:46:02.038 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" Namespace="calico-system" Pod="calico-kube-controllers-5df6956d4d-wxn6z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5df6956d4d--wxn6z-eth0" Oct 13 05:46:02.323831 systemd-networkd[1491]: cali40506c83e23: Link UP Oct 13 05:46:02.324773 systemd-networkd[1491]: cali40506c83e23: Gained carrier Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:01.896 [INFO][4529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--wc44b-eth0 coredns-66bc5c9577- kube-system 29c98089-d455-4b83-980b-4b84e28d91dd 831 0 2025-10-13 05:45:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-wc44b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali40506c83e23 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Namespace="kube-system" Pod="coredns-66bc5c9577-wc44b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wc44b-" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:01.896 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Namespace="kube-system" Pod="coredns-66bc5c9577-wc44b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:01.971 [INFO][4562] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" 
HandleID="k8s-pod-network.9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Workload="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:01.971 [INFO][4562] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" HandleID="k8s-pod-network.9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Workload="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-wc44b", "timestamp":"2025-10-13 05:46:01.971120199 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:01.971 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:01.993 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:01.993 [INFO][4562] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.057 [INFO][4562] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" host="localhost" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.082 [INFO][4562] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.087 [INFO][4562] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.089 [INFO][4562] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.094 [INFO][4562] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.094 [INFO][4562] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" host="localhost" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.095 [INFO][4562] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738 Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.266 [INFO][4562] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" host="localhost" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.316 [INFO][4562] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" host="localhost" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.316 [INFO][4562] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" host="localhost" Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.316 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:46:02.537024 containerd[1592]: 2025-10-13 05:46:02.316 [INFO][4562] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" HandleID="k8s-pod-network.9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Workload="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" Oct 13 05:46:02.537921 containerd[1592]: 2025-10-13 05:46:02.320 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Namespace="kube-system" Pod="coredns-66bc5c9577-wc44b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wc44b-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"29c98089-d455-4b83-980b-4b84e28d91dd", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-wc44b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40506c83e23", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:02.537921 containerd[1592]: 2025-10-13 05:46:02.320 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Namespace="kube-system" Pod="coredns-66bc5c9577-wc44b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" Oct 13 05:46:02.537921 containerd[1592]: 2025-10-13 05:46:02.320 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40506c83e23 ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Namespace="kube-system" Pod="coredns-66bc5c9577-wc44b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" Oct 13 
05:46:02.537921 containerd[1592]: 2025-10-13 05:46:02.325 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Namespace="kube-system" Pod="coredns-66bc5c9577-wc44b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" Oct 13 05:46:02.537921 containerd[1592]: 2025-10-13 05:46:02.325 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Namespace="kube-system" Pod="coredns-66bc5c9577-wc44b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--wc44b-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"29c98089-d455-4b83-980b-4b84e28d91dd", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738", Pod:"coredns-66bc5c9577-wc44b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali40506c83e23", 
MAC:"2a:b5:0b:6b:f2:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:02.537921 containerd[1592]: 2025-10-13 05:46:02.529 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" Namespace="kube-system" Pod="coredns-66bc5c9577-wc44b" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--wc44b-eth0" Oct 13 05:46:02.604034 containerd[1592]: time="2025-10-13T05:46:02.603985193Z" level=info msg="StartContainer for \"19d86f713940e899b0e2ab9de739e085accf8ee9de79121448c2329c2ce77ed6\" returns successfully" Oct 13 05:46:02.706652 containerd[1592]: time="2025-10-13T05:46:02.706576556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-llxgm,Uid:b3e721b8-9665-4f46-9b9b-bf2346733bde,Namespace:calico-system,Attempt:0,}" Oct 13 05:46:02.737358 containerd[1592]: time="2025-10-13T05:46:02.736575373Z" level=info msg="connecting to shim 7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5" address="unix:///run/containerd/s/d82d3a01b248d5e4c8b70f88859a7f849aa0c8adf4e244c13b4b08d852934d5d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:46:02.763696 
containerd[1592]: time="2025-10-13T05:46:02.763487453Z" level=info msg="connecting to shim 9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738" address="unix:///run/containerd/s/d94b74fb6f00fc015f3444993bcedcad43d7895683092a5b81c113d75b21338e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:46:02.772658 containerd[1592]: time="2025-10-13T05:46:02.772601378Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:02.774090 containerd[1592]: time="2025-10-13T05:46:02.774059055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:46:02.776029 containerd[1592]: time="2025-10-13T05:46:02.775986716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 895.386318ms" Oct 13 05:46:02.777473 containerd[1592]: time="2025-10-13T05:46:02.777445244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:46:02.803982 systemd[1]: Started cri-containerd-7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5.scope - libcontainer container 7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5. Oct 13 05:46:02.809434 systemd[1]: Started cri-containerd-9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738.scope - libcontainer container 9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738. 
Oct 13 05:46:02.814071 containerd[1592]: time="2025-10-13T05:46:02.814020001Z" level=info msg="CreateContainer within sandbox \"bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:46:02.829100 containerd[1592]: time="2025-10-13T05:46:02.828367194Z" level=info msg="Container 0016729a10a60e1bd1c020ae2060a36feb27b878fc5f882fe8a56c3a5ca71fd2: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:46:02.830829 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:46:02.843362 containerd[1592]: time="2025-10-13T05:46:02.843313774Z" level=info msg="CreateContainer within sandbox \"bc52baa8b4bfe4e9434743fd30d98bbe1eac9457cac019304b5e2ad3dcdbe329\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0016729a10a60e1bd1c020ae2060a36feb27b878fc5f882fe8a56c3a5ca71fd2\"" Oct 13 05:46:02.844257 containerd[1592]: time="2025-10-13T05:46:02.844222269Z" level=info msg="StartContainer for \"0016729a10a60e1bd1c020ae2060a36feb27b878fc5f882fe8a56c3a5ca71fd2\"" Oct 13 05:46:02.846352 containerd[1592]: time="2025-10-13T05:46:02.846311213Z" level=info msg="connecting to shim 0016729a10a60e1bd1c020ae2060a36feb27b878fc5f882fe8a56c3a5ca71fd2" address="unix:///run/containerd/s/d11441c30b23f40016f898707d6a6b37aeb0bc1d85a43d1fea5747b4b8d42ad8" protocol=ttrpc version=3 Oct 13 05:46:02.867144 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:46:02.884114 systemd[1]: Started cri-containerd-0016729a10a60e1bd1c020ae2060a36feb27b878fc5f882fe8a56c3a5ca71fd2.scope - libcontainer container 0016729a10a60e1bd1c020ae2060a36feb27b878fc5f882fe8a56c3a5ca71fd2. 
Oct 13 05:46:02.902944 containerd[1592]: time="2025-10-13T05:46:02.902839401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wc44b,Uid:29c98089-d455-4b83-980b-4b84e28d91dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738\"" Oct 13 05:46:02.912904 containerd[1592]: time="2025-10-13T05:46:02.912815415Z" level=info msg="CreateContainer within sandbox \"9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:46:02.935719 containerd[1592]: time="2025-10-13T05:46:02.935629800Z" level=info msg="Container 08367a32185f1adf2cc6cb6a676c9042088d9af65440d7c2ffc8bab1d9e11df4: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:46:02.950971 containerd[1592]: time="2025-10-13T05:46:02.950912560Z" level=info msg="CreateContainer within sandbox \"9a867875c91b5ac867d894ed35a2601b1ea93a727c075af31fe0798f35f79738\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08367a32185f1adf2cc6cb6a676c9042088d9af65440d7c2ffc8bab1d9e11df4\"" Oct 13 05:46:02.951533 containerd[1592]: time="2025-10-13T05:46:02.951500775Z" level=info msg="StartContainer for \"08367a32185f1adf2cc6cb6a676c9042088d9af65440d7c2ffc8bab1d9e11df4\"" Oct 13 05:46:02.952507 containerd[1592]: time="2025-10-13T05:46:02.952309213Z" level=info msg="connecting to shim 08367a32185f1adf2cc6cb6a676c9042088d9af65440d7c2ffc8bab1d9e11df4" address="unix:///run/containerd/s/d94b74fb6f00fc015f3444993bcedcad43d7895683092a5b81c113d75b21338e" protocol=ttrpc version=3 Oct 13 05:46:02.961483 kubelet[2730]: I1013 05:46:02.961150 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5588688947-z4flx" podStartSLOduration=25.217622607 podStartE2EDuration="27.96113084s" podCreationTimestamp="2025-10-13 05:45:35 +0000 UTC" firstStartedPulling="2025-10-13 05:45:59.135279421 +0000 UTC 
m=+41.562887772" lastFinishedPulling="2025-10-13 05:46:01.878787644 +0000 UTC m=+44.306396005" observedRunningTime="2025-10-13 05:46:02.959563897 +0000 UTC m=+45.387172258" watchObservedRunningTime="2025-10-13 05:46:02.96113084 +0000 UTC m=+45.388739201" Oct 13 05:46:02.993061 systemd[1]: Started cri-containerd-08367a32185f1adf2cc6cb6a676c9042088d9af65440d7c2ffc8bab1d9e11df4.scope - libcontainer container 08367a32185f1adf2cc6cb6a676c9042088d9af65440d7c2ffc8bab1d9e11df4. Oct 13 05:46:03.013514 containerd[1592]: time="2025-10-13T05:46:03.013395188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df6956d4d-wxn6z,Uid:9d02daa2-baf6-4f84-868d-89d282edaf4a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5\"" Oct 13 05:46:03.019142 containerd[1592]: time="2025-10-13T05:46:03.019064914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:46:03.049937 systemd-networkd[1491]: cali7b01b2bc7f6: Link UP Oct 13 05:46:03.053033 systemd-networkd[1491]: cali7b01b2bc7f6: Gained carrier Oct 13 05:46:03.062080 containerd[1592]: time="2025-10-13T05:46:03.061432211Z" level=info msg="StartContainer for \"08367a32185f1adf2cc6cb6a676c9042088d9af65440d7c2ffc8bab1d9e11df4\" returns successfully" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:02.813 [INFO][4649] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--llxgm-eth0 csi-node-driver- calico-system b3e721b8-9665-4f46-9b9b-bf2346733bde 730 0 2025-10-13 05:45:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-llxgm eth0 
csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7b01b2bc7f6 [] [] }} ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Namespace="calico-system" Pod="csi-node-driver-llxgm" WorkloadEndpoint="localhost-k8s-csi--node--driver--llxgm-" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:02.816 [INFO][4649] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Namespace="calico-system" Pod="csi-node-driver-llxgm" WorkloadEndpoint="localhost-k8s-csi--node--driver--llxgm-eth0" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:02.976 [INFO][4728] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" HandleID="k8s-pod-network.7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Workload="localhost-k8s-csi--node--driver--llxgm-eth0" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:02.977 [INFO][4728] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" HandleID="k8s-pod-network.7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Workload="localhost-k8s-csi--node--driver--llxgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005bb9e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-llxgm", "timestamp":"2025-10-13 05:46:02.976939688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:02.978 [INFO][4728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:02.978 [INFO][4728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:02.978 [INFO][4728] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:02.990 [INFO][4728] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" host="localhost" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.002 [INFO][4728] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.014 [INFO][4728] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.016 [INFO][4728] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.019 [INFO][4728] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.019 [INFO][4728] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" host="localhost" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.021 [INFO][4728] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06 Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.029 [INFO][4728] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" host="localhost" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.036 [INFO][4728] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" host="localhost" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.036 [INFO][4728] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" host="localhost" Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.036 [INFO][4728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:46:03.092322 containerd[1592]: 2025-10-13 05:46:03.036 [INFO][4728] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" HandleID="k8s-pod-network.7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Workload="localhost-k8s-csi--node--driver--llxgm-eth0" Oct 13 05:46:03.094094 containerd[1592]: 2025-10-13 05:46:03.041 [INFO][4649] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Namespace="calico-system" Pod="csi-node-driver-llxgm" WorkloadEndpoint="localhost-k8s-csi--node--driver--llxgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--llxgm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3e721b8-9665-4f46-9b9b-bf2346733bde", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-llxgm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7b01b2bc7f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:03.094094 containerd[1592]: 2025-10-13 05:46:03.041 [INFO][4649] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Namespace="calico-system" Pod="csi-node-driver-llxgm" WorkloadEndpoint="localhost-k8s-csi--node--driver--llxgm-eth0" Oct 13 05:46:03.094094 containerd[1592]: 2025-10-13 05:46:03.041 [INFO][4649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b01b2bc7f6 ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Namespace="calico-system" Pod="csi-node-driver-llxgm" WorkloadEndpoint="localhost-k8s-csi--node--driver--llxgm-eth0" Oct 13 05:46:03.094094 containerd[1592]: 2025-10-13 05:46:03.055 [INFO][4649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Namespace="calico-system" Pod="csi-node-driver-llxgm" WorkloadEndpoint="localhost-k8s-csi--node--driver--llxgm-eth0" Oct 13 05:46:03.094094 containerd[1592]: 2025-10-13 05:46:03.056 [INFO][4649] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" Namespace="calico-system" Pod="csi-node-driver-llxgm" WorkloadEndpoint="localhost-k8s-csi--node--driver--llxgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--llxgm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b3e721b8-9665-4f46-9b9b-bf2346733bde", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06", Pod:"csi-node-driver-llxgm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7b01b2bc7f6", MAC:"76:29:57:ec:28:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:03.094094 containerd[1592]: 2025-10-13 05:46:03.085 [INFO][4649] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" 
Namespace="calico-system" Pod="csi-node-driver-llxgm" WorkloadEndpoint="localhost-k8s-csi--node--driver--llxgm-eth0" Oct 13 05:46:03.127976 containerd[1592]: time="2025-10-13T05:46:03.125540732Z" level=info msg="StartContainer for \"0016729a10a60e1bd1c020ae2060a36feb27b878fc5f882fe8a56c3a5ca71fd2\" returns successfully" Oct 13 05:46:03.134739 containerd[1592]: time="2025-10-13T05:46:03.134648575Z" level=info msg="connecting to shim 7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06" address="unix:///run/containerd/s/14b71cc75a6403726b48c8e0e5e3ed559be54624690b44297bbd46fa739daa8b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:46:03.180427 systemd[1]: Started cri-containerd-7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06.scope - libcontainer container 7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06. Oct 13 05:46:03.192367 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:52104.service - OpenSSH per-connection server daemon (10.0.0.1:52104). Oct 13 05:46:03.219248 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:46:03.281689 containerd[1592]: time="2025-10-13T05:46:03.281637077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-llxgm,Uid:b3e721b8-9665-4f46-9b9b-bf2346733bde,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06\"" Oct 13 05:46:03.298955 sshd[4857]: Accepted publickey for core from 10.0.0.1 port 52104 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:03.300990 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:03.306803 systemd-logind[1573]: New session 8 of user core. Oct 13 05:46:03.312923 systemd[1]: Started session-8.scope - Session 8 of User core. 
Oct 13 05:46:03.483356 sshd[4880]: Connection closed by 10.0.0.1 port 52104 Oct 13 05:46:03.485471 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:03.491175 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:52104.service: Deactivated successfully. Oct 13 05:46:03.497086 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:46:03.501035 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:46:03.502650 systemd-logind[1573]: Removed session 8. Oct 13 05:46:03.705731 containerd[1592]: time="2025-10-13T05:46:03.705459612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hpt9p,Uid:d09be59c-7039-4e4e-8090-419990d9dff5,Namespace:calico-system,Attempt:0,}" Oct 13 05:46:03.722007 systemd-networkd[1491]: cali8cc66e08086: Gained IPv6LL Oct 13 05:46:03.885871 systemd-networkd[1491]: calida9b34239df: Link UP Oct 13 05:46:03.888309 systemd-networkd[1491]: calida9b34239df: Gained carrier Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.757 [INFO][4898] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--854f97d977--hpt9p-eth0 goldmane-854f97d977- calico-system d09be59c-7039-4e4e-8090-419990d9dff5 834 0 2025-10-13 05:45:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-854f97d977-hpt9p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calida9b34239df [] [] }} ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Namespace="calico-system" Pod="goldmane-854f97d977-hpt9p" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hpt9p-" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.757 [INFO][4898] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Namespace="calico-system" Pod="goldmane-854f97d977-hpt9p" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.795 [INFO][4912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" HandleID="k8s-pod-network.3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Workload="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.795 [INFO][4912] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" HandleID="k8s-pod-network.3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Workload="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122eb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-854f97d977-hpt9p", "timestamp":"2025-10-13 05:46:03.795172624 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.795 [INFO][4912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.795 [INFO][4912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.795 [INFO][4912] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.813 [INFO][4912] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" host="localhost" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.823 [INFO][4912] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.832 [INFO][4912] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.835 [INFO][4912] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.840 [INFO][4912] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.840 [INFO][4912] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" host="localhost" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.844 [INFO][4912] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.856 [INFO][4912] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" host="localhost" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.868 [INFO][4912] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" host="localhost" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.868 [INFO][4912] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" host="localhost" Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.868 [INFO][4912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:46:03.925596 containerd[1592]: 2025-10-13 05:46:03.868 [INFO][4912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" HandleID="k8s-pod-network.3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Workload="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" Oct 13 05:46:03.928314 containerd[1592]: 2025-10-13 05:46:03.880 [INFO][4898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Namespace="calico-system" Pod="goldmane-854f97d977-hpt9p" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--hpt9p-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"d09be59c-7039-4e4e-8090-419990d9dff5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-854f97d977-hpt9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida9b34239df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:03.928314 containerd[1592]: 2025-10-13 05:46:03.881 [INFO][4898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Namespace="calico-system" Pod="goldmane-854f97d977-hpt9p" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" Oct 13 05:46:03.928314 containerd[1592]: 2025-10-13 05:46:03.881 [INFO][4898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida9b34239df ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Namespace="calico-system" Pod="goldmane-854f97d977-hpt9p" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" Oct 13 05:46:03.928314 containerd[1592]: 2025-10-13 05:46:03.891 [INFO][4898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Namespace="calico-system" Pod="goldmane-854f97d977-hpt9p" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" Oct 13 05:46:03.928314 containerd[1592]: 2025-10-13 05:46:03.893 [INFO][4898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Namespace="calico-system" Pod="goldmane-854f97d977-hpt9p" 
WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--hpt9p-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"d09be59c-7039-4e4e-8090-419990d9dff5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 45, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f", Pod:"goldmane-854f97d977-hpt9p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida9b34239df", MAC:"06:75:a6:11:c9:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:46:03.928314 containerd[1592]: 2025-10-13 05:46:03.918 [INFO][4898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" Namespace="calico-system" Pod="goldmane-854f97d977-hpt9p" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hpt9p-eth0" Oct 13 05:46:03.972674 kubelet[2730]: I1013 05:46:03.972620 2730 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Oct 13 05:46:03.992674 containerd[1592]: time="2025-10-13T05:46:03.992601011Z" level=info msg="connecting to shim 3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f" address="unix:///run/containerd/s/becec88eafb8ecae437504ece1c47f56f38f7585caa8883be4f1efc932d667dc" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:46:03.996561 kubelet[2730]: I1013 05:46:03.996403 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5588688947-xbkn2" podStartSLOduration=25.439615408 podStartE2EDuration="28.996380557s" podCreationTimestamp="2025-10-13 05:45:35 +0000 UTC" firstStartedPulling="2025-10-13 05:45:59.221723906 +0000 UTC m=+41.649332267" lastFinishedPulling="2025-10-13 05:46:02.778489065 +0000 UTC m=+45.206097416" observedRunningTime="2025-10-13 05:46:03.976880351 +0000 UTC m=+46.404488712" watchObservedRunningTime="2025-10-13 05:46:03.996380557 +0000 UTC m=+46.423988918" Oct 13 05:46:04.040537 systemd-networkd[1491]: cali40506c83e23: Gained IPv6LL Oct 13 05:46:04.043076 systemd[1]: Started cri-containerd-3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f.scope - libcontainer container 3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f. 
Oct 13 05:46:04.075023 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:46:04.115403 containerd[1592]: time="2025-10-13T05:46:04.115317885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hpt9p,Uid:d09be59c-7039-4e4e-8090-419990d9dff5,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f\"" Oct 13 05:46:04.680011 systemd-networkd[1491]: cali7b01b2bc7f6: Gained IPv6LL Oct 13 05:46:04.977032 kubelet[2730]: I1013 05:46:04.976875 2730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:46:05.256619 systemd-networkd[1491]: calida9b34239df: Gained IPv6LL Oct 13 05:46:08.134803 containerd[1592]: time="2025-10-13T05:46:08.134601660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:08.190800 containerd[1592]: time="2025-10-13T05:46:08.190728610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:46:08.227030 containerd[1592]: time="2025-10-13T05:46:08.226963103Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:08.230255 containerd[1592]: time="2025-10-13T05:46:08.230185562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:08.230645 containerd[1592]: time="2025-10-13T05:46:08.230611110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id 
\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.211495721s" Oct 13 05:46:08.230705 containerd[1592]: time="2025-10-13T05:46:08.230645504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:46:08.232225 containerd[1592]: time="2025-10-13T05:46:08.232192409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:46:08.542355 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:52114.service - OpenSSH per-connection server daemon (10.0.0.1:52114). Oct 13 05:46:08.631032 containerd[1592]: time="2025-10-13T05:46:08.630962064Z" level=info msg="CreateContainer within sandbox \"7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:46:08.652773 containerd[1592]: time="2025-10-13T05:46:08.649984209Z" level=info msg="Container 906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:46:08.675739 containerd[1592]: time="2025-10-13T05:46:08.675671655Z" level=info msg="CreateContainer within sandbox \"7f00f10c1e5c7104631faceae486d0e04e34611e23eb517c69b9d2e7ad4dfaf5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0\"" Oct 13 05:46:08.680190 containerd[1592]: time="2025-10-13T05:46:08.680144250Z" level=info msg="StartContainer for \"906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0\"" Oct 13 05:46:08.688384 containerd[1592]: time="2025-10-13T05:46:08.688324814Z" level=info msg="connecting to shim 
906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0" address="unix:///run/containerd/s/d82d3a01b248d5e4c8b70f88859a7f849aa0c8adf4e244c13b4b08d852934d5d" protocol=ttrpc version=3 Oct 13 05:46:08.720234 sshd[5003]: Accepted publickey for core from 10.0.0.1 port 52114 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:08.726191 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:08.741689 systemd-logind[1573]: New session 9 of user core. Oct 13 05:46:08.748129 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:46:08.771947 systemd[1]: Started cri-containerd-906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0.scope - libcontainer container 906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0. Oct 13 05:46:08.838096 containerd[1592]: time="2025-10-13T05:46:08.837973517Z" level=info msg="StartContainer for \"906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0\" returns successfully" Oct 13 05:46:08.917342 sshd[5018]: Connection closed by 10.0.0.1 port 52114 Oct 13 05:46:08.919317 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:08.924906 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:52114.service: Deactivated successfully. Oct 13 05:46:08.927936 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:46:08.929730 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:46:08.931098 systemd-logind[1573]: Removed session 9. 
Oct 13 05:46:09.015035 kubelet[2730]: I1013 05:46:09.014780 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wc44b" podStartSLOduration=44.014743405 podStartE2EDuration="44.014743405s" podCreationTimestamp="2025-10-13 05:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:46:03.997171262 +0000 UTC m=+46.424779623" watchObservedRunningTime="2025-10-13 05:46:09.014743405 +0000 UTC m=+51.442351766" Oct 13 05:46:09.015607 kubelet[2730]: I1013 05:46:09.015524 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5df6956d4d-wxn6z" podStartSLOduration=25.80102961 podStartE2EDuration="31.015516667s" podCreationTimestamp="2025-10-13 05:45:38 +0000 UTC" firstStartedPulling="2025-10-13 05:46:03.017567733 +0000 UTC m=+45.445176094" lastFinishedPulling="2025-10-13 05:46:08.23205479 +0000 UTC m=+50.659663151" observedRunningTime="2025-10-13 05:46:09.015373748 +0000 UTC m=+51.442982099" watchObservedRunningTime="2025-10-13 05:46:09.015516667 +0000 UTC m=+51.443125028" Oct 13 05:46:09.080531 containerd[1592]: time="2025-10-13T05:46:09.080474614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0\" id:\"886a2020cb6f6267b277aa2be1afab7a503defc6628f5261c0c52a327ac4cf12\" pid:5079 exited_at:{seconds:1760334369 nanos:80046131}" Oct 13 05:46:09.766436 containerd[1592]: time="2025-10-13T05:46:09.766368400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:09.767766 containerd[1592]: time="2025-10-13T05:46:09.767496206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:46:09.769556 containerd[1592]: 
time="2025-10-13T05:46:09.769111759Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:09.771765 containerd[1592]: time="2025-10-13T05:46:09.771686371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:09.772388 containerd[1592]: time="2025-10-13T05:46:09.772357210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.540127791s" Oct 13 05:46:09.772429 containerd[1592]: time="2025-10-13T05:46:09.772394730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:46:09.773979 containerd[1592]: time="2025-10-13T05:46:09.773694560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:46:09.779325 containerd[1592]: time="2025-10-13T05:46:09.779267019Z" level=info msg="CreateContainer within sandbox \"7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:46:09.799799 containerd[1592]: time="2025-10-13T05:46:09.797047741Z" level=info msg="Container 04f467fd014730d6d98a75fc3995cbce9dc979dddfff908f5892c15f84241d4f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:46:09.805212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445951271.mount: Deactivated successfully. 
Oct 13 05:46:09.834017 containerd[1592]: time="2025-10-13T05:46:09.833957858Z" level=info msg="CreateContainer within sandbox \"7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"04f467fd014730d6d98a75fc3995cbce9dc979dddfff908f5892c15f84241d4f\"" Oct 13 05:46:09.835092 containerd[1592]: time="2025-10-13T05:46:09.835025431Z" level=info msg="StartContainer for \"04f467fd014730d6d98a75fc3995cbce9dc979dddfff908f5892c15f84241d4f\"" Oct 13 05:46:09.837263 containerd[1592]: time="2025-10-13T05:46:09.837232674Z" level=info msg="connecting to shim 04f467fd014730d6d98a75fc3995cbce9dc979dddfff908f5892c15f84241d4f" address="unix:///run/containerd/s/14b71cc75a6403726b48c8e0e5e3ed559be54624690b44297bbd46fa739daa8b" protocol=ttrpc version=3 Oct 13 05:46:09.875051 systemd[1]: Started cri-containerd-04f467fd014730d6d98a75fc3995cbce9dc979dddfff908f5892c15f84241d4f.scope - libcontainer container 04f467fd014730d6d98a75fc3995cbce9dc979dddfff908f5892c15f84241d4f. Oct 13 05:46:09.940784 containerd[1592]: time="2025-10-13T05:46:09.939895510Z" level=info msg="StartContainer for \"04f467fd014730d6d98a75fc3995cbce9dc979dddfff908f5892c15f84241d4f\" returns successfully" Oct 13 05:46:11.838025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57971456.mount: Deactivated successfully. 
Oct 13 05:46:12.862200 containerd[1592]: time="2025-10-13T05:46:12.862140241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:12.862839 containerd[1592]: time="2025-10-13T05:46:12.862809898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:46:12.864086 containerd[1592]: time="2025-10-13T05:46:12.864044835Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:12.866250 containerd[1592]: time="2025-10-13T05:46:12.866204568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:12.867153 containerd[1592]: time="2025-10-13T05:46:12.867114396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.093391002s" Oct 13 05:46:12.867153 containerd[1592]: time="2025-10-13T05:46:12.867145504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:46:12.868298 containerd[1592]: time="2025-10-13T05:46:12.868232052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:46:12.872982 containerd[1592]: time="2025-10-13T05:46:12.872941761Z" level=info msg="CreateContainer within sandbox 
\"3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:46:12.881973 containerd[1592]: time="2025-10-13T05:46:12.881925209Z" level=info msg="Container 5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:46:12.892778 containerd[1592]: time="2025-10-13T05:46:12.892710919Z" level=info msg="CreateContainer within sandbox \"3a32341318c7e85a2cd6626c32a73f6e346b7f2f36ba3606c7c0dc9d5998856f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3\"" Oct 13 05:46:12.893976 containerd[1592]: time="2025-10-13T05:46:12.893929405Z" level=info msg="StartContainer for \"5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3\"" Oct 13 05:46:12.895569 containerd[1592]: time="2025-10-13T05:46:12.895544757Z" level=info msg="connecting to shim 5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3" address="unix:///run/containerd/s/becec88eafb8ecae437504ece1c47f56f38f7585caa8883be4f1efc932d667dc" protocol=ttrpc version=3 Oct 13 05:46:12.918916 systemd[1]: Started cri-containerd-5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3.scope - libcontainer container 5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3. 
Oct 13 05:46:12.970505 containerd[1592]: time="2025-10-13T05:46:12.970445342Z" level=info msg="StartContainer for \"5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3\" returns successfully" Oct 13 05:46:13.032793 kubelet[2730]: I1013 05:46:13.032609 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-hpt9p" podStartSLOduration=27.281813232 podStartE2EDuration="36.032593002s" podCreationTimestamp="2025-10-13 05:45:37 +0000 UTC" firstStartedPulling="2025-10-13 05:46:04.117372854 +0000 UTC m=+46.544981215" lastFinishedPulling="2025-10-13 05:46:12.868152624 +0000 UTC m=+55.295760985" observedRunningTime="2025-10-13 05:46:13.032104786 +0000 UTC m=+55.459713147" watchObservedRunningTime="2025-10-13 05:46:13.032593002 +0000 UTC m=+55.460201353" Oct 13 05:46:13.166852 containerd[1592]: time="2025-10-13T05:46:13.165859912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3\" id:\"0963f0780ab0ca6d3950cf452578f60ad5735110cc40b9cbc0976af5ec24ab92\" pid:5184 exit_status:1 exited_at:{seconds:1760334373 nanos:165194523}" Oct 13 05:46:13.932140 systemd[1]: Started sshd@9-10.0.0.69:22-10.0.0.1:45656.service - OpenSSH per-connection server daemon (10.0.0.1:45656). Oct 13 05:46:14.007201 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 45656 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:14.044872 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:14.051858 systemd-logind[1573]: New session 10 of user core. Oct 13 05:46:14.058897 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 13 05:46:14.115339 containerd[1592]: time="2025-10-13T05:46:14.115289205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3\" id:\"b753b8b69c9156a5580cc7f175c759de8659c189db4092d77139ed49a3e4e8fe\" pid:5215 exit_status:1 exited_at:{seconds:1760334374 nanos:114926665}" Oct 13 05:46:14.283852 sshd[5221]: Connection closed by 10.0.0.1 port 45656 Oct 13 05:46:14.284903 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:14.288294 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:46:14.290529 systemd[1]: sshd@9-10.0.0.69:22-10.0.0.1:45656.service: Deactivated successfully. Oct 13 05:46:14.293528 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:46:14.297339 systemd-logind[1573]: Removed session 10. Oct 13 05:46:15.049832 kubelet[2730]: I1013 05:46:15.049766 2730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:46:16.542107 containerd[1592]: time="2025-10-13T05:46:16.542018833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:16.543459 containerd[1592]: time="2025-10-13T05:46:16.543408281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:46:16.545000 containerd[1592]: time="2025-10-13T05:46:16.544936007Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:46:16.547193 containerd[1592]: time="2025-10-13T05:46:16.547140233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Oct 13 05:46:16.547798 containerd[1592]: time="2025-10-13T05:46:16.547734227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.679472669s" Oct 13 05:46:16.547869 containerd[1592]: time="2025-10-13T05:46:16.547799379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:46:16.591333 containerd[1592]: time="2025-10-13T05:46:16.591284296Z" level=info msg="CreateContainer within sandbox \"7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:46:16.611777 containerd[1592]: time="2025-10-13T05:46:16.608726211Z" level=info msg="Container 439470e1dd7c0262759607906df9145125a5e7cdcc0c75ad2a5449d0cc446b3e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:46:16.660277 containerd[1592]: time="2025-10-13T05:46:16.660230212Z" level=info msg="CreateContainer within sandbox \"7d69b9bb0836e17d07a9479be2530a6e496d2c7d8c734d4d7b7fbf3ee3fe5f06\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"439470e1dd7c0262759607906df9145125a5e7cdcc0c75ad2a5449d0cc446b3e\"" Oct 13 05:46:16.662183 containerd[1592]: time="2025-10-13T05:46:16.662155805Z" level=info msg="StartContainer for \"439470e1dd7c0262759607906df9145125a5e7cdcc0c75ad2a5449d0cc446b3e\"" Oct 13 05:46:16.664797 containerd[1592]: time="2025-10-13T05:46:16.664739462Z" level=info msg="connecting to shim 439470e1dd7c0262759607906df9145125a5e7cdcc0c75ad2a5449d0cc446b3e" 
address="unix:///run/containerd/s/14b71cc75a6403726b48c8e0e5e3ed559be54624690b44297bbd46fa739daa8b" protocol=ttrpc version=3 Oct 13 05:46:16.705924 systemd[1]: Started cri-containerd-439470e1dd7c0262759607906df9145125a5e7cdcc0c75ad2a5449d0cc446b3e.scope - libcontainer container 439470e1dd7c0262759607906df9145125a5e7cdcc0c75ad2a5449d0cc446b3e. Oct 13 05:46:16.770185 containerd[1592]: time="2025-10-13T05:46:16.770130627Z" level=info msg="StartContainer for \"439470e1dd7c0262759607906df9145125a5e7cdcc0c75ad2a5449d0cc446b3e\" returns successfully" Oct 13 05:46:16.783111 kubelet[2730]: I1013 05:46:16.783069 2730 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:46:16.783111 kubelet[2730]: I1013 05:46:16.783109 2730 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:46:17.055658 kubelet[2730]: I1013 05:46:17.055584 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-llxgm" podStartSLOduration=25.791409001 podStartE2EDuration="39.055550448s" podCreationTimestamp="2025-10-13 05:45:38 +0000 UTC" firstStartedPulling="2025-10-13 05:46:03.284328641 +0000 UTC m=+45.711937002" lastFinishedPulling="2025-10-13 05:46:16.548470087 +0000 UTC m=+58.976078449" observedRunningTime="2025-10-13 05:46:17.055174924 +0000 UTC m=+59.482783305" watchObservedRunningTime="2025-10-13 05:46:17.055550448 +0000 UTC m=+59.483158839" Oct 13 05:46:19.296423 systemd[1]: Started sshd@10-10.0.0.69:22-10.0.0.1:45660.service - OpenSSH per-connection server daemon (10.0.0.1:45660). 
Oct 13 05:46:19.374428 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 45660 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:19.376161 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:19.380298 systemd-logind[1573]: New session 11 of user core. Oct 13 05:46:19.387947 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:46:19.643179 sshd[5293]: Connection closed by 10.0.0.1 port 45660 Oct 13 05:46:19.644473 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:19.654968 systemd[1]: sshd@10-10.0.0.69:22-10.0.0.1:45660.service: Deactivated successfully. Oct 13 05:46:19.657187 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:46:19.658206 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:46:19.661373 systemd[1]: Started sshd@11-10.0.0.69:22-10.0.0.1:45664.service - OpenSSH per-connection server daemon (10.0.0.1:45664). Oct 13 05:46:19.662468 systemd-logind[1573]: Removed session 11. Oct 13 05:46:19.711147 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 45664 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:19.712726 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:19.717372 systemd-logind[1573]: New session 12 of user core. Oct 13 05:46:19.723969 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:46:19.985863 sshd[5311]: Connection closed by 10.0.0.1 port 45664 Oct 13 05:46:19.984422 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:19.997547 systemd[1]: sshd@11-10.0.0.69:22-10.0.0.1:45664.service: Deactivated successfully. Oct 13 05:46:20.001944 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:46:20.005823 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. 
Oct 13 05:46:20.013652 systemd[1]: Started sshd@12-10.0.0.69:22-10.0.0.1:45678.service - OpenSSH per-connection server daemon (10.0.0.1:45678). Oct 13 05:46:20.015807 systemd-logind[1573]: Removed session 12. Oct 13 05:46:20.062835 sshd[5322]: Accepted publickey for core from 10.0.0.1 port 45678 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:20.064660 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:20.069421 systemd-logind[1573]: New session 13 of user core. Oct 13 05:46:20.082021 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:46:20.351559 sshd[5325]: Connection closed by 10.0.0.1 port 45678 Oct 13 05:46:20.351984 sshd-session[5322]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:20.356304 systemd[1]: sshd@12-10.0.0.69:22-10.0.0.1:45678.service: Deactivated successfully. Oct 13 05:46:20.358256 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:46:20.359134 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:46:20.360356 systemd-logind[1573]: Removed session 13. Oct 13 05:46:25.369407 systemd[1]: Started sshd@13-10.0.0.69:22-10.0.0.1:50250.service - OpenSSH per-connection server daemon (10.0.0.1:50250). Oct 13 05:46:25.432359 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 50250 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:25.434455 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:25.439285 systemd-logind[1573]: New session 14 of user core. Oct 13 05:46:25.443151 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 13 05:46:25.599482 sshd[5349]: Connection closed by 10.0.0.1 port 50250 Oct 13 05:46:25.600780 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:25.606452 systemd[1]: sshd@13-10.0.0.69:22-10.0.0.1:50250.service: Deactivated successfully. Oct 13 05:46:25.608784 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:46:25.609627 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:46:25.611289 systemd-logind[1573]: Removed session 14. Oct 13 05:46:29.014202 containerd[1592]: time="2025-10-13T05:46:29.014144565Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89\" id:\"775d19eee534863c0eddcdc01c191002b527947887e89bd5d2f0c179aeeaf8a3\" pid:5377 exit_status:1 exited_at:{seconds:1760334389 nanos:13783888}" Oct 13 05:46:29.110670 kubelet[2730]: I1013 05:46:29.110629 2730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:46:30.612097 systemd[1]: Started sshd@14-10.0.0.69:22-10.0.0.1:50264.service - OpenSSH per-connection server daemon (10.0.0.1:50264). Oct 13 05:46:30.695040 sshd[5394]: Accepted publickey for core from 10.0.0.1 port 50264 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:30.696734 sshd-session[5394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:30.701527 systemd-logind[1573]: New session 15 of user core. Oct 13 05:46:30.707908 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:46:30.942294 sshd[5397]: Connection closed by 10.0.0.1 port 50264 Oct 13 05:46:30.943108 sshd-session[5394]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:30.947735 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:46:30.949613 systemd[1]: sshd@14-10.0.0.69:22-10.0.0.1:50264.service: Deactivated successfully. 
Oct 13 05:46:30.952881 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:46:30.955054 systemd-logind[1573]: Removed session 15. Oct 13 05:46:35.794107 containerd[1592]: time="2025-10-13T05:46:35.794043570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0\" id:\"01249e093c308180cdea17630ac5a45e71d9e4e21160d2e5fd3dfd3506634d1d\" pid:5422 exited_at:{seconds:1760334395 nanos:793657457}" Oct 13 05:46:35.954024 systemd[1]: Started sshd@15-10.0.0.69:22-10.0.0.1:40184.service - OpenSSH per-connection server daemon (10.0.0.1:40184). Oct 13 05:46:36.007056 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 40184 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:36.008802 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:36.013276 systemd-logind[1573]: New session 16 of user core. Oct 13 05:46:36.019974 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:46:36.207932 sshd[5436]: Connection closed by 10.0.0.1 port 40184 Oct 13 05:46:36.208298 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:36.213580 systemd[1]: sshd@15-10.0.0.69:22-10.0.0.1:40184.service: Deactivated successfully. Oct 13 05:46:36.216275 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:46:36.217998 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:46:36.220387 systemd-logind[1573]: Removed session 16. 
Oct 13 05:46:36.612256 containerd[1592]: time="2025-10-13T05:46:36.612197458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3\" id:\"d04486d15706312c66f2d9d522c5b8e6e30b89dd4e4bce95d3a98b83df539c3f\" pid:5461 exited_at:{seconds:1760334396 nanos:611780386}" Oct 13 05:46:39.049309 containerd[1592]: time="2025-10-13T05:46:39.049252922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"906fd0160c381536abeceaa26282d34d373db391bfd491d0316457a358a569c0\" id:\"8e20ae11d535305f9a0ccfb0c7d73fbafa00521e70f90448178dde79ae86e164\" pid:5493 exited_at:{seconds:1760334399 nanos:48906698}" Oct 13 05:46:41.224257 systemd[1]: Started sshd@16-10.0.0.69:22-10.0.0.1:40190.service - OpenSSH per-connection server daemon (10.0.0.1:40190). Oct 13 05:46:41.274835 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 40190 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:41.276086 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:41.280371 systemd-logind[1573]: New session 17 of user core. Oct 13 05:46:41.294906 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 05:46:41.563980 sshd[5507]: Connection closed by 10.0.0.1 port 40190 Oct 13 05:46:41.564570 sshd-session[5504]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:41.569018 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:46:41.570198 systemd[1]: sshd@16-10.0.0.69:22-10.0.0.1:40190.service: Deactivated successfully. Oct 13 05:46:41.572994 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:46:41.576207 systemd-logind[1573]: Removed session 17. 
Oct 13 05:46:44.119972 containerd[1592]: time="2025-10-13T05:46:44.119914907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a9cfba4d97d38340864df4a39af68b212f4a9e0e7d5b4f2f070d8c043ffe1d3\" id:\"19faa474dfb00ff904c7f5a810ebde6ff670f426f4ed68816dbd460fe14f7581\" pid:5533 exited_at:{seconds:1760334404 nanos:119477611}" Oct 13 05:46:46.583972 systemd[1]: Started sshd@17-10.0.0.69:22-10.0.0.1:37268.service - OpenSSH per-connection server daemon (10.0.0.1:37268). Oct 13 05:46:46.659050 sshd[5547]: Accepted publickey for core from 10.0.0.1 port 37268 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:46.660972 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:46.666300 systemd-logind[1573]: New session 18 of user core. Oct 13 05:46:46.681655 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:46:46.807276 sshd[5550]: Connection closed by 10.0.0.1 port 37268 Oct 13 05:46:46.808997 sshd-session[5547]: pam_unix(sshd:session): session closed for user core Oct 13 05:46:46.820100 systemd[1]: sshd@17-10.0.0.69:22-10.0.0.1:37268.service: Deactivated successfully. Oct 13 05:46:46.824156 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:46:46.826003 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:46:46.828342 systemd-logind[1573]: Removed session 18. Oct 13 05:46:46.830034 systemd[1]: Started sshd@18-10.0.0.69:22-10.0.0.1:37278.service - OpenSSH per-connection server daemon (10.0.0.1:37278). Oct 13 05:46:46.878420 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 37278 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:46:46.879810 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:46:46.885823 systemd-logind[1573]: New session 19 of user core. 
Oct 13 05:46:46.894967 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 05:46:47.328506 sshd[5566]: Connection closed by 10.0.0.1 port 37278
Oct 13 05:46:47.331150 sshd-session[5563]: pam_unix(sshd:session): session closed for user core
Oct 13 05:46:47.338944 systemd[1]: sshd@18-10.0.0.69:22-10.0.0.1:37278.service: Deactivated successfully.
Oct 13 05:46:47.341042 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 05:46:47.342000 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit.
Oct 13 05:46:47.346310 systemd[1]: Started sshd@19-10.0.0.69:22-10.0.0.1:37286.service - OpenSSH per-connection server daemon (10.0.0.1:37286).
Oct 13 05:46:47.347952 systemd-logind[1573]: Removed session 19.
Oct 13 05:46:47.417195 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 37286 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ
Oct 13 05:46:47.419149 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:46:47.424833 systemd-logind[1573]: New session 20 of user core.
Oct 13 05:46:47.432074 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:46:48.143110 sshd[5581]: Connection closed by 10.0.0.1 port 37286
Oct 13 05:46:48.143526 sshd-session[5578]: pam_unix(sshd:session): session closed for user core
Oct 13 05:46:48.153428 systemd[1]: sshd@19-10.0.0.69:22-10.0.0.1:37286.service: Deactivated successfully.
Oct 13 05:46:48.156023 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:46:48.157953 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:46:48.167032 systemd[1]: Started sshd@20-10.0.0.69:22-10.0.0.1:37300.service - OpenSSH per-connection server daemon (10.0.0.1:37300).
Oct 13 05:46:48.167786 systemd-logind[1573]: Removed session 20.
Oct 13 05:46:48.228308 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 37300 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ
Oct 13 05:46:48.230181 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:46:48.237323 systemd-logind[1573]: New session 21 of user core.
Oct 13 05:46:48.245963 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:46:48.966764 sshd[5605]: Connection closed by 10.0.0.1 port 37300
Oct 13 05:46:48.967196 sshd-session[5602]: pam_unix(sshd:session): session closed for user core
Oct 13 05:46:48.983000 systemd[1]: sshd@20-10.0.0.69:22-10.0.0.1:37300.service: Deactivated successfully.
Oct 13 05:46:48.985133 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:46:48.985992 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:46:48.989213 systemd[1]: Started sshd@21-10.0.0.69:22-10.0.0.1:37310.service - OpenSSH per-connection server daemon (10.0.0.1:37310).
Oct 13 05:46:48.991226 systemd-logind[1573]: Removed session 21.
Oct 13 05:46:49.037369 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 37310 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ
Oct 13 05:46:49.039456 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:46:49.045100 systemd-logind[1573]: New session 22 of user core.
Oct 13 05:46:49.052149 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:46:49.206424 sshd[5620]: Connection closed by 10.0.0.1 port 37310
Oct 13 05:46:49.207075 sshd-session[5617]: pam_unix(sshd:session): session closed for user core
Oct 13 05:46:49.212290 systemd[1]: sshd@21-10.0.0.69:22-10.0.0.1:37310.service: Deactivated successfully.
Oct 13 05:46:49.214410 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:46:49.215144 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:46:49.216271 systemd-logind[1573]: Removed session 22.
Oct 13 05:46:54.221253 systemd[1]: Started sshd@22-10.0.0.69:22-10.0.0.1:49826.service - OpenSSH per-connection server daemon (10.0.0.1:49826).
Oct 13 05:46:54.310673 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 49826 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ
Oct 13 05:46:54.313022 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:46:54.318303 systemd-logind[1573]: New session 23 of user core.
Oct 13 05:46:54.327911 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 05:46:54.500966 sshd[5638]: Connection closed by 10.0.0.1 port 49826
Oct 13 05:46:54.501799 sshd-session[5635]: pam_unix(sshd:session): session closed for user core
Oct 13 05:46:54.514375 systemd[1]: sshd@22-10.0.0.69:22-10.0.0.1:49826.service: Deactivated successfully.
Oct 13 05:46:54.514683 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit.
Oct 13 05:46:54.518900 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 05:46:54.523734 systemd-logind[1573]: Removed session 23.
Oct 13 05:46:58.852523 containerd[1592]: time="2025-10-13T05:46:58.852471635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd4b1e55e6f1ae7a2142d9a5939236ebe2ba3b0f822e06d1bd91a61e2c0a3e89\" id:\"b67b12255367b271c5258eac8aa93593d2a9a96d8d06d467fc4a3b7d3bc010a9\" pid:5666 exited_at:{seconds:1760334418 nanos:852093815}"
Oct 13 05:46:59.515590 systemd[1]: Started sshd@23-10.0.0.69:22-10.0.0.1:49838.service - OpenSSH per-connection server daemon (10.0.0.1:49838).
Oct 13 05:46:59.571690 sshd[5680]: Accepted publickey for core from 10.0.0.1 port 49838 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ
Oct 13 05:46:59.573392 sshd-session[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:46:59.577718 systemd-logind[1573]: New session 24 of user core.
Oct 13 05:46:59.590007 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 13 05:46:59.758055 sshd[5683]: Connection closed by 10.0.0.1 port 49838
Oct 13 05:46:59.758449 sshd-session[5680]: pam_unix(sshd:session): session closed for user core
Oct 13 05:46:59.762688 systemd[1]: sshd@23-10.0.0.69:22-10.0.0.1:49838.service: Deactivated successfully.
Oct 13 05:46:59.765394 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 05:46:59.770258 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit.
Oct 13 05:46:59.772330 systemd-logind[1573]: Removed session 24.
Oct 13 05:47:02.700492 kubelet[2730]: E1013 05:47:02.700441 2730 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 13 05:47:04.776719 systemd[1]: Started sshd@24-10.0.0.69:22-10.0.0.1:40964.service - OpenSSH per-connection server daemon (10.0.0.1:40964).
Oct 13 05:47:04.846511 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 40964 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ
Oct 13 05:47:04.848730 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:47:04.853511 systemd-logind[1573]: New session 25 of user core.
Oct 13 05:47:04.856865 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 13 05:47:04.980060 sshd[5705]: Connection closed by 10.0.0.1 port 40964
Oct 13 05:47:04.980455 sshd-session[5702]: pam_unix(sshd:session): session closed for user core
Oct 13 05:47:04.985795 systemd[1]: sshd@24-10.0.0.69:22-10.0.0.1:40964.service: Deactivated successfully.
Oct 13 05:47:04.988809 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 05:47:04.989742 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit.
Oct 13 05:47:04.991829 systemd-logind[1573]: Removed session 25.