Oct 13 05:55:24.840103 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Oct 12 22:37:12 -00 2025
Oct 13 05:55:24.840138 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:55:24.840150 kernel: BIOS-provided physical RAM map:
Oct 13 05:55:24.840156 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 13 05:55:24.840163 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 13 05:55:24.840169 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 13 05:55:24.840176 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 13 05:55:24.840183 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 13 05:55:24.840190 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 13 05:55:24.840205 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 13 05:55:24.840213 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Oct 13 05:55:24.840219 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 13 05:55:24.840225 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 13 05:55:24.840232 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 13 05:55:24.840239 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 13 05:55:24.840249 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 13 05:55:24.840255 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 13 05:55:24.840262 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 13 05:55:24.840269 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 13 05:55:24.840275 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 13 05:55:24.840282 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 13 05:55:24.840288 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 13 05:55:24.840295 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 13 05:55:24.840301 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 05:55:24.840308 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 13 05:55:24.840317 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 13 05:55:24.840323 kernel: NX (Execute Disable) protection: active
Oct 13 05:55:24.840330 kernel: APIC: Static calls initialized
Oct 13 05:55:24.840337 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Oct 13 05:55:24.840344 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Oct 13 05:55:24.840350 kernel: extended physical RAM map:
Oct 13 05:55:24.840363 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 13 05:55:24.840370 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 13 05:55:24.840377 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 13 05:55:24.840384 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 13 05:55:24.840391 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 13 05:55:24.840399 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 13 05:55:24.840406 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 13 05:55:24.840413 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Oct 13 05:55:24.840420 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Oct 13 05:55:24.840429 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Oct 13 05:55:24.840436 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Oct 13 05:55:24.840446 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Oct 13 05:55:24.840453 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 13 05:55:24.840460 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 13 05:55:24.840467 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 13 05:55:24.840473 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 13 05:55:24.840480 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 13 05:55:24.840487 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 13 05:55:24.840494 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 13 05:55:24.840501 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 13 05:55:24.840508 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 13 05:55:24.840517 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 13 05:55:24.840524 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 13 05:55:24.840531 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 13 05:55:24.840538 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 05:55:24.840545 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 13 05:55:24.840556 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 13 05:55:24.840563 kernel: efi: EFI v2.7 by EDK II
Oct 13 05:55:24.840581 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Oct 13 05:55:24.840594 kernel: random: crng init done
Oct 13 05:55:24.840605 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Oct 13 05:55:24.840612 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Oct 13 05:55:24.840625 kernel: secureboot: Secure boot disabled
Oct 13 05:55:24.840632 kernel: SMBIOS 2.8 present.
Oct 13 05:55:24.840639 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 13 05:55:24.840646 kernel: DMI: Memory slots populated: 1/1
Oct 13 05:55:24.840653 kernel: Hypervisor detected: KVM
Oct 13 05:55:24.840660 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 13 05:55:24.840667 kernel: kvm-clock: using sched offset of 3974981115 cycles
Oct 13 05:55:24.840674 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 05:55:24.840682 kernel: tsc: Detected 2794.750 MHz processor
Oct 13 05:55:24.840689 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:55:24.840696 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:55:24.840706 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 13 05:55:24.840713 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 13 05:55:24.840720 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:55:24.840728 kernel: Using GB pages for direct mapping
Oct 13 05:55:24.840735 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:55:24.840742 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 13 05:55:24.840749 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 13 05:55:24.840756 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:24.840764 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:24.840773 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 13 05:55:24.840780 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:24.840787 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:24.840794 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:24.840802 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:24.840809 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 13 05:55:24.840816 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 13 05:55:24.840823 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Oct 13 05:55:24.840830 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 13 05:55:24.840839 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 13 05:55:24.840846 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 13 05:55:24.840854 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 13 05:55:24.840861 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 13 05:55:24.840867 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 13 05:55:24.840874 kernel: No NUMA configuration found
Oct 13 05:55:24.840882 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Oct 13 05:55:24.840889 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Oct 13 05:55:24.840896 kernel: Zone ranges:
Oct 13 05:55:24.840905 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:55:24.840912 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Oct 13 05:55:24.840919 kernel: Normal empty
Oct 13 05:55:24.840926 kernel: Device empty
Oct 13 05:55:24.840933 kernel: Movable zone start for each node
Oct 13 05:55:24.840940 kernel: Early memory node ranges
Oct 13 05:55:24.840947 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 13 05:55:24.840954 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 13 05:55:24.840961 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 13 05:55:24.840968 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Oct 13 05:55:24.840978 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Oct 13 05:55:24.840985 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Oct 13 05:55:24.840992 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Oct 13 05:55:24.840999 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Oct 13 05:55:24.841006 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Oct 13 05:55:24.841013 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:55:24.841020 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 13 05:55:24.841038 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 13 05:55:24.841045 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:55:24.841052 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Oct 13 05:55:24.841060 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Oct 13 05:55:24.841067 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 13 05:55:24.841076 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 13 05:55:24.841084 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Oct 13 05:55:24.841091 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 13 05:55:24.841099 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 13 05:55:24.841106 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:55:24.841116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 13 05:55:24.841134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 13 05:55:24.841142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 05:55:24.841149 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 13 05:55:24.841157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 13 05:55:24.841164 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:55:24.841171 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 13 05:55:24.841179 kernel: TSC deadline timer available
Oct 13 05:55:24.841186 kernel: CPU topo: Max. logical packages: 1
Oct 13 05:55:24.841196 kernel: CPU topo: Max. logical dies: 1
Oct 13 05:55:24.841203 kernel: CPU topo: Max. dies per package: 1
Oct 13 05:55:24.841210 kernel: CPU topo: Max. threads per core: 1
Oct 13 05:55:24.841218 kernel: CPU topo: Num. cores per package: 4
Oct 13 05:55:24.841225 kernel: CPU topo: Num. threads per package: 4
Oct 13 05:55:24.841232 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 13 05:55:24.841239 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 13 05:55:24.841247 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 13 05:55:24.841254 kernel: kvm-guest: setup PV sched yield
Oct 13 05:55:24.841264 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 13 05:55:24.841271 kernel: Booting paravirtualized kernel on KVM
Oct 13 05:55:24.841279 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:55:24.841286 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 13 05:55:24.841294 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 13 05:55:24.841301 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 13 05:55:24.841308 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 13 05:55:24.841316 kernel: kvm-guest: PV spinlocks enabled
Oct 13 05:55:24.841323 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 13 05:55:24.841334 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:55:24.841342 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:55:24.841350 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 05:55:24.841363 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 05:55:24.841371 kernel: Fallback order for Node 0: 0
Oct 13 05:55:24.841378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Oct 13 05:55:24.841385 kernel: Policy zone: DMA32
Oct 13 05:55:24.841393 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:55:24.841403 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 13 05:55:24.841411 kernel: ftrace: allocating 40139 entries in 157 pages
Oct 13 05:55:24.841418 kernel: ftrace: allocated 157 pages with 5 groups
Oct 13 05:55:24.841425 kernel: Dynamic Preempt: voluntary
Oct 13 05:55:24.841433 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:55:24.841441 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:55:24.841448 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 13 05:55:24.841456 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:55:24.841463 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:55:24.841470 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:55:24.841481 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:55:24.841488 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 13 05:55:24.841496 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:55:24.841503 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:55:24.841511 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:55:24.841518 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 13 05:55:24.841526 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 05:55:24.841533 kernel: Console: colour dummy device 80x25
Oct 13 05:55:24.841540 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:55:24.841550 kernel: ACPI: Core revision 20240827
Oct 13 05:55:24.841558 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 13 05:55:24.841566 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:55:24.841573 kernel: x2apic enabled
Oct 13 05:55:24.841580 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:55:24.841588 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 13 05:55:24.841595 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 13 05:55:24.841602 kernel: kvm-guest: setup PV IPIs
Oct 13 05:55:24.841610 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 13 05:55:24.841620 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 13 05:55:24.841628 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Oct 13 05:55:24.841635 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 13 05:55:24.841642 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 13 05:55:24.841650 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 13 05:55:24.841657 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:55:24.841664 kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 05:55:24.841672 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 13 05:55:24.841679 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 13 05:55:24.841689 kernel: active return thunk: retbleed_return_thunk
Oct 13 05:55:24.841697 kernel: RETBleed: Mitigation: untrained return thunk
Oct 13 05:55:24.841704 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 05:55:24.841712 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 05:55:24.841719 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 13 05:55:24.841727 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 13 05:55:24.841735 kernel: active return thunk: srso_return_thunk
Oct 13 05:55:24.841742 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 13 05:55:24.841752 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:55:24.841760 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:55:24.841768 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:55:24.841775 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 13 05:55:24.841782 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 13 05:55:24.841790 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:55:24.841797 kernel: pid_max: default: 32768 minimum: 301
Oct 13 05:55:24.841804 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:55:24.841812 kernel: landlock: Up and running.
Oct 13 05:55:24.841821 kernel: SELinux: Initializing.
Oct 13 05:55:24.841828 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:55:24.841836 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:55:24.841843 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 13 05:55:24.841851 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 13 05:55:24.841858 kernel: ... version: 0
Oct 13 05:55:24.841865 kernel: ... bit width: 48
Oct 13 05:55:24.841873 kernel: ... generic registers: 6
Oct 13 05:55:24.841880 kernel: ... value mask: 0000ffffffffffff
Oct 13 05:55:24.841889 kernel: ... max period: 00007fffffffffff
Oct 13 05:55:24.841897 kernel: ... fixed-purpose events: 0
Oct 13 05:55:24.841904 kernel: ... event mask: 000000000000003f
Oct 13 05:55:24.841911 kernel: signal: max sigframe size: 1776
Oct 13 05:55:24.841918 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:55:24.841926 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 05:55:24.841933 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 05:55:24.841941 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:55:24.841948 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:55:24.841957 kernel: .... node #0, CPUs: #1 #2 #3
Oct 13 05:55:24.841965 kernel: smp: Brought up 1 node, 4 CPUs
Oct 13 05:55:24.841972 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Oct 13 05:55:24.841980 kernel: Memory: 2422664K/2565800K available (14336K kernel code, 2443K rwdata, 10000K rodata, 54096K init, 2852K bss, 137200K reserved, 0K cma-reserved)
Oct 13 05:55:24.841987 kernel: devtmpfs: initialized
Oct 13 05:55:24.841995 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:55:24.842002 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 13 05:55:24.842010 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 13 05:55:24.842026 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Oct 13 05:55:24.842045 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 13 05:55:24.842053 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Oct 13 05:55:24.842060 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 13 05:55:24.842068 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:55:24.842075 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 13 05:55:24.842083 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:55:24.842090 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:55:24.842098 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:55:24.842105 kernel: audit: type=2000 audit(1760334922.468:1): state=initialized audit_enabled=0 res=1
Oct 13 05:55:24.842114 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:55:24.842122 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:55:24.842141 kernel: cpuidle: using governor menu
Oct 13 05:55:24.842148 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:55:24.842156 kernel: dca service started, version 1.12.1
Oct 13 05:55:24.842163 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 13 05:55:24.842171 kernel: PCI: Using configuration type 1 for base access
Oct 13 05:55:24.842178 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:55:24.842186 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 05:55:24.842195 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 05:55:24.842203 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:55:24.842210 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:55:24.842217 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:55:24.842225 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:55:24.842232 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:55:24.842239 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:55:24.842247 kernel: ACPI: Interpreter enabled
Oct 13 05:55:24.842254 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 13 05:55:24.842263 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:55:24.842271 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:55:24.842278 kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 05:55:24.842286 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 13 05:55:24.842293 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 05:55:24.842484 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 05:55:24.842616 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 13 05:55:24.842738 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 13 05:55:24.842748 kernel: PCI host bridge to bus 0000:00
Oct 13 05:55:24.842867 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 13 05:55:24.842975 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 13 05:55:24.843081 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 05:55:24.843265 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 13 05:55:24.843390 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 13 05:55:24.843501 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 13 05:55:24.843607 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 05:55:24.843738 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 13 05:55:24.843866 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 13 05:55:24.843982 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 13 05:55:24.844098 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 13 05:55:24.844238 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 13 05:55:24.844366 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 05:55:24.844494 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 05:55:24.844612 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 13 05:55:24.844728 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 13 05:55:24.844845 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 13 05:55:24.844969 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 13 05:55:24.845090 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 13 05:55:24.845225 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 13 05:55:24.845345 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 13 05:55:24.845483 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 13 05:55:24.845601 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 13 05:55:24.845717 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 13 05:55:24.845832 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 13 05:55:24.845951 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 13 05:55:24.846088 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 13 05:55:24.846226 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 13 05:55:24.846365 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 13 05:55:24.846483 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 13 05:55:24.846599 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 13 05:55:24.846722 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 13 05:55:24.846842 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 13 05:55:24.846853 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 13 05:55:24.846860 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 13 05:55:24.846868 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 13 05:55:24.846875 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 13 05:55:24.846883 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 13 05:55:24.846891 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 13 05:55:24.846908 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 13 05:55:24.846926 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 13 05:55:24.846934 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 13 05:55:24.846942 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 13 05:55:24.846957 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 13 05:55:24.846965 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 13 05:55:24.846972 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 13 05:55:24.846980 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 13 05:55:24.846987 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 13 05:55:24.846995 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 13 05:55:24.847010 kernel: iommu: Default domain type: Translated
Oct 13 05:55:24.847017 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 13 05:55:24.847025 kernel: efivars: Registered efivars operations
Oct 13 05:55:24.847032 kernel: PCI: Using ACPI for IRQ routing
Oct 13 05:55:24.847040 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 13 05:55:24.847047 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 13 05:55:24.847055 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Oct 13 05:55:24.847062 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Oct 13 05:55:24.847069 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Oct 13 05:55:24.847079 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Oct 13 05:55:24.847086 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Oct 13 05:55:24.847094 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Oct 13 05:55:24.847101 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Oct 13 05:55:24.847235 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 13 05:55:24.847361 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 13 05:55:24.847479 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 13 05:55:24.847489 kernel: vgaarb: loaded
Oct 13 05:55:24.847500 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 13 05:55:24.847508 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 13 05:55:24.847515 kernel: clocksource: Switched to clocksource kvm-clock
Oct 13 05:55:24.847523 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 05:55:24.847531 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 05:55:24.847539 kernel: pnp: PnP ACPI init
Oct 13 05:55:24.847681 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 13 05:55:24.847694 kernel: pnp: PnP ACPI: found 6 devices
Oct 13 05:55:24.847704 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 05:55:24.847712 kernel: NET: Registered PF_INET protocol family
Oct 13 05:55:24.847720 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 05:55:24.847730 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 05:55:24.847737 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 05:55:24.847745 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 05:55:24.847753 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 05:55:24.847761 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 05:55:24.847769 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:55:24.847779 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:55:24.847787 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 05:55:24.847794 kernel: NET: Registered PF_XDP protocol family
Oct 13 05:55:24.847916 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 13 05:55:24.848034 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 13 05:55:24.848159 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 13 05:55:24.848267 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 13 05:55:24.848381 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 13 05:55:24.848492 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 13 05:55:24.848598 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 13 05:55:24.848703 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 13 05:55:24.848713 kernel: PCI: CLS 0 bytes, default 64
Oct 13 05:55:24.848721 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 13 05:55:24.848729 kernel: Initialise system trusted keyrings
Oct 13 05:55:24.848740 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 05:55:24.848748 kernel: Key type asymmetric registered
Oct 13 05:55:24.848756 kernel: Asymmetric key parser 'x509' registered
Oct 13 05:55:24.848764 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 13 05:55:24.848772 kernel: io scheduler mq-deadline registered
Oct 13 05:55:24.848780 kernel: io scheduler kyber registered
Oct 13 05:55:24.848787 kernel: io scheduler bfq registered
Oct 13 05:55:24.848795 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:55:24.848806 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 13 05:55:24.848814 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 13 05:55:24.848822 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 13 05:55:24.848829 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:55:24.848838 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:55:24.848846 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 13 05:55:24.848853 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 05:55:24.848861 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 05:55:24.848980 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 13 05:55:24.848994 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 13 05:55:24.849103 kernel: rtc_cmos 00:04: registered as rtc0
Oct 13 05:55:24.849239 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T05:55:24 UTC (1760334924)
Oct 13 05:55:24.849349 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 13 05:55:24.849368 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 13 05:55:24.849375 kernel: efifb: probing for efifb
Oct 13 05:55:24.849383 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 13 05:55:24.849391 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 13 05:55:24.849402 kernel: efifb: scrolling: redraw
Oct 13 05:55:24.849410 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 13 05:55:24.849418 kernel: Console: switching to colour frame buffer device 160x50
Oct 13 05:55:24.849426 kernel: fb0: EFI VGA frame buffer device
Oct 13 05:55:24.849434 kernel: pstore: Using crash dump compression: deflate
Oct 13 05:55:24.849442 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 13 05:55:24.849449 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:55:24.849457 kernel: Segment Routing with IPv6
Oct 13 05:55:24.849465 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:55:24.849475 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:55:24.849483 kernel: Key type dns_resolver registered
Oct 13 05:55:24.849491 kernel: IPI shorthand broadcast: enabled
Oct 13 05:55:24.849498 kernel: sched_clock: Marking stable (2819001998, 289858755)->(3164019726, -55158973)
Oct 13 05:55:24.849506 kernel: registered taskstats version 1
Oct 13 05:55:24.849514 kernel: Loading compiled-in X.509 certificates
Oct 13 05:55:24.849522 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: d8dbf4abead15098249886d373d42a3af4f50ccd'
Oct 13 05:55:24.849529 kernel: Demotion targets for Node 0: null
Oct 13 05:55:24.849537 kernel: Key type .fscrypt registered Oct 13
05:55:24.849547 kernel: Key type fscrypt-provisioning registered Oct 13 05:55:24.849555 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 13 05:55:24.849562 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:55:24.849570 kernel: ima: No architecture policies found Oct 13 05:55:24.849580 kernel: clk: Disabling unused clocks Oct 13 05:55:24.849588 kernel: Warning: unable to open an initial console. Oct 13 05:55:24.849596 kernel: Freeing unused kernel image (initmem) memory: 54096K Oct 13 05:55:24.849604 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:55:24.849612 kernel: Freeing unused kernel image (rodata/data gap) memory: 240K Oct 13 05:55:24.849622 kernel: Run /init as init process Oct 13 05:55:24.849629 kernel: with arguments: Oct 13 05:55:24.849637 kernel: /init Oct 13 05:55:24.849645 kernel: with environment: Oct 13 05:55:24.849652 kernel: HOME=/ Oct 13 05:55:24.849660 kernel: TERM=linux Oct 13 05:55:24.849668 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:55:24.849676 systemd[1]: Successfully made /usr/ read-only. Oct 13 05:55:24.849688 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:55:24.849698 systemd[1]: Detected virtualization kvm. Oct 13 05:55:24.849707 systemd[1]: Detected architecture x86-64. Oct 13 05:55:24.849715 systemd[1]: Running in initrd. Oct 13 05:55:24.849723 systemd[1]: No hostname configured, using default hostname. Oct 13 05:55:24.849731 systemd[1]: Hostname set to . Oct 13 05:55:24.849740 systemd[1]: Initializing machine ID from VM UUID. Oct 13 05:55:24.849748 systemd[1]: Queued start job for default target initrd.target. 
Oct 13 05:55:24.849758 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:55:24.849767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:55:24.849776 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:55:24.849784 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:55:24.849793 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:55:24.849802 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:55:24.849811 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 13 05:55:24.849822 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 13 05:55:24.849830 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:55:24.849839 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:55:24.849847 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:55:24.849855 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:55:24.849863 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:55:24.849872 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:55:24.849880 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:55:24.849890 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:55:24.849898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:55:24.849907 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Oct 13 05:55:24.849915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:55:24.849923 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:55:24.849932 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:55:24.849940 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:55:24.849949 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:55:24.849957 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:55:24.849967 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:55:24.849976 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:55:24.849984 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:55:24.849992 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:55:24.850001 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:55:24.850009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:55:24.850017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:55:24.850028 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:55:24.850056 systemd-journald[220]: Collecting audit messages is disabled. Oct 13 05:55:24.850077 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:55:24.850086 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:55:24.850095 systemd-journald[220]: Journal started Oct 13 05:55:24.850113 systemd-journald[220]: Runtime Journal (/run/log/journal/12b27228d8364a1a848a75cf17cd78ee) is 6M, max 48.4M, 42.4M free. 
Oct 13 05:55:24.842063 systemd-modules-load[222]: Inserted module 'overlay' Oct 13 05:55:24.852926 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:55:24.854963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:55:24.860250 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:55:24.872145 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:55:24.874820 systemd-modules-load[222]: Inserted module 'br_netfilter' Oct 13 05:55:24.876423 kernel: Bridge firewalling registered Oct 13 05:55:24.876923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:55:24.879459 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:55:24.880158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:55:24.882879 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:55:24.884069 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:55:24.899855 systemd-tmpfiles[239]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:55:24.901645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:55:24.904007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:55:24.905596 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:55:24.908233 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:55:24.910064 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 13 05:55:24.915670 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:55:24.935139 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039 Oct 13 05:55:24.953931 systemd-resolved[260]: Positive Trust Anchors: Oct 13 05:55:24.953948 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:55:24.953977 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:55:24.956385 systemd-resolved[260]: Defaulting to hostname 'linux'. Oct 13 05:55:24.957696 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:55:24.969221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:55:25.058158 kernel: SCSI subsystem initialized Oct 13 05:55:25.067151 kernel: Loading iSCSI transport class v2.0-870. 
Oct 13 05:55:25.078157 kernel: iscsi: registered transport (tcp) Oct 13 05:55:25.098550 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:55:25.098585 kernel: QLogic iSCSI HBA Driver Oct 13 05:55:25.118989 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:55:25.145429 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:55:25.148444 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:55:25.202112 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:55:25.203645 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:55:25.266152 kernel: raid6: avx2x4 gen() 30268 MB/s Oct 13 05:55:25.283143 kernel: raid6: avx2x2 gen() 30900 MB/s Oct 13 05:55:25.300863 kernel: raid6: avx2x1 gen() 25851 MB/s Oct 13 05:55:25.300877 kernel: raid6: using algorithm avx2x2 gen() 30900 MB/s Oct 13 05:55:25.318875 kernel: raid6: .... xor() 19885 MB/s, rmw enabled Oct 13 05:55:25.318898 kernel: raid6: using avx2x2 recovery algorithm Oct 13 05:55:25.339160 kernel: xor: automatically using best checksumming function avx Oct 13 05:55:25.499156 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:55:25.507631 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:55:25.512575 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:55:25.548726 systemd-udevd[470]: Using default interface naming scheme 'v255'. Oct 13 05:55:25.555726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:55:25.560461 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:55:25.585816 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Oct 13 05:55:25.615520 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 13 05:55:25.617008 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:55:25.694037 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:55:25.699462 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:55:25.735158 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 13 05:55:25.738512 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:55:25.742378 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 13 05:55:25.748148 kernel: AES CTR mode by8 optimization enabled Oct 13 05:55:25.764193 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 13 05:55:25.769982 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 05:55:25.770024 kernel: GPT:9289727 != 19775487 Oct 13 05:55:25.770035 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 05:55:25.770050 kernel: GPT:9289727 != 19775487 Oct 13 05:55:25.770059 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 05:55:25.770069 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:55:25.771310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:55:25.771499 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:55:25.780256 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:55:25.783023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:55:25.795152 kernel: libata version 3.00 loaded. Oct 13 05:55:25.796960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:55:25.801586 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 13 05:55:25.806725 kernel: ahci 0000:00:1f.2: version 3.0 Oct 13 05:55:25.808781 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 13 05:55:25.808797 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 13 05:55:25.808940 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 13 05:55:25.810602 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 13 05:55:25.813151 kernel: scsi host0: ahci Oct 13 05:55:25.813357 kernel: scsi host1: ahci Oct 13 05:55:25.817560 kernel: scsi host2: ahci Oct 13 05:55:25.818007 kernel: scsi host3: ahci Oct 13 05:55:25.821879 kernel: scsi host4: ahci Oct 13 05:55:25.822104 kernel: scsi host5: ahci Oct 13 05:55:25.822274 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Oct 13 05:55:25.822285 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Oct 13 05:55:25.824327 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Oct 13 05:55:25.824360 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Oct 13 05:55:25.824810 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 05:55:25.833028 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Oct 13 05:55:25.833050 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Oct 13 05:55:25.835256 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 13 05:55:25.843455 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 05:55:25.861572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:55:25.870296 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Oct 13 05:55:25.870395 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 13 05:55:25.879595 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:55:25.880386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:55:25.904224 disk-uuid[630]: Primary Header is updated. Oct 13 05:55:25.904224 disk-uuid[630]: Secondary Entries is updated. Oct 13 05:55:25.904224 disk-uuid[630]: Secondary Header is updated. Oct 13 05:55:25.908154 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:55:25.912154 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:55:25.917573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:55:26.141513 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 13 05:55:26.141568 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 13 05:55:26.142150 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 13 05:55:26.143165 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 13 05:55:26.146160 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 13 05:55:26.146177 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 13 05:55:26.147163 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:55:26.148631 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 13 05:55:26.148644 kernel: ata3.00: applying bridge limits Oct 13 05:55:26.150539 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:55:26.150552 kernel: ata3.00: configured for UDMA/100 Oct 13 05:55:26.152168 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 13 05:55:26.211534 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 13 05:55:26.211824 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:55:26.232145 kernel: sr 2:0:0:0: Attached scsi 
CD-ROM sr0 Oct 13 05:55:26.649783 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:55:26.652210 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:55:26.655225 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:55:26.657278 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:55:26.661776 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:55:26.689472 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:55:26.913150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:55:26.913295 disk-uuid[633]: The operation has completed successfully. Oct 13 05:55:26.943073 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:55:26.943209 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:55:26.978667 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 13 05:55:26.991439 sh[664]: Success Oct 13 05:55:27.010161 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:55:27.010192 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:55:27.012166 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:55:27.021150 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Oct 13 05:55:27.049415 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:55:27.054651 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 13 05:55:27.068866 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Oct 13 05:55:27.075331 kernel: BTRFS: device fsid c8746500-26f5-4ec1-9da8-aef51ec7db92 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (676) Oct 13 05:55:27.078557 kernel: BTRFS info (device dm-0): first mount of filesystem c8746500-26f5-4ec1-9da8-aef51ec7db92 Oct 13 05:55:27.078579 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:55:27.084335 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:55:27.084358 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:55:27.085635 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 13 05:55:27.088509 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:55:27.092098 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:55:27.095552 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:55:27.098784 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:55:27.138685 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Oct 13 05:55:27.138724 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:55:27.138735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:55:27.143528 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:55:27.143551 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:55:27.149141 kernel: BTRFS info (device vda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:55:27.149730 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:55:27.152638 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 13 05:55:27.234803 ignition[759]: Ignition 2.22.0 Oct 13 05:55:27.234815 ignition[759]: Stage: fetch-offline Oct 13 05:55:27.234853 ignition[759]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:55:27.234862 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:55:27.234946 ignition[759]: parsed url from cmdline: "" Oct 13 05:55:27.240628 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:55:27.234961 ignition[759]: no config URL provided Oct 13 05:55:27.245996 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:55:27.234966 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:55:27.234976 ignition[759]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:55:27.234997 ignition[759]: op(1): [started] loading QEMU firmware config module Oct 13 05:55:27.235003 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 05:55:27.243357 ignition[759]: op(1): [finished] loading QEMU firmware config module Oct 13 05:55:27.291953 systemd-networkd[853]: lo: Link UP Oct 13 05:55:27.291962 systemd-networkd[853]: lo: Gained carrier Oct 13 05:55:27.293445 systemd-networkd[853]: Enumeration completed Oct 13 05:55:27.293781 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:55:27.293785 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:55:27.293986 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:55:27.294343 systemd-networkd[853]: eth0: Link UP Oct 13 05:55:27.294528 systemd-networkd[853]: eth0: Gained carrier Oct 13 05:55:27.294536 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 13 05:55:27.297030 systemd[1]: Reached target network.target - Network. Oct 13 05:55:27.324159 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:55:27.345685 ignition[759]: parsing config with SHA512: 784e207a0034f9bc2fc2731f19929c368d66f88b95dfe0d0370cadc6459e2d078d44e04717015df559c477ae190621e6932ca027c14c036fb9669a6cb80e45b7 Oct 13 05:55:27.351363 unknown[759]: fetched base config from "system" Oct 13 05:55:27.351373 unknown[759]: fetched user config from "qemu" Oct 13 05:55:27.351687 ignition[759]: fetch-offline: fetch-offline passed Oct 13 05:55:27.355047 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:55:27.351737 ignition[759]: Ignition finished successfully Oct 13 05:55:27.355695 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 05:55:27.356571 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 05:55:27.396392 ignition[858]: Ignition 2.22.0 Oct 13 05:55:27.396403 ignition[858]: Stage: kargs Oct 13 05:55:27.396525 ignition[858]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:55:27.396534 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:55:27.397200 ignition[858]: kargs: kargs passed Oct 13 05:55:27.397240 ignition[858]: Ignition finished successfully Oct 13 05:55:27.404407 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:55:27.408853 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 13 05:55:27.435567 ignition[866]: Ignition 2.22.0 Oct 13 05:55:27.435578 ignition[866]: Stage: disks Oct 13 05:55:27.435694 ignition[866]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:55:27.435703 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:55:27.436369 ignition[866]: disks: disks passed Oct 13 05:55:27.440378 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:55:27.436409 ignition[866]: Ignition finished successfully Oct 13 05:55:27.443675 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:55:27.446888 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:55:27.447508 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:55:27.451723 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:55:27.452566 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:55:27.462717 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:55:27.499756 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks Oct 13 05:55:27.506836 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:55:27.508217 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:55:27.618154 kernel: EXT4-fs (vda9): mounted filesystem 8b520359-9763-45f3-b7f7-db1e9fbc640d r/w with ordered data mode. Quota mode: none. Oct 13 05:55:27.618593 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:55:27.619201 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:55:27.624100 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:55:27.626694 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:55:27.628924 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Oct 13 05:55:27.628962 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:55:27.647094 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884) Oct 13 05:55:27.647115 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:55:27.647139 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:55:27.647150 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:55:27.647160 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:55:27.628983 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:55:27.635234 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:55:27.647864 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:55:27.651524 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:55:27.681261 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:55:27.685751 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:55:27.689960 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:55:27.693087 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:55:27.776143 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:55:27.780522 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:55:27.782807 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:55:27.798212 kernel: BTRFS info (device vda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:55:27.819263 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 13 05:55:27.836177 ignition[998]: INFO : Ignition 2.22.0
Oct 13 05:55:27.836177 ignition[998]: INFO : Stage: mount
Oct 13 05:55:27.842135 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:55:27.842135 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:55:27.842135 ignition[998]: INFO : mount: mount passed
Oct 13 05:55:27.842135 ignition[998]: INFO : Ignition finished successfully
Oct 13 05:55:27.839862 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 05:55:27.843164 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 05:55:28.076898 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 05:55:28.078339 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:55:28.105433 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011)
Oct 13 05:55:28.105484 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:55:28.105495 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:55:28.110473 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:55:28.110493 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:55:28.112153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 05:55:28.151109 ignition[1028]: INFO : Ignition 2.22.0
Oct 13 05:55:28.151109 ignition[1028]: INFO : Stage: files
Oct 13 05:55:28.153782 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:55:28.153782 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:55:28.153782 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 05:55:28.153782 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 05:55:28.153782 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 05:55:28.163674 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 05:55:28.163674 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 05:55:28.163674 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 05:55:28.163674 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 13 05:55:28.163674 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Oct 13 05:55:28.155045 unknown[1028]: wrote ssh authorized keys file for user: core
Oct 13 05:55:28.199375 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 05:55:28.281779 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 13 05:55:28.285291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 05:55:28.285291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 05:55:28.285291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:55:28.285291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:55:28.285291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:55:28.285291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:55:28.285291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:55:28.285291 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:55:28.309302 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:55:28.309302 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:55:28.309302 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 13 05:55:28.309302 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 13 05:55:28.309302 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 13 05:55:28.309302 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Oct 13 05:55:28.551575 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 05:55:28.883615 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 13 05:55:28.883615 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 13 05:55:28.889370 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:55:28.893506 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:55:28.893506 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 13 05:55:28.893506 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 13 05:55:28.901306 ignition[1028]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 05:55:28.901306 ignition[1028]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 13 05:55:28.901306 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 13 05:55:28.901306 ignition[1028]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 13 05:55:28.913710 ignition[1028]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 05:55:28.920455 ignition[1028]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 13 05:55:28.923061 ignition[1028]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 13 05:55:28.923061 ignition[1028]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 13 05:55:28.927556 ignition[1028]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 13 05:55:28.927556 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:55:28.927556 ignition[1028]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 13 05:55:28.927556 ignition[1028]: INFO : files: files passed
Oct 13 05:55:28.927556 ignition[1028]: INFO : Ignition finished successfully
Oct 13 05:55:28.931049 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 13 05:55:28.937906 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 13 05:55:28.939046 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 13 05:55:28.958261 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 13 05:55:28.958397 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 13 05:55:28.963230 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 13 05:55:28.967330 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:55:28.967330 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:55:28.972295 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 13 05:55:28.975621 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:55:28.977759 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 13 05:55:28.983085 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 13 05:55:29.027323 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 13 05:55:29.027457 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 13 05:55:29.029207 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 13 05:55:29.032769 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 13 05:55:29.033622 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 13 05:55:29.038847 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 13 05:55:29.072797 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:55:29.076647 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 13 05:55:29.105395 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:55:29.107367 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:55:29.109191 systemd[1]: Stopped target timers.target - Timer Units.
Oct 13 05:55:29.112715 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 13 05:55:29.112822 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 13 05:55:29.120527 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 13 05:55:29.120673 systemd[1]: Stopped target basic.target - Basic System.
Oct 13 05:55:29.123725 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 13 05:55:29.126428 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:55:29.126990 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 13 05:55:29.133071 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:55:29.136466 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 13 05:55:29.136991 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:55:29.142782 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 13 05:55:29.146603 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 13 05:55:29.149675 systemd[1]: Stopped target swap.target - Swaps.
Oct 13 05:55:29.152660 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 13 05:55:29.152765 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:55:29.158097 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:55:29.159808 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:55:29.163026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 13 05:55:29.168016 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:55:29.168161 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 13 05:55:29.168278 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:55:29.175076 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 13 05:55:29.175206 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:55:29.176848 systemd[1]: Stopped target paths.target - Path Units.
Oct 13 05:55:29.177619 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 13 05:55:29.186204 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:55:29.186371 systemd[1]: Stopped target slices.target - Slice Units.
Oct 13 05:55:29.190522 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 13 05:55:29.193520 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 13 05:55:29.193607 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:55:29.196029 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 13 05:55:29.196113 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:55:29.198783 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 13 05:55:29.198894 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 13 05:55:29.199588 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 13 05:55:29.199685 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 13 05:55:29.207800 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 13 05:55:29.212936 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 13 05:55:29.217025 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 13 05:55:29.217194 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:55:29.218721 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 13 05:55:29.218817 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:55:29.230775 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 13 05:55:29.230890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 13 05:55:29.240931 ignition[1084]: INFO : Ignition 2.22.0
Oct 13 05:55:29.240931 ignition[1084]: INFO : Stage: umount
Oct 13 05:55:29.243536 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:55:29.243536 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:55:29.243536 ignition[1084]: INFO : umount: umount passed
Oct 13 05:55:29.243536 ignition[1084]: INFO : Ignition finished successfully
Oct 13 05:55:29.244976 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 13 05:55:29.245159 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 13 05:55:29.247050 systemd[1]: Stopped target network.target - Network.
Oct 13 05:55:29.248894 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 13 05:55:29.248962 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 13 05:55:29.252921 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 13 05:55:29.252967 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 13 05:55:29.255885 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 13 05:55:29.255938 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 13 05:55:29.257548 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 13 05:55:29.257593 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 13 05:55:29.258561 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 13 05:55:29.264994 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 13 05:55:29.267458 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 13 05:55:29.280052 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 13 05:55:29.280209 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 13 05:55:29.285046 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Oct 13 05:55:29.285370 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 13 05:55:29.285417 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:55:29.291156 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 13 05:55:29.291444 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 13 05:55:29.291564 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 13 05:55:29.296414 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Oct 13 05:55:29.296871 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 13 05:55:29.297643 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 13 05:55:29.297692 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:55:29.302331 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 13 05:55:29.304515 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 13 05:55:29.304571 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:55:29.320488 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 13 05:55:29.320549 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:55:29.325252 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 13 05:55:29.325307 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:55:29.328556 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:55:29.335320 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 13 05:55:29.351926 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 13 05:55:29.352106 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:55:29.371149 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 13 05:55:29.371196 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:55:29.371775 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 13 05:55:29.371807 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:55:29.379808 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 13 05:55:29.379854 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:55:29.386281 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 13 05:55:29.386359 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:55:29.392276 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 13 05:55:29.392353 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:55:29.400232 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 13 05:55:29.400308 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 13 05:55:29.400362 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:55:29.407300 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 13 05:55:29.407354 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:55:29.412923 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 13 05:55:29.412970 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:55:29.420786 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 13 05:55:29.420841 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:55:29.424245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:55:29.424294 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:55:29.430041 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 13 05:55:29.430162 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 13 05:55:29.431357 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 13 05:55:29.431478 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 13 05:55:29.436955 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 13 05:55:29.437069 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 13 05:55:29.442352 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 13 05:55:29.443471 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 13 05:55:29.443549 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 13 05:55:29.449078 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 13 05:55:29.467017 systemd[1]: Switching root.
Oct 13 05:55:29.498937 systemd-journald[220]: Journal stopped
Oct 13 05:55:30.644696 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Oct 13 05:55:30.644769 kernel: SELinux: policy capability network_peer_controls=1
Oct 13 05:55:30.644786 kernel: SELinux: policy capability open_perms=1
Oct 13 05:55:30.644800 kernel: SELinux: policy capability extended_socket_class=1
Oct 13 05:55:30.644812 kernel: SELinux: policy capability always_check_network=0
Oct 13 05:55:30.644824 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 13 05:55:30.644835 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 13 05:55:30.644846 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 13 05:55:30.644863 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 13 05:55:30.644880 kernel: SELinux: policy capability userspace_initial_context=0
Oct 13 05:55:30.644891 kernel: audit: type=1403 audit(1760334929.844:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 13 05:55:30.644904 systemd[1]: Successfully loaded SELinux policy in 65.185ms.
Oct 13 05:55:30.644931 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.108ms.
Oct 13 05:55:30.644944 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:55:30.644956 systemd[1]: Detected virtualization kvm.
Oct 13 05:55:30.644968 systemd[1]: Detected architecture x86-64.
Oct 13 05:55:30.644979 systemd[1]: Detected first boot.
Oct 13 05:55:30.644991 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 05:55:30.645003 zram_generator::config[1132]: No configuration found.
Oct 13 05:55:30.645016 kernel: Guest personality initialized and is inactive
Oct 13 05:55:30.645029 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 13 05:55:30.645040 kernel: Initialized host personality
Oct 13 05:55:30.645051 kernel: NET: Registered PF_VSOCK protocol family
Oct 13 05:55:30.645063 systemd[1]: Populated /etc with preset unit settings.
Oct 13 05:55:30.645081 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Oct 13 05:55:30.645097 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 13 05:55:30.645111 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 13 05:55:30.645139 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:55:30.645152 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 13 05:55:30.645166 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 13 05:55:30.645178 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 13 05:55:30.645189 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 13 05:55:30.645210 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 13 05:55:30.645222 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 13 05:55:30.645234 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 13 05:55:30.645246 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 13 05:55:30.645258 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:55:30.645270 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:55:30.645284 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 13 05:55:30.645296 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 13 05:55:30.645308 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 13 05:55:30.645320 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:55:30.645331 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 13 05:55:30.645343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:55:30.645355 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:55:30.645368 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 13 05:55:30.645381 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 05:55:30.645393 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:55:30.645405 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 13 05:55:30.645417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:55:30.645429 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:55:30.645440 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:55:30.645452 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:55:30.645464 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 13 05:55:30.645476 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 13 05:55:30.645490 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 13 05:55:30.645501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:55:30.645513 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:55:30.645525 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:55:30.645536 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 13 05:55:30.645548 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 13 05:55:30.645560 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 13 05:55:30.645572 systemd[1]: Mounting media.mount - External Media Directory...
Oct 13 05:55:30.645584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:55:30.645598 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 13 05:55:30.645609 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 13 05:55:30.645621 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 13 05:55:30.645634 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 13 05:55:30.645646 systemd[1]: Reached target machines.target - Containers.
Oct 13 05:55:30.645658 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 13 05:55:30.645670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 13 05:55:30.645683 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:55:30.645697 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 13 05:55:30.645709 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 13 05:55:30.645721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 13 05:55:30.645733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 13 05:55:30.645744 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 13 05:55:30.645756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 13 05:55:30.645768 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 13 05:55:30.645780 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 13 05:55:30.645794 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 13 05:55:30.645806 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 13 05:55:30.645818 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 13 05:55:30.645830 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 13 05:55:30.645842 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:55:30.645854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:55:30.645865 kernel: loop: module loaded
Oct 13 05:55:30.645876 kernel: fuse: init (API version 7.41)
Oct 13 05:55:30.645888 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:55:30.645902 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 13 05:55:30.645913 kernel: ACPI: bus type drm_connector registered
Oct 13 05:55:30.645925 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 13 05:55:30.645937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:55:30.645948 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 13 05:55:30.645962 systemd[1]: Stopped verity-setup.service.
Oct 13 05:55:30.645994 systemd-journald[1208]: Collecting audit messages is disabled.
Oct 13 05:55:30.646016 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 13 05:55:30.646028 systemd-journald[1208]: Journal started
Oct 13 05:55:30.646054 systemd-journald[1208]: Runtime Journal (/run/log/journal/12b27228d8364a1a848a75cf17cd78ee) is 6M, max 48.4M, 42.4M free.
Oct 13 05:55:30.352950 systemd[1]: Queued start job for default target multi-user.target.
Oct 13 05:55:30.371891 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 13 05:55:30.372350 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 13 05:55:30.651149 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:55:30.653464 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 13 05:55:30.655290 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 13 05:55:30.657161 systemd[1]: Mounted media.mount - External Media Directory.
Oct 13 05:55:30.658829 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 13 05:55:30.660668 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:55:30.662532 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:55:30.664397 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:55:30.666575 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:55:30.668849 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:55:30.669075 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:55:30.671313 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:55:30.671529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:55:30.673613 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:55:30.673813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:55:30.675788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:55:30.676008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:55:30.678239 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:55:30.678458 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:55:30.680459 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:55:30.680674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:55:30.682734 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:55:30.684880 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:55:30.687564 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:55:30.690046 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Oct 13 05:55:30.703600 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:55:30.706681 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:55:30.709553 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:55:30.711327 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:55:30.711357 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:55:30.713989 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:55:30.720064 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 05:55:30.722402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:55:30.723834 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:55:30.728236 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:55:30.730393 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:55:30.731618 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:55:30.733793 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:55:30.741004 systemd-journald[1208]: Time spent on flushing to /var/log/journal/12b27228d8364a1a848a75cf17cd78ee is 29.579ms for 1066 entries. Oct 13 05:55:30.741004 systemd-journald[1208]: System Journal (/var/log/journal/12b27228d8364a1a848a75cf17cd78ee) is 8M, max 195.6M, 187.6M free. Oct 13 05:55:30.799483 systemd-journald[1208]: Received client request to flush runtime journal. 
Oct 13 05:55:30.799554 kernel: loop0: detected capacity change from 0 to 110984 Oct 13 05:55:30.799586 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:55:30.734766 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:55:30.738253 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:55:30.742560 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:55:30.747333 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:55:30.753323 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:55:30.757416 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:55:30.766656 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:55:30.771063 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:55:30.773482 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:55:30.777014 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:55:30.784262 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Oct 13 05:55:30.784275 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Oct 13 05:55:30.790730 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:55:30.795216 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:55:30.807875 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:55:30.816896 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Oct 13 05:55:30.817246 kernel: loop1: detected capacity change from 0 to 128016 Oct 13 05:55:30.836676 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:55:30.840500 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:55:30.844262 kernel: loop2: detected capacity change from 0 to 224512 Oct 13 05:55:30.865738 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Oct 13 05:55:30.865755 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Oct 13 05:55:30.870829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:55:30.876177 kernel: loop3: detected capacity change from 0 to 110984 Oct 13 05:55:30.886169 kernel: loop4: detected capacity change from 0 to 128016 Oct 13 05:55:30.897419 kernel: loop5: detected capacity change from 0 to 224512 Oct 13 05:55:30.905473 (sd-merge)[1276]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 13 05:55:30.906345 (sd-merge)[1276]: Merged extensions into '/usr'. Oct 13 05:55:30.912068 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:55:30.912182 systemd[1]: Reloading... Oct 13 05:55:30.973160 zram_generator::config[1302]: No configuration found. Oct 13 05:55:31.062797 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:55:31.161931 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:55:31.162058 systemd[1]: Reloading finished in 249 ms. Oct 13 05:55:31.191289 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:55:31.193666 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:55:31.208960 systemd[1]: Starting ensure-sysext.service... 
Oct 13 05:55:31.211583 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:55:31.236204 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:55:31.242892 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:55:31.242927 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:55:31.243003 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:55:31.243329 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:55:31.243588 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:55:31.244482 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:55:31.244753 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Oct 13 05:55:31.244829 systemd-tmpfiles[1340]: ACLs are not supported, ignoring. Oct 13 05:55:31.245385 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:55:31.245399 systemd[1]: Reloading... Oct 13 05:55:31.248989 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:55:31.249000 systemd-tmpfiles[1340]: Skipping /boot Oct 13 05:55:31.258640 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:55:31.258772 systemd-tmpfiles[1340]: Skipping /boot Oct 13 05:55:31.289790 systemd-udevd[1343]: Using default interface naming scheme 'v255'. Oct 13 05:55:31.300153 zram_generator::config[1368]: No configuration found. 
Oct 13 05:55:31.459153 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 13 05:55:31.459238 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:55:31.467164 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:55:31.481016 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 13 05:55:31.481283 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 13 05:55:31.484145 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 13 05:55:31.541781 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:55:31.545656 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:55:31.546268 systemd[1]: Reloading finished in 300 ms. Oct 13 05:55:31.564590 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:55:31.588070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:55:31.629651 systemd[1]: Finished ensure-sysext.service. Oct 13 05:55:31.642306 kernel: kvm_amd: TSC scaling supported Oct 13 05:55:31.642350 kernel: kvm_amd: Nested Virtualization enabled Oct 13 05:55:31.642372 kernel: kvm_amd: Nested Paging enabled Oct 13 05:55:31.643862 kernel: kvm_amd: LBR virtualization supported Oct 13 05:55:31.643878 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 13 05:55:31.644751 kernel: kvm_amd: Virtual GIF supported Oct 13 05:55:31.663000 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:55:31.664596 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:55:31.667635 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:55:31.670070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 13 05:55:31.677947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:55:31.680691 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:55:31.684355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:55:31.687651 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:55:31.691589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:55:31.692145 kernel: EDAC MC: Ver: 3.0.0 Oct 13 05:55:31.693359 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:55:31.695371 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:55:31.697166 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:55:31.701418 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:55:31.705872 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:55:31.709490 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:55:31.714387 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:55:31.717624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:55:31.721458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:55:31.722537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:55:31.724325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Oct 13 05:55:31.726862 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:55:31.727068 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:55:31.729219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:55:31.729435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:55:31.732216 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:55:31.732456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:55:31.736881 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:55:31.739740 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:55:31.750325 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:55:31.750431 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:55:31.751666 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:55:31.753280 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:55:31.753803 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:55:31.762691 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:55:31.762975 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:55:31.767694 augenrules[1506]: No rules Oct 13 05:55:31.770468 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:55:31.770736 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Oct 13 05:55:31.774040 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:55:31.793746 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:55:31.803689 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:55:31.871847 systemd-networkd[1475]: lo: Link UP Oct 13 05:55:31.872120 systemd-networkd[1475]: lo: Gained carrier Oct 13 05:55:31.873733 systemd-networkd[1475]: Enumeration completed Oct 13 05:55:31.874239 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:55:31.874414 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:55:31.874457 systemd-networkd[1475]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:55:31.875023 systemd-networkd[1475]: eth0: Link UP Oct 13 05:55:31.875386 systemd-networkd[1475]: eth0: Gained carrier Oct 13 05:55:31.875448 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:55:31.877609 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:55:31.880828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:55:31.882785 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:55:31.883972 systemd-resolved[1476]: Positive Trust Anchors: Oct 13 05:55:31.884235 systemd-resolved[1476]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:55:31.884306 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:55:31.884875 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:55:31.887936 systemd-resolved[1476]: Defaulting to hostname 'linux'. Oct 13 05:55:31.889518 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:55:31.891393 systemd[1]: Reached target network.target - Network. Oct 13 05:55:31.892190 systemd-networkd[1475]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:55:31.892808 systemd-timesyncd[1477]: Network configuration changed, trying to establish connection. Oct 13 05:55:31.892960 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:55:32.871326 systemd-resolved[1476]: Clock change detected. Flushing caches. Oct 13 05:55:32.871367 systemd-timesyncd[1477]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 05:55:32.871419 systemd-timesyncd[1477]: Initial clock synchronization to Mon 2025-10-13 05:55:32.871288 UTC. Oct 13 05:55:32.872283 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:55:32.874160 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:55:32.876227 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Oct 13 05:55:32.878296 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:55:32.880387 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:55:32.882308 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:55:32.884417 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:55:32.886507 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:55:32.886538 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:55:32.888082 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:55:32.890509 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:55:32.894079 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:55:32.897619 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:55:32.899811 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:55:32.901846 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:55:32.908680 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:55:32.910660 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:55:32.913322 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:55:32.916083 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:55:32.917653 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:55:32.919243 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:55:32.919273 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Oct 13 05:55:32.920209 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:55:32.922831 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:55:32.926463 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:55:32.927472 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:55:32.937622 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:55:32.939387 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:55:32.941279 jq[1533]: false Oct 13 05:55:32.941651 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:55:32.944621 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:55:32.947273 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:55:32.951448 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:55:32.954277 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:55:32.954843 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing passwd entry cache Oct 13 05:55:32.954851 oslogin_cache_refresh[1535]: Refreshing passwd entry cache Oct 13 05:55:32.957292 extend-filesystems[1534]: Found /dev/vda6 Oct 13 05:55:32.958026 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:55:32.961979 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:55:32.962237 extend-filesystems[1534]: Found /dev/vda9 Oct 13 05:55:32.962444 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Oct 13 05:55:32.963609 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:55:32.964185 extend-filesystems[1534]: Checking size of /dev/vda9 Oct 13 05:55:32.967358 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting users, quitting Oct 13 05:55:32.967358 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:55:32.967358 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing group entry cache Oct 13 05:55:32.966673 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:55:32.964989 oslogin_cache_refresh[1535]: Failure getting users, quitting Oct 13 05:55:32.965007 oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:55:32.965058 oslogin_cache_refresh[1535]: Refreshing group entry cache Oct 13 05:55:32.973578 extend-filesystems[1534]: Resized partition /dev/vda9 Oct 13 05:55:32.975310 extend-filesystems[1557]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:55:32.975493 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:55:32.979352 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting groups, quitting Oct 13 05:55:32.979352 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:55:32.976240 oslogin_cache_refresh[1535]: Failure getting groups, quitting Oct 13 05:55:32.976847 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:55:32.976251 oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:55:32.982183 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Oct 13 05:55:32.982532 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:55:32.982880 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:55:32.983662 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:55:32.984278 jq[1553]: true Oct 13 05:55:32.987350 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 13 05:55:32.987653 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:55:32.987896 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:55:32.991761 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:55:32.992005 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:55:33.001762 update_engine[1551]: I20251013 05:55:33.001175 1551 main.cc:92] Flatcar Update Engine starting Oct 13 05:55:33.018654 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:55:33.043523 tar[1560]: linux-amd64/LICENSE Oct 13 05:55:33.036133 dbus-daemon[1531]: [system] SELinux support is enabled Oct 13 05:55:33.037143 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:55:33.044071 update_engine[1551]: I20251013 05:55:33.043708 1551 update_check_scheduler.cc:74] Next update check in 10m57s Oct 13 05:55:33.044102 jq[1562]: true Oct 13 05:55:33.047366 tar[1560]: linux-amd64/helm Oct 13 05:55:33.051353 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 13 05:55:33.053837 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:55:33.055870 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Oct 13 05:55:33.055889 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:55:33.058173 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:55:33.058187 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:55:33.061669 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:55:33.072423 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 05:55:33.072423 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 05:55:33.072423 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 13 05:55:33.078520 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Oct 13 05:55:33.073598 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:55:33.076769 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:55:33.080117 systemd-logind[1545]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:55:33.080136 systemd-logind[1545]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:55:33.080648 systemd-logind[1545]: New seat seat0. Oct 13 05:55:33.082252 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:55:33.104355 bash[1595]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:55:33.112685 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:55:33.115524 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 13 05:55:33.118273 locksmithd[1584]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:55:33.213498 containerd[1563]: time="2025-10-13T05:55:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:55:33.214211 containerd[1563]: time="2025-10-13T05:55:33.214151983Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:55:33.222822 containerd[1563]: time="2025-10-13T05:55:33.222681676Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.799µs" Oct 13 05:55:33.222822 containerd[1563]: time="2025-10-13T05:55:33.222708867Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:55:33.222822 containerd[1563]: time="2025-10-13T05:55:33.222725579Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:55:33.222904 containerd[1563]: time="2025-10-13T05:55:33.222876061Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:55:33.222904 containerd[1563]: time="2025-10-13T05:55:33.222890758Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:55:33.222941 containerd[1563]: time="2025-10-13T05:55:33.222913160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223009 containerd[1563]: time="2025-10-13T05:55:33.222973874Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223009 containerd[1563]: time="2025-10-13T05:55:33.222989393Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223304 containerd[1563]: time="2025-10-13T05:55:33.223266152Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223304 containerd[1563]: time="2025-10-13T05:55:33.223282192Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223304 containerd[1563]: time="2025-10-13T05:55:33.223292301Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223304 containerd[1563]: time="2025-10-13T05:55:33.223300126Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223644 containerd[1563]: time="2025-10-13T05:55:33.223418488Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223644 containerd[1563]: time="2025-10-13T05:55:33.223632910Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223688 containerd[1563]: time="2025-10-13T05:55:33.223661173Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:55:33.223688 containerd[1563]: time="2025-10-13T05:55:33.223671192Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:55:33.223732 containerd[1563]: time="2025-10-13T05:55:33.223702721Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:55:33.224245 containerd[1563]: time="2025-10-13T05:55:33.224220041Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:55:33.224455 containerd[1563]: time="2025-10-13T05:55:33.224298438Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:55:33.229954 containerd[1563]: time="2025-10-13T05:55:33.229923134Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:55:33.229997 containerd[1563]: time="2025-10-13T05:55:33.229970533Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:55:33.229997 containerd[1563]: time="2025-10-13T05:55:33.229985481Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:55:33.230035 containerd[1563]: time="2025-10-13T05:55:33.229998125Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:55:33.230035 containerd[1563]: time="2025-10-13T05:55:33.230011620Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:55:33.230035 containerd[1563]: time="2025-10-13T05:55:33.230022711Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:55:33.230035 containerd[1563]: time="2025-10-13T05:55:33.230033922Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:55:33.230108 containerd[1563]: time="2025-10-13T05:55:33.230047197Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:55:33.230108 containerd[1563]: time="2025-10-13T05:55:33.230060241Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:55:33.230108 containerd[1563]: time="2025-10-13T05:55:33.230071372Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:55:33.230108 containerd[1563]: time="2025-10-13T05:55:33.230081782Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:55:33.230108 containerd[1563]: time="2025-10-13T05:55:33.230094636Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230202949Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230222916Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230235019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230248935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230258473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230273872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230288449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230298889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 
05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230309669Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230326601Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:55:33.230388 containerd[1563]: time="2025-10-13T05:55:33.230362157Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:55:33.230595 containerd[1563]: time="2025-10-13T05:55:33.230424725Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:55:33.230595 containerd[1563]: time="2025-10-13T05:55:33.230437449Z" level=info msg="Start snapshots syncer" Oct 13 05:55:33.230595 containerd[1563]: time="2025-10-13T05:55:33.230464680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:55:33.231086 containerd[1563]: time="2025-10-13T05:55:33.230683049Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:55:33.231086 containerd[1563]: time="2025-10-13T05:55:33.230738473Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:55:33.232129 containerd[1563]: time="2025-10-13T05:55:33.232105266Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:55:33.232242 containerd[1563]: time="2025-10-13T05:55:33.232217055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:55:33.232278 containerd[1563]: time="2025-10-13T05:55:33.232249957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:55:33.232278 containerd[1563]: time="2025-10-13T05:55:33.232261789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:55:33.232278 containerd[1563]: time="2025-10-13T05:55:33.232271137Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:55:33.232347 containerd[1563]: time="2025-10-13T05:55:33.232287337Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:55:33.232347 containerd[1563]: time="2025-10-13T05:55:33.232298177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:55:33.232347 containerd[1563]: time="2025-10-13T05:55:33.232308126Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:55:33.232416 containerd[1563]: time="2025-10-13T05:55:33.232341689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:55:33.232416 containerd[1563]: time="2025-10-13T05:55:33.232377917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:55:33.232416 containerd[1563]: time="2025-10-13T05:55:33.232391382Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:55:33.232469 containerd[1563]: time="2025-10-13T05:55:33.232424685Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:55:33.232469 containerd[1563]: time="2025-10-13T05:55:33.232435926Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:55:33.232469 containerd[1563]: time="2025-10-13T05:55:33.232444532Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:55:33.232469 containerd[1563]: time="2025-10-13T05:55:33.232453358Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:55:33.232469 containerd[1563]: time="2025-10-13T05:55:33.232461143Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:55:33.232469 containerd[1563]: time="2025-10-13T05:55:33.232470010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:55:33.232578 containerd[1563]: time="2025-10-13T05:55:33.232480259Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:55:33.232578 containerd[1563]: time="2025-10-13T05:55:33.232513932Z" level=info msg="runtime interface created" Oct 13 05:55:33.232578 containerd[1563]: time="2025-10-13T05:55:33.232519703Z" level=info msg="created NRI interface" Oct 13 05:55:33.232578 containerd[1563]: time="2025-10-13T05:55:33.232527567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:55:33.232578 containerd[1563]: time="2025-10-13T05:55:33.232554338Z" level=info msg="Connect containerd service" Oct 13 05:55:33.232578 containerd[1563]: time="2025-10-13T05:55:33.232575798Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:55:33.233345 
containerd[1563]: time="2025-10-13T05:55:33.233284507Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:55:33.280403 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:55:33.304087 containerd[1563]: time="2025-10-13T05:55:33.303989579Z" level=info msg="Start subscribing containerd event" Oct 13 05:55:33.304189 containerd[1563]: time="2025-10-13T05:55:33.304076131Z" level=info msg="Start recovering state" Oct 13 05:55:33.304189 containerd[1563]: time="2025-10-13T05:55:33.304107289Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:55:33.304189 containerd[1563]: time="2025-10-13T05:55:33.304165348Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:55:33.304189 containerd[1563]: time="2025-10-13T05:55:33.304177501Z" level=info msg="Start event monitor" Oct 13 05:55:33.304285 containerd[1563]: time="2025-10-13T05:55:33.304218438Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:55:33.304285 containerd[1563]: time="2025-10-13T05:55:33.304225942Z" level=info msg="Start streaming server" Oct 13 05:55:33.304285 containerd[1563]: time="2025-10-13T05:55:33.304248875Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:55:33.304285 containerd[1563]: time="2025-10-13T05:55:33.304263693Z" level=info msg="runtime interface starting up..." Oct 13 05:55:33.304285 containerd[1563]: time="2025-10-13T05:55:33.304273160Z" level=info msg="starting plugins..." Oct 13 05:55:33.304395 containerd[1563]: time="2025-10-13T05:55:33.304296685Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:55:33.304390 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Oct 13 05:55:33.305182 containerd[1563]: time="2025-10-13T05:55:33.305009792Z" level=info msg="containerd successfully booted in 0.092062s" Oct 13 05:55:33.307436 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:55:33.312371 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:55:33.333317 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:55:33.333630 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:55:33.336952 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:55:33.362884 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:55:33.366217 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:55:33.368802 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:55:33.370874 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:55:33.381665 tar[1560]: linux-amd64/README.md Oct 13 05:55:33.399149 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:55:34.014558 systemd-networkd[1475]: eth0: Gained IPv6LL Oct 13 05:55:34.017869 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:55:34.020504 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:55:34.023662 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 05:55:34.026710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:55:34.029651 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:55:34.062714 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:55:34.065189 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 05:55:34.065470 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Oct 13 05:55:34.068305 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 05:55:34.760054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:34.762308 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:55:34.764188 systemd[1]: Startup finished in 2.880s (kernel) + 5.189s (initrd) + 4.006s (userspace) = 12.076s. Oct 13 05:55:34.777706 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:55:35.168469 kubelet[1669]: E1013 05:55:35.168352 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:55:35.172255 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:55:35.172521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:55:35.172887 systemd[1]: kubelet.service: Consumed 963ms CPU time, 265.5M memory peak. Oct 13 05:55:38.587130 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:55:38.588214 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:45906.service - OpenSSH per-connection server daemon (10.0.0.1:45906). Oct 13 05:55:38.663487 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 45906 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:55:38.664957 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:38.671197 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:55:38.672287 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Oct 13 05:55:38.678134 systemd-logind[1545]: New session 1 of user core. Oct 13 05:55:38.695202 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:55:38.698559 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:55:38.716538 (systemd)[1687]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:55:38.718996 systemd-logind[1545]: New session c1 of user core. Oct 13 05:55:38.854056 systemd[1687]: Queued start job for default target default.target. Oct 13 05:55:38.876518 systemd[1687]: Created slice app.slice - User Application Slice. Oct 13 05:55:38.876542 systemd[1687]: Reached target paths.target - Paths. Oct 13 05:55:38.876583 systemd[1687]: Reached target timers.target - Timers. Oct 13 05:55:38.877910 systemd[1687]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:55:38.888367 systemd[1687]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:55:38.888544 systemd[1687]: Reached target sockets.target - Sockets. Oct 13 05:55:38.888584 systemd[1687]: Reached target basic.target - Basic System. Oct 13 05:55:38.888623 systemd[1687]: Reached target default.target - Main User Target. Oct 13 05:55:38.888653 systemd[1687]: Startup finished in 163ms. Oct 13 05:55:38.888845 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:55:38.890278 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:55:38.954911 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:45912.service - OpenSSH per-connection server daemon (10.0.0.1:45912). Oct 13 05:55:38.994377 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 45912 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:55:38.995646 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:38.999616 systemd-logind[1545]: New session 2 of user core. 
Oct 13 05:55:39.013451 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:55:39.064810 sshd[1701]: Connection closed by 10.0.0.1 port 45912 Oct 13 05:55:39.065161 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:39.075821 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:45912.service: Deactivated successfully. Oct 13 05:55:39.077591 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:55:39.078272 systemd-logind[1545]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:55:39.080732 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:45924.service - OpenSSH per-connection server daemon (10.0.0.1:45924). Oct 13 05:55:39.081318 systemd-logind[1545]: Removed session 2. Oct 13 05:55:39.134200 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 45924 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:55:39.135404 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:39.139658 systemd-logind[1545]: New session 3 of user core. Oct 13 05:55:39.149489 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:55:39.198098 sshd[1710]: Connection closed by 10.0.0.1 port 45924 Oct 13 05:55:39.198465 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:39.211706 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:45924.service: Deactivated successfully. Oct 13 05:55:39.213415 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:55:39.214061 systemd-logind[1545]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:55:39.216681 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:45936.service - OpenSSH per-connection server daemon (10.0.0.1:45936). Oct 13 05:55:39.217227 systemd-logind[1545]: Removed session 3. 
Oct 13 05:55:39.263250 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 45936 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:55:39.264467 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:39.268429 systemd-logind[1545]: New session 4 of user core. Oct 13 05:55:39.280447 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:55:39.332284 sshd[1720]: Connection closed by 10.0.0.1 port 45936 Oct 13 05:55:39.332623 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:39.340802 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:45936.service: Deactivated successfully. Oct 13 05:55:39.342561 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:55:39.343254 systemd-logind[1545]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:55:39.345805 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:45948.service - OpenSSH per-connection server daemon (10.0.0.1:45948). Oct 13 05:55:39.346351 systemd-logind[1545]: Removed session 4. Oct 13 05:55:39.394104 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 45948 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:55:39.395292 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:39.399422 systemd-logind[1545]: New session 5 of user core. Oct 13 05:55:39.411459 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 05:55:39.467996 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:55:39.468304 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:55:39.489990 sudo[1731]: pam_unix(sudo:session): session closed for user root Oct 13 05:55:39.491481 sshd[1730]: Connection closed by 10.0.0.1 port 45948 Oct 13 05:55:39.491836 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:39.508049 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:45948.service: Deactivated successfully. Oct 13 05:55:39.509890 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:55:39.510648 systemd-logind[1545]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:55:39.513409 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:45954.service - OpenSSH per-connection server daemon (10.0.0.1:45954). Oct 13 05:55:39.513942 systemd-logind[1545]: Removed session 5. Oct 13 05:55:39.565743 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 45954 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:55:39.566921 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:39.570953 systemd-logind[1545]: New session 6 of user core. Oct 13 05:55:39.581481 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 13 05:55:39.633743 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:55:39.634048 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:55:39.816113 sudo[1743]: pam_unix(sudo:session): session closed for user root Oct 13 05:55:39.822637 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:55:39.822941 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:55:39.833585 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:55:39.884398 augenrules[1765]: No rules Oct 13 05:55:39.886114 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:55:39.886426 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:55:39.887602 sudo[1742]: pam_unix(sudo:session): session closed for user root Oct 13 05:55:39.889277 sshd[1741]: Connection closed by 10.0.0.1 port 45954 Oct 13 05:55:39.889587 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:39.902759 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:45954.service: Deactivated successfully. Oct 13 05:55:39.904409 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:55:39.905092 systemd-logind[1545]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:55:39.907556 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:45964.service - OpenSSH per-connection server daemon (10.0.0.1:45964). Oct 13 05:55:39.908093 systemd-logind[1545]: Removed session 6. Oct 13 05:55:39.951345 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 45964 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:55:39.952556 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:39.956732 systemd-logind[1545]: New session 7 of user core. 
Oct 13 05:55:39.970456 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 05:55:40.021990 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:55:40.022309 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:55:40.312395 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:55:40.326627 (dockerd)[1798]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:55:40.547548 dockerd[1798]: time="2025-10-13T05:55:40.547481695Z" level=info msg="Starting up" Oct 13 05:55:40.548303 dockerd[1798]: time="2025-10-13T05:55:40.548281404Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:55:40.559969 dockerd[1798]: time="2025-10-13T05:55:40.559927571Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:55:40.739571 dockerd[1798]: time="2025-10-13T05:55:40.739518854Z" level=info msg="Loading containers: start." Oct 13 05:55:40.749359 kernel: Initializing XFRM netlink socket Oct 13 05:55:40.985122 systemd-networkd[1475]: docker0: Link UP Oct 13 05:55:40.990320 dockerd[1798]: time="2025-10-13T05:55:40.990243713Z" level=info msg="Loading containers: done." 
Oct 13 05:55:41.006636 dockerd[1798]: time="2025-10-13T05:55:41.006591044Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:55:41.006800 dockerd[1798]: time="2025-10-13T05:55:41.006663280Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:55:41.006800 dockerd[1798]: time="2025-10-13T05:55:41.006749862Z" level=info msg="Initializing buildkit" Oct 13 05:55:41.035049 dockerd[1798]: time="2025-10-13T05:55:41.035010952Z" level=info msg="Completed buildkit initialization" Oct 13 05:55:41.040746 dockerd[1798]: time="2025-10-13T05:55:41.040698556Z" level=info msg="Daemon has completed initialization" Oct 13 05:55:41.040998 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:55:41.041366 dockerd[1798]: time="2025-10-13T05:55:41.040858416Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:55:41.695141 containerd[1563]: time="2025-10-13T05:55:41.695109607Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 13 05:55:42.277873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3388243541.mount: Deactivated successfully. 
Oct 13 05:55:43.119411 containerd[1563]: time="2025-10-13T05:55:43.119355858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:43.120180 containerd[1563]: time="2025-10-13T05:55:43.120132955Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Oct 13 05:55:43.121630 containerd[1563]: time="2025-10-13T05:55:43.121597301Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:43.123955 containerd[1563]: time="2025-10-13T05:55:43.123927610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:43.125129 containerd[1563]: time="2025-10-13T05:55:43.125074681Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.429932343s" Oct 13 05:55:43.125129 containerd[1563]: time="2025-10-13T05:55:43.125114265Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 13 05:55:43.125785 containerd[1563]: time="2025-10-13T05:55:43.125738115Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 13 05:55:44.417283 containerd[1563]: time="2025-10-13T05:55:44.417236354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:44.418155 containerd[1563]: time="2025-10-13T05:55:44.418120211Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Oct 13 05:55:44.419460 containerd[1563]: time="2025-10-13T05:55:44.419417643Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:44.422027 containerd[1563]: time="2025-10-13T05:55:44.421975179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:44.422908 containerd[1563]: time="2025-10-13T05:55:44.422885035Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.297108347s" Oct 13 05:55:44.422947 containerd[1563]: time="2025-10-13T05:55:44.422912446Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 13 05:55:44.423428 containerd[1563]: time="2025-10-13T05:55:44.423323046Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 13 05:55:45.232600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:55:45.236473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:55:45.493357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 05:55:45.496981 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:55:45.867705 kubelet[2088]: E1013 05:55:45.867574 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:55:45.870643 containerd[1563]: time="2025-10-13T05:55:45.870603476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:45.871361 containerd[1563]: time="2025-10-13T05:55:45.871319389Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Oct 13 05:55:45.872594 containerd[1563]: time="2025-10-13T05:55:45.872563091Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:45.873751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:55:45.873942 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:55:45.874300 systemd[1]: kubelet.service: Consumed 217ms CPU time, 111.6M memory peak. 
Oct 13 05:55:45.875428 containerd[1563]: time="2025-10-13T05:55:45.875400541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:45.876246 containerd[1563]: time="2025-10-13T05:55:45.876217713Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.452794249s" Oct 13 05:55:45.876285 containerd[1563]: time="2025-10-13T05:55:45.876245375Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 13 05:55:45.876710 containerd[1563]: time="2025-10-13T05:55:45.876689778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 13 05:55:46.957081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760798954.mount: Deactivated successfully. 
Oct 13 05:55:47.503417 containerd[1563]: time="2025-10-13T05:55:47.503352951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:47.504144 containerd[1563]: time="2025-10-13T05:55:47.504114599Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Oct 13 05:55:47.505349 containerd[1563]: time="2025-10-13T05:55:47.505290033Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:47.507069 containerd[1563]: time="2025-10-13T05:55:47.507037089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:47.507597 containerd[1563]: time="2025-10-13T05:55:47.507554780Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.630839765s" Oct 13 05:55:47.507631 containerd[1563]: time="2025-10-13T05:55:47.507595977Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 13 05:55:47.508170 containerd[1563]: time="2025-10-13T05:55:47.508115561Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 13 05:55:48.163153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717119561.mount: Deactivated successfully. 
Oct 13 05:55:48.802192 containerd[1563]: time="2025-10-13T05:55:48.802132733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:48.802893 containerd[1563]: time="2025-10-13T05:55:48.802849537Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Oct 13 05:55:48.804053 containerd[1563]: time="2025-10-13T05:55:48.804019260Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:48.806612 containerd[1563]: time="2025-10-13T05:55:48.806564262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:48.807445 containerd[1563]: time="2025-10-13T05:55:48.807417933Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.299267887s" Oct 13 05:55:48.807445 containerd[1563]: time="2025-10-13T05:55:48.807442970Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 13 05:55:48.808001 containerd[1563]: time="2025-10-13T05:55:48.807969727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 05:55:49.359560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746531581.mount: Deactivated successfully. 
Oct 13 05:55:49.366547 containerd[1563]: time="2025-10-13T05:55:49.366504430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:55:49.367243 containerd[1563]: time="2025-10-13T05:55:49.367216415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 13 05:55:49.368410 containerd[1563]: time="2025-10-13T05:55:49.368367603Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:55:49.370259 containerd[1563]: time="2025-10-13T05:55:49.370228763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:55:49.370822 containerd[1563]: time="2025-10-13T05:55:49.370783573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 562.781485ms" Oct 13 05:55:49.370822 containerd[1563]: time="2025-10-13T05:55:49.370816335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 13 05:55:49.371230 containerd[1563]: time="2025-10-13T05:55:49.371204022Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 13 05:55:49.882466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404873547.mount: 
Deactivated successfully. Oct 13 05:55:51.808557 containerd[1563]: time="2025-10-13T05:55:51.808496154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:51.811159 containerd[1563]: time="2025-10-13T05:55:51.811111177Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Oct 13 05:55:51.812610 containerd[1563]: time="2025-10-13T05:55:51.812579169Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:51.815199 containerd[1563]: time="2025-10-13T05:55:51.815174255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:51.816206 containerd[1563]: time="2025-10-13T05:55:51.816155355Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.444919433s" Oct 13 05:55:51.816206 containerd[1563]: time="2025-10-13T05:55:51.816203635Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 13 05:55:54.091556 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:54.091715 systemd[1]: kubelet.service: Consumed 217ms CPU time, 111.6M memory peak. Oct 13 05:55:54.093765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 13 05:55:54.117100 systemd[1]: Reload requested from client PID 2247 ('systemctl') (unit session-7.scope)... Oct 13 05:55:54.117113 systemd[1]: Reloading... Oct 13 05:55:54.200363 zram_generator::config[2289]: No configuration found. Oct 13 05:55:54.495033 systemd[1]: Reloading finished in 377 ms. Oct 13 05:55:54.562989 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:55:54.563086 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:55:54.563413 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:54.563458 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.2M memory peak. Oct 13 05:55:54.564853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:55:54.726356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:54.736616 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:55:54.771244 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:55:54.771244 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:55:54.771244 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:55:54.771576 kubelet[2337]: I1013 05:55:54.771255 2337 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:55:55.142015 kubelet[2337]: I1013 05:55:55.141940 2337 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 05:55:55.142015 kubelet[2337]: I1013 05:55:55.141970 2337 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:55:55.142223 kubelet[2337]: I1013 05:55:55.142201 2337 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 05:55:55.162921 kubelet[2337]: E1013 05:55:55.162868 2337 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:55:55.163868 kubelet[2337]: I1013 05:55:55.163844 2337 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:55:55.171396 kubelet[2337]: I1013 05:55:55.170499 2337 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:55:55.175637 kubelet[2337]: I1013 05:55:55.175608 2337 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 05:55:55.175868 kubelet[2337]: I1013 05:55:55.175822 2337 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:55:55.176029 kubelet[2337]: I1013 05:55:55.175854 2337 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:55:55.176029 kubelet[2337]: I1013 05:55:55.176028 2337 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 13 05:55:55.176148 kubelet[2337]: I1013 05:55:55.176037 2337 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 05:55:55.176170 kubelet[2337]: I1013 05:55:55.176165 2337 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:55:55.178510 kubelet[2337]: I1013 05:55:55.178478 2337 kubelet.go:446] "Attempting to sync node with API server" Oct 13 05:55:55.178510 kubelet[2337]: I1013 05:55:55.178509 2337 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:55:55.178772 kubelet[2337]: I1013 05:55:55.178529 2337 kubelet.go:352] "Adding apiserver pod source" Oct 13 05:55:55.178772 kubelet[2337]: I1013 05:55:55.178540 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:55:55.182264 kubelet[2337]: W1013 05:55:55.182215 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 13 05:55:55.182314 kubelet[2337]: E1013 05:55:55.182283 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:55:55.182397 kubelet[2337]: W1013 05:55:55.182241 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 13 05:55:55.182485 kubelet[2337]: E1013 05:55:55.182468 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:55:55.184158 kubelet[2337]: I1013 05:55:55.184138 2337 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:55:55.184999 kubelet[2337]: I1013 05:55:55.184971 2337 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 05:55:55.185436 kubelet[2337]: W1013 05:55:55.185411 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 05:55:55.187154 kubelet[2337]: I1013 05:55:55.187126 2337 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:55:55.187197 kubelet[2337]: I1013 05:55:55.187161 2337 server.go:1287] "Started kubelet" Oct 13 05:55:55.188460 kubelet[2337]: I1013 05:55:55.188436 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:55:55.189373 kubelet[2337]: I1013 05:55:55.189265 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:55:55.189635 kubelet[2337]: I1013 05:55:55.189609 2337 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:55:55.189688 kubelet[2337]: I1013 05:55:55.189667 2337 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:55:55.190931 kubelet[2337]: I1013 05:55:55.190555 2337 server.go:479] "Adding debug handlers to kubelet server" Oct 13 05:55:55.191682 kubelet[2337]: I1013 05:55:55.191656 2337 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:55:55.192310 kubelet[2337]: E1013 05:55:55.191809 2337 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:55.192310 kubelet[2337]: I1013 05:55:55.191872 2337 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:55:55.192310 kubelet[2337]: I1013 05:55:55.192083 2337 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:55:55.192310 kubelet[2337]: I1013 05:55:55.192179 2337 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:55:55.192476 kubelet[2337]: W1013 05:55:55.192441 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 13 05:55:55.192509 kubelet[2337]: E1013 05:55:55.192480 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:55:55.193403 kubelet[2337]: E1013 05:55:55.192821 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms" Oct 13 05:55:55.193403 kubelet[2337]: I1013 05:55:55.193220 2337 factory.go:221] Registration of the systemd container factory successfully Oct 13 05:55:55.193403 kubelet[2337]: I1013 05:55:55.193280 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:55:55.194525 kubelet[2337]: I1013 05:55:55.194502 2337 factory.go:221] Registration of the containerd container factory 
successfully Oct 13 05:55:55.196360 kubelet[2337]: E1013 05:55:55.195420 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df750a9a1f5be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:55:55.18714003 +0000 UTC m=+0.447046242,LastTimestamp:2025-10-13 05:55:55.18714003 +0000 UTC m=+0.447046242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:55:55.201017 kubelet[2337]: E1013 05:55:55.200987 2337 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:55:55.206297 kubelet[2337]: I1013 05:55:55.206266 2337 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:55:55.206297 kubelet[2337]: I1013 05:55:55.206282 2337 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:55:55.206297 kubelet[2337]: I1013 05:55:55.206296 2337 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:55:55.207806 kubelet[2337]: I1013 05:55:55.207759 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 05:55:55.209041 kubelet[2337]: I1013 05:55:55.209006 2337 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:55:55.209041 kubelet[2337]: I1013 05:55:55.209030 2337 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 05:55:55.209120 kubelet[2337]: I1013 05:55:55.209049 2337 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:55:55.209120 kubelet[2337]: I1013 05:55:55.209056 2337 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 05:55:55.209120 kubelet[2337]: E1013 05:55:55.209096 2337 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:55:55.215960 kubelet[2337]: W1013 05:55:55.215607 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 13 05:55:55.215960 kubelet[2337]: E1013 05:55:55.215637 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:55:55.292261 kubelet[2337]: E1013 05:55:55.292221 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:55.309532 kubelet[2337]: E1013 05:55:55.309501 2337 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 05:55:55.392865 kubelet[2337]: E1013 05:55:55.392765 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:55.394120 kubelet[2337]: E1013 05:55:55.394082 2337 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms" Oct 13 05:55:55.493223 kubelet[2337]: E1013 05:55:55.493185 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:55.510400 kubelet[2337]: E1013 05:55:55.510373 2337 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 05:55:55.531350 kubelet[2337]: I1013 05:55:55.531312 2337 policy_none.go:49] "None policy: Start" Oct 13 05:55:55.531350 kubelet[2337]: I1013 05:55:55.531353 2337 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:55:55.531437 kubelet[2337]: I1013 05:55:55.531366 2337 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:55:55.536835 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:55:55.555243 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:55:55.558248 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 13 05:55:55.566166 kubelet[2337]: I1013 05:55:55.566132 2337 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 05:55:55.566609 kubelet[2337]: I1013 05:55:55.566351 2337 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:55:55.566609 kubelet[2337]: I1013 05:55:55.566371 2337 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:55:55.566609 kubelet[2337]: I1013 05:55:55.566580 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:55:55.567435 kubelet[2337]: E1013 05:55:55.567215 2337 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:55:55.567435 kubelet[2337]: E1013 05:55:55.567253 2337 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:55:55.668261 kubelet[2337]: I1013 05:55:55.668191 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:55:55.668616 kubelet[2337]: E1013 05:55:55.668576 2337 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Oct 13 05:55:55.795105 kubelet[2337]: E1013 05:55:55.795080 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms" Oct 13 05:55:55.870036 kubelet[2337]: I1013 05:55:55.870001 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:55:55.870265 kubelet[2337]: E1013 05:55:55.870239 2337 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Oct 13 05:55:55.917784 systemd[1]: Created slice kubepods-burstable-pod671522930a2e815a2f4ca1f8705b7e45.slice - libcontainer container kubepods-burstable-pod671522930a2e815a2f4ca1f8705b7e45.slice. Oct 13 05:55:55.940726 kubelet[2337]: E1013 05:55:55.940660 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:55.943769 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 13 05:55:55.960525 kubelet[2337]: E1013 05:55:55.960493 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:55.962995 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Oct 13 05:55:55.964828 kubelet[2337]: E1013 05:55:55.964797 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:55.996050 kubelet[2337]: I1013 05:55:55.996013 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:55.996050 kubelet[2337]: I1013 05:55:55.996043 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:55.996122 kubelet[2337]: I1013 05:55:55.996066 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/671522930a2e815a2f4ca1f8705b7e45-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"671522930a2e815a2f4ca1f8705b7e45\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:55:55.996122 kubelet[2337]: I1013 05:55:55.996085 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/671522930a2e815a2f4ca1f8705b7e45-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"671522930a2e815a2f4ca1f8705b7e45\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:55:55.996122 kubelet[2337]: I1013 05:55:55.996101 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/671522930a2e815a2f4ca1f8705b7e45-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"671522930a2e815a2f4ca1f8705b7e45\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:55:55.996122 kubelet[2337]: I1013 05:55:55.996115 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:55.996217 kubelet[2337]: I1013 05:55:55.996131 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:55.996217 kubelet[2337]: I1013 05:55:55.996152 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:55.996217 kubelet[2337]: I1013 05:55:55.996168 2337 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:55:56.241497 kubelet[2337]: E1013 05:55:56.241457 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:56.242019 containerd[1563]: time="2025-10-13T05:55:56.241972080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:671522930a2e815a2f4ca1f8705b7e45,Namespace:kube-system,Attempt:0,}" Oct 13 05:55:56.248496 kubelet[2337]: W1013 05:55:56.248442 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 13 05:55:56.248545 kubelet[2337]: E1013 05:55:56.248506 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:55:56.261070 kubelet[2337]: E1013 05:55:56.260955 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:56.261412 containerd[1563]: time="2025-10-13T05:55:56.261385600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 13 05:55:56.262349 containerd[1563]: time="2025-10-13T05:55:56.262310945Z" level=info msg="connecting to shim 747a1ceaadfcc093e11608add510f8dd1b1cec32793170e66affcf00736ed903" address="unix:///run/containerd/s/d35e1a5b5270aab6892f447c263a62c1b1860d89aaf9ef11f8c2d47e9d47e81b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:55:56.265648 kubelet[2337]: E1013 05:55:56.265612 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:56.266074 containerd[1563]: time="2025-10-13T05:55:56.266039877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 13 05:55:56.272122 kubelet[2337]: I1013 05:55:56.272101 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:55:56.272560 kubelet[2337]: E1013 05:55:56.272528 2337 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Oct 13 05:55:56.278888 kubelet[2337]: W1013 05:55:56.278835 2337 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 13 05:55:56.278938 kubelet[2337]: E1013 05:55:56.278896 2337 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError" Oct 13 05:55:56.286454 containerd[1563]: time="2025-10-13T05:55:56.286410622Z" level=info msg="connecting to shim df094ec5eda0737d422d50c762ac31b57e8ecb31d038c2ddbe9e70b32d41271d" address="unix:///run/containerd/s/9cbbddd049e1112295fa2c6d94e595a8c8804c33f2eb276fccee57dba39d8fb2" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:55:56.288493 systemd[1]: Started cri-containerd-747a1ceaadfcc093e11608add510f8dd1b1cec32793170e66affcf00736ed903.scope - libcontainer container 747a1ceaadfcc093e11608add510f8dd1b1cec32793170e66affcf00736ed903. 
Oct 13 05:55:56.298742 containerd[1563]: time="2025-10-13T05:55:56.298564370Z" level=info msg="connecting to shim 427c6e916213392e3cfa864f14ebd0a352b3e3e8b1f43a84a83d0ab1221935f3" address="unix:///run/containerd/s/21f390dc82f8d39df2a43652c0bef77bf57383cb8a3b3020ef0c743f05a11b7c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:55:56.319449 systemd[1]: Started cri-containerd-df094ec5eda0737d422d50c762ac31b57e8ecb31d038c2ddbe9e70b32d41271d.scope - libcontainer container df094ec5eda0737d422d50c762ac31b57e8ecb31d038c2ddbe9e70b32d41271d. Oct 13 05:55:56.323800 systemd[1]: Started cri-containerd-427c6e916213392e3cfa864f14ebd0a352b3e3e8b1f43a84a83d0ab1221935f3.scope - libcontainer container 427c6e916213392e3cfa864f14ebd0a352b3e3e8b1f43a84a83d0ab1221935f3. Oct 13 05:55:56.340578 containerd[1563]: time="2025-10-13T05:55:56.340487296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:671522930a2e815a2f4ca1f8705b7e45,Namespace:kube-system,Attempt:0,} returns sandbox id \"747a1ceaadfcc093e11608add510f8dd1b1cec32793170e66affcf00736ed903\"" Oct 13 05:55:56.343400 kubelet[2337]: E1013 05:55:56.342254 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:56.344603 containerd[1563]: time="2025-10-13T05:55:56.344578447Z" level=info msg="CreateContainer within sandbox \"747a1ceaadfcc093e11608add510f8dd1b1cec32793170e66affcf00736ed903\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:55:56.352842 containerd[1563]: time="2025-10-13T05:55:56.352814990Z" level=info msg="Container 77705fb38af5d78c34b4ea6db248986fcb50dcc41da23066da09e59fcbd9032d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:55:56.362180 containerd[1563]: time="2025-10-13T05:55:56.362147218Z" level=info msg="CreateContainer within sandbox \"747a1ceaadfcc093e11608add510f8dd1b1cec32793170e66affcf00736ed903\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"77705fb38af5d78c34b4ea6db248986fcb50dcc41da23066da09e59fcbd9032d\"" Oct 13 05:55:56.363614 containerd[1563]: time="2025-10-13T05:55:56.362886234Z" level=info msg="StartContainer for \"77705fb38af5d78c34b4ea6db248986fcb50dcc41da23066da09e59fcbd9032d\"" Oct 13 05:55:56.365103 containerd[1563]: time="2025-10-13T05:55:56.365069958Z" level=info msg="connecting to shim 77705fb38af5d78c34b4ea6db248986fcb50dcc41da23066da09e59fcbd9032d" address="unix:///run/containerd/s/d35e1a5b5270aab6892f447c263a62c1b1860d89aaf9ef11f8c2d47e9d47e81b" protocol=ttrpc version=3 Oct 13 05:55:56.371594 containerd[1563]: time="2025-10-13T05:55:56.371321530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"df094ec5eda0737d422d50c762ac31b57e8ecb31d038c2ddbe9e70b32d41271d\"" Oct 13 05:55:56.372550 kubelet[2337]: E1013 05:55:56.372489 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:56.374100 containerd[1563]: time="2025-10-13T05:55:56.374070745Z" level=info msg="CreateContainer within sandbox \"df094ec5eda0737d422d50c762ac31b57e8ecb31d038c2ddbe9e70b32d41271d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:55:56.374788 containerd[1563]: time="2025-10-13T05:55:56.374756931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"427c6e916213392e3cfa864f14ebd0a352b3e3e8b1f43a84a83d0ab1221935f3\"" Oct 13 05:55:56.375600 kubelet[2337]: E1013 05:55:56.375577 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:56.377234 containerd[1563]: time="2025-10-13T05:55:56.377210352Z" level=info msg="CreateContainer within sandbox \"427c6e916213392e3cfa864f14ebd0a352b3e3e8b1f43a84a83d0ab1221935f3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:55:56.383521 containerd[1563]: time="2025-10-13T05:55:56.383492941Z" level=info msg="Container ec2b683a77eb28c71c26187d95036415fb4fad36ca03f6c52dea8389176acd75: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:55:56.385483 systemd[1]: Started cri-containerd-77705fb38af5d78c34b4ea6db248986fcb50dcc41da23066da09e59fcbd9032d.scope - libcontainer container 77705fb38af5d78c34b4ea6db248986fcb50dcc41da23066da09e59fcbd9032d. Oct 13 05:55:56.390789 containerd[1563]: time="2025-10-13T05:55:56.390602822Z" level=info msg="Container 1fe3766dc3d6ecf6fe404ae231503ddf0497d9fac109deba4cec12687da9f580: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:55:56.394822 containerd[1563]: time="2025-10-13T05:55:56.394694935Z" level=info msg="CreateContainer within sandbox \"df094ec5eda0737d422d50c762ac31b57e8ecb31d038c2ddbe9e70b32d41271d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec2b683a77eb28c71c26187d95036415fb4fad36ca03f6c52dea8389176acd75\"" Oct 13 05:55:56.395935 containerd[1563]: time="2025-10-13T05:55:56.395911687Z" level=info msg="StartContainer for \"ec2b683a77eb28c71c26187d95036415fb4fad36ca03f6c52dea8389176acd75\"" Oct 13 05:55:56.397823 containerd[1563]: time="2025-10-13T05:55:56.397461042Z" level=info msg="connecting to shim ec2b683a77eb28c71c26187d95036415fb4fad36ca03f6c52dea8389176acd75" address="unix:///run/containerd/s/9cbbddd049e1112295fa2c6d94e595a8c8804c33f2eb276fccee57dba39d8fb2" protocol=ttrpc version=3 Oct 13 05:55:56.404297 containerd[1563]: time="2025-10-13T05:55:56.404263787Z" level=info msg="CreateContainer within sandbox \"427c6e916213392e3cfa864f14ebd0a352b3e3e8b1f43a84a83d0ab1221935f3\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1fe3766dc3d6ecf6fe404ae231503ddf0497d9fac109deba4cec12687da9f580\"" Oct 13 05:55:56.404995 containerd[1563]: time="2025-10-13T05:55:56.404871566Z" level=info msg="StartContainer for \"1fe3766dc3d6ecf6fe404ae231503ddf0497d9fac109deba4cec12687da9f580\"" Oct 13 05:55:56.406296 containerd[1563]: time="2025-10-13T05:55:56.406272243Z" level=info msg="connecting to shim 1fe3766dc3d6ecf6fe404ae231503ddf0497d9fac109deba4cec12687da9f580" address="unix:///run/containerd/s/21f390dc82f8d39df2a43652c0bef77bf57383cb8a3b3020ef0c743f05a11b7c" protocol=ttrpc version=3 Oct 13 05:55:56.416464 systemd[1]: Started cri-containerd-ec2b683a77eb28c71c26187d95036415fb4fad36ca03f6c52dea8389176acd75.scope - libcontainer container ec2b683a77eb28c71c26187d95036415fb4fad36ca03f6c52dea8389176acd75. Oct 13 05:55:56.424037 systemd[1]: Started cri-containerd-1fe3766dc3d6ecf6fe404ae231503ddf0497d9fac109deba4cec12687da9f580.scope - libcontainer container 1fe3766dc3d6ecf6fe404ae231503ddf0497d9fac109deba4cec12687da9f580. 
Oct 13 05:55:56.435587 containerd[1563]: time="2025-10-13T05:55:56.435532315Z" level=info msg="StartContainer for \"77705fb38af5d78c34b4ea6db248986fcb50dcc41da23066da09e59fcbd9032d\" returns successfully" Oct 13 05:55:56.478473 containerd[1563]: time="2025-10-13T05:55:56.478439506Z" level=info msg="StartContainer for \"1fe3766dc3d6ecf6fe404ae231503ddf0497d9fac109deba4cec12687da9f580\" returns successfully" Oct 13 05:55:56.600956 containerd[1563]: time="2025-10-13T05:55:56.600844676Z" level=info msg="StartContainer for \"ec2b683a77eb28c71c26187d95036415fb4fad36ca03f6c52dea8389176acd75\" returns successfully" Oct 13 05:55:57.074481 kubelet[2337]: I1013 05:55:57.074451 2337 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:55:57.221615 kubelet[2337]: E1013 05:55:57.221553 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:57.221797 kubelet[2337]: E1013 05:55:57.221675 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:57.225426 kubelet[2337]: E1013 05:55:57.225403 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:57.225495 kubelet[2337]: E1013 05:55:57.225487 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:57.226153 kubelet[2337]: E1013 05:55:57.226113 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:57.226238 kubelet[2337]: E1013 05:55:57.226202 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:57.343927 kubelet[2337]: E1013 05:55:57.343807 2337 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:55:57.541826 kubelet[2337]: I1013 05:55:57.541780 2337 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:55:57.541826 kubelet[2337]: E1013 05:55:57.541813 2337 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 05:55:57.548665 kubelet[2337]: E1013 05:55:57.548628 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:57.649300 kubelet[2337]: E1013 05:55:57.649206 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:57.750209 kubelet[2337]: E1013 05:55:57.750166 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:57.850883 kubelet[2337]: E1013 05:55:57.850853 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:57.951536 kubelet[2337]: E1013 05:55:57.951451 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:58.052042 kubelet[2337]: E1013 05:55:58.052010 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:58.152578 kubelet[2337]: E1013 05:55:58.152545 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:58.227907 kubelet[2337]: E1013 05:55:58.227841 2337 kubelet.go:3190] "No need to create a mirror pod, since failed 
to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:58.227965 kubelet[2337]: E1013 05:55:58.227954 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:58.227988 kubelet[2337]: E1013 05:55:58.227980 2337 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:58.228131 kubelet[2337]: E1013 05:55:58.228103 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:58.253203 kubelet[2337]: E1013 05:55:58.253172 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:58.355352 kubelet[2337]: E1013 05:55:58.354000 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:58.454620 kubelet[2337]: E1013 05:55:58.454572 2337 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:58.492815 kubelet[2337]: I1013 05:55:58.492780 2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:55:58.498042 kubelet[2337]: I1013 05:55:58.498011 2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:58.501141 kubelet[2337]: I1013 05:55:58.501119 2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:55:59.165936 systemd[1]: Reload requested from client PID 2616 ('systemctl') (unit session-7.scope)... Oct 13 05:55:59.165951 systemd[1]: Reloading... 
Oct 13 05:55:59.183377 kubelet[2337]: I1013 05:55:59.183321 2337 apiserver.go:52] "Watching apiserver" Oct 13 05:55:59.185626 kubelet[2337]: E1013 05:55:59.185588 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:59.192505 kubelet[2337]: I1013 05:55:59.192481 2337 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:55:59.228164 kubelet[2337]: E1013 05:55:59.228138 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:59.228273 kubelet[2337]: I1013 05:55:59.228250 2337 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:55:59.232294 kubelet[2337]: E1013 05:55:59.232252 2337 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:55:59.233107 kubelet[2337]: E1013 05:55:59.233078 2337 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:55:59.251356 zram_generator::config[2662]: No configuration found. Oct 13 05:55:59.472504 systemd[1]: Reloading finished in 306 ms. Oct 13 05:55:59.494603 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:55:59.515541 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:55:59.515854 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:59.515910 systemd[1]: kubelet.service: Consumed 850ms CPU time, 134M memory peak. Oct 13 05:55:59.517640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 13 05:55:59.727167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:59.734700 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:55:59.776346 kubelet[2704]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:55:59.776346 kubelet[2704]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:55:59.776346 kubelet[2704]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:55:59.776731 kubelet[2704]: I1013 05:55:59.776446 2704 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:55:59.783160 kubelet[2704]: I1013 05:55:59.783120 2704 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 13 05:55:59.783160 kubelet[2704]: I1013 05:55:59.783146 2704 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:55:59.783427 kubelet[2704]: I1013 05:55:59.783404 2704 server.go:954] "Client rotation is on, will bootstrap in background" Oct 13 05:55:59.784563 kubelet[2704]: I1013 05:55:59.784534 2704 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 13 05:55:59.787929 kubelet[2704]: I1013 05:55:59.787899 2704 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:55:59.793467 kubelet[2704]: I1013 05:55:59.793441 2704 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:55:59.797852 kubelet[2704]: I1013 05:55:59.797824 2704 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 05:55:59.798058 kubelet[2704]: I1013 05:55:59.798019 2704 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:55:59.798199 kubelet[2704]: I1013 05:55:59.798045 2704 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:55:59.798199 kubelet[2704]: I1013 05:55:59.798197 2704 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:55:59.798302 kubelet[2704]: I1013 05:55:59.798205 2704 container_manager_linux.go:304] "Creating device plugin manager" Oct 13 05:55:59.798302 kubelet[2704]: I1013 05:55:59.798248 2704 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:55:59.798434 kubelet[2704]: I1013 05:55:59.798410 2704 kubelet.go:446] "Attempting to sync node with API server" Oct 13 05:55:59.798469 kubelet[2704]: I1013 05:55:59.798449 2704 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:55:59.798493 kubelet[2704]: I1013 05:55:59.798469 2704 kubelet.go:352] "Adding apiserver pod source" Oct 13 05:55:59.798493 kubelet[2704]: I1013 05:55:59.798479 2704 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:55:59.799738 kubelet[2704]: I1013 05:55:59.799709 2704 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:55:59.800057 kubelet[2704]: I1013 05:55:59.800034 2704 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 05:55:59.801899 kubelet[2704]: I1013 05:55:59.800426 2704 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:55:59.801899 kubelet[2704]: I1013 05:55:59.800456 2704 server.go:1287] "Started kubelet" Oct 13 05:55:59.801899 kubelet[2704]: I1013 05:55:59.800753 2704 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:55:59.801899 kubelet[2704]: I1013 05:55:59.801583 
2704 server.go:479] "Adding debug handlers to kubelet server" Oct 13 05:55:59.801899 kubelet[2704]: I1013 05:55:59.801767 2704 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:55:59.804837 kubelet[2704]: I1013 05:55:59.804782 2704 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:55:59.805003 kubelet[2704]: I1013 05:55:59.804980 2704 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:55:59.805197 kubelet[2704]: E1013 05:55:59.805175 2704 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:59.805237 kubelet[2704]: I1013 05:55:59.805214 2704 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:55:59.808677 kubelet[2704]: I1013 05:55:59.807694 2704 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:55:59.811966 kubelet[2704]: I1013 05:55:59.811796 2704 factory.go:221] Registration of the systemd container factory successfully Oct 13 05:55:59.812136 kubelet[2704]: I1013 05:55:59.812118 2704 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:55:59.813731 kubelet[2704]: I1013 05:55:59.813664 2704 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:55:59.814173 kubelet[2704]: I1013 05:55:59.814161 2704 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:55:59.814235 kubelet[2704]: I1013 05:55:59.814189 2704 factory.go:221] Registration of the containerd container factory successfully Oct 13 05:55:59.818521 kubelet[2704]: E1013 05:55:59.818491 2704 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:55:59.821603 kubelet[2704]: I1013 05:55:59.821568 2704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 05:55:59.823119 kubelet[2704]: I1013 05:55:59.822732 2704 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 13 05:55:59.823119 kubelet[2704]: I1013 05:55:59.822769 2704 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 13 05:55:59.823119 kubelet[2704]: I1013 05:55:59.822789 2704 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:55:59.823119 kubelet[2704]: I1013 05:55:59.822797 2704 kubelet.go:2382] "Starting kubelet main sync loop" Oct 13 05:55:59.823119 kubelet[2704]: E1013 05:55:59.822843 2704 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:55:59.847692 kubelet[2704]: I1013 05:55:59.847656 2704 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:55:59.847692 kubelet[2704]: I1013 05:55:59.847680 2704 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:55:59.847692 kubelet[2704]: I1013 05:55:59.847698 2704 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:55:59.847856 kubelet[2704]: I1013 05:55:59.847814 2704 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:55:59.847856 kubelet[2704]: I1013 05:55:59.847823 2704 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:55:59.847856 kubelet[2704]: I1013 05:55:59.847840 2704 policy_none.go:49] "None policy: Start" Oct 13 05:55:59.847856 kubelet[2704]: I1013 05:55:59.847850 2704 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:55:59.847856 kubelet[2704]: I1013 05:55:59.847859 2704 state_mem.go:35] "Initializing new in-memory state 
store" Oct 13 05:55:59.847955 kubelet[2704]: I1013 05:55:59.847947 2704 state_mem.go:75] "Updated machine memory state" Oct 13 05:55:59.854734 kubelet[2704]: I1013 05:55:59.854473 2704 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 05:55:59.854734 kubelet[2704]: I1013 05:55:59.854625 2704 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:55:59.854734 kubelet[2704]: I1013 05:55:59.854634 2704 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:55:59.854907 kubelet[2704]: I1013 05:55:59.854881 2704 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:55:59.856110 kubelet[2704]: E1013 05:55:59.856079 2704 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:55:59.923851 kubelet[2704]: I1013 05:55:59.923819 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:55:59.924061 kubelet[2704]: I1013 05:55:59.923893 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:55:59.924061 kubelet[2704]: I1013 05:55:59.923931 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:59.929316 kubelet[2704]: E1013 05:55:59.929275 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:55:59.929583 kubelet[2704]: E1013 05:55:59.929552 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:59.929583 kubelet[2704]: E1013 05:55:59.929553 2704 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:55:59.959278 kubelet[2704]: I1013 05:55:59.959254 2704 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:55:59.965171 kubelet[2704]: I1013 05:55:59.965135 2704 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:55:59.965258 kubelet[2704]: I1013 05:55:59.965202 2704 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:56:00.114741 kubelet[2704]: I1013 05:56:00.114676 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/671522930a2e815a2f4ca1f8705b7e45-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"671522930a2e815a2f4ca1f8705b7e45\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:00.114741 kubelet[2704]: I1013 05:56:00.114714 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/671522930a2e815a2f4ca1f8705b7e45-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"671522930a2e815a2f4ca1f8705b7e45\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:00.114874 kubelet[2704]: I1013 05:56:00.114753 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:00.114874 kubelet[2704]: I1013 05:56:00.114774 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:00.114874 kubelet[2704]: I1013 05:56:00.114791 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:00.114874 kubelet[2704]: I1013 05:56:00.114808 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:56:00.114874 kubelet[2704]: I1013 05:56:00.114823 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/671522930a2e815a2f4ca1f8705b7e45-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"671522930a2e815a2f4ca1f8705b7e45\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:00.114996 kubelet[2704]: I1013 05:56:00.114837 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:00.114996 kubelet[2704]: I1013 05:56:00.114851 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:00.229809 kubelet[2704]: E1013 05:56:00.229776 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:00.229923 kubelet[2704]: E1013 05:56:00.229839 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:00.229923 kubelet[2704]: E1013 05:56:00.229855 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:00.799763 kubelet[2704]: I1013 05:56:00.799720 2704 apiserver.go:52] "Watching apiserver" Oct 13 05:56:00.814720 kubelet[2704]: I1013 05:56:00.814693 2704 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:56:00.835348 kubelet[2704]: E1013 05:56:00.835224 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:00.835348 kubelet[2704]: I1013 05:56:00.835242 2704 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:00.835348 kubelet[2704]: E1013 05:56:00.835253 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:00.839757 kubelet[2704]: E1013 05:56:00.839726 2704 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:00.839943 kubelet[2704]: 
E1013 05:56:00.839836 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:00.857187 kubelet[2704]: I1013 05:56:00.857132 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.857112367 podStartE2EDuration="2.857112367s" podCreationTimestamp="2025-10-13 05:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:00.851499382 +0000 UTC m=+1.113002811" watchObservedRunningTime="2025-10-13 05:56:00.857112367 +0000 UTC m=+1.118615796" Oct 13 05:56:00.862711 kubelet[2704]: I1013 05:56:00.862665 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.862650801 podStartE2EDuration="2.862650801s" podCreationTimestamp="2025-10-13 05:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:00.857240086 +0000 UTC m=+1.118743515" watchObservedRunningTime="2025-10-13 05:56:00.862650801 +0000 UTC m=+1.124154230" Oct 13 05:56:01.836574 kubelet[2704]: E1013 05:56:01.836543 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:01.836984 kubelet[2704]: E1013 05:56:01.836610 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:02.837871 kubelet[2704]: E1013 05:56:02.837815 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:03.447503 kubelet[2704]: E1013 05:56:03.447477 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:04.454498 kubelet[2704]: I1013 05:56:04.454464 2704 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:56:04.454915 containerd[1563]: time="2025-10-13T05:56:04.454777007Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:56:04.455144 kubelet[2704]: I1013 05:56:04.455083 2704 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:56:04.702004 kubelet[2704]: I1013 05:56:04.701904 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.701860378 podStartE2EDuration="6.701860378s" podCreationTimestamp="2025-10-13 05:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:00.862758714 +0000 UTC m=+1.124262143" watchObservedRunningTime="2025-10-13 05:56:04.701860378 +0000 UTC m=+4.963363807" Oct 13 05:56:04.709645 systemd[1]: Created slice kubepods-besteffort-pod53d2aa5f_818d_440b_9845_240d9f34055f.slice - libcontainer container kubepods-besteffort-pod53d2aa5f_818d_440b_9845_240d9f34055f.slice. 
Oct 13 05:56:04.744495 kubelet[2704]: I1013 05:56:04.744461 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53d2aa5f-818d-440b-9845-240d9f34055f-xtables-lock\") pod \"kube-proxy-764qt\" (UID: \"53d2aa5f-818d-440b-9845-240d9f34055f\") " pod="kube-system/kube-proxy-764qt" Oct 13 05:56:04.744495 kubelet[2704]: I1013 05:56:04.744493 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53d2aa5f-818d-440b-9845-240d9f34055f-lib-modules\") pod \"kube-proxy-764qt\" (UID: \"53d2aa5f-818d-440b-9845-240d9f34055f\") " pod="kube-system/kube-proxy-764qt" Oct 13 05:56:04.744662 kubelet[2704]: I1013 05:56:04.744510 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53d2aa5f-818d-440b-9845-240d9f34055f-kube-proxy\") pod \"kube-proxy-764qt\" (UID: \"53d2aa5f-818d-440b-9845-240d9f34055f\") " pod="kube-system/kube-proxy-764qt" Oct 13 05:56:04.744662 kubelet[2704]: I1013 05:56:04.744526 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lktv\" (UniqueName: \"kubernetes.io/projected/53d2aa5f-818d-440b-9845-240d9f34055f-kube-api-access-4lktv\") pod \"kube-proxy-764qt\" (UID: \"53d2aa5f-818d-440b-9845-240d9f34055f\") " pod="kube-system/kube-proxy-764qt" Oct 13 05:56:05.019539 kubelet[2704]: E1013 05:56:05.019484 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:05.020019 containerd[1563]: time="2025-10-13T05:56:05.019986166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-764qt,Uid:53d2aa5f-818d-440b-9845-240d9f34055f,Namespace:kube-system,Attempt:0,}" Oct 
13 05:56:05.044401 containerd[1563]: time="2025-10-13T05:56:05.044362512Z" level=info msg="connecting to shim 4ad34fb468bb75f517ef62365d1096b73b92bbd984921c544101814683b91aa3" address="unix:///run/containerd/s/c512689b917020d9f72302a111bdbe35305cbe3bbbe54e1822c0cafdd2910cba" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:05.076470 systemd[1]: Started cri-containerd-4ad34fb468bb75f517ef62365d1096b73b92bbd984921c544101814683b91aa3.scope - libcontainer container 4ad34fb468bb75f517ef62365d1096b73b92bbd984921c544101814683b91aa3. Oct 13 05:56:05.106928 containerd[1563]: time="2025-10-13T05:56:05.106860677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-764qt,Uid:53d2aa5f-818d-440b-9845-240d9f34055f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ad34fb468bb75f517ef62365d1096b73b92bbd984921c544101814683b91aa3\"" Oct 13 05:56:05.108106 kubelet[2704]: E1013 05:56:05.108073 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:05.110388 containerd[1563]: time="2025-10-13T05:56:05.110357023Z" level=info msg="CreateContainer within sandbox \"4ad34fb468bb75f517ef62365d1096b73b92bbd984921c544101814683b91aa3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:56:05.122016 containerd[1563]: time="2025-10-13T05:56:05.121969704Z" level=info msg="Container c2eccf92e9c98de4e52b2297dcfb1781d5b071542a9843c20fd3e2a4bd80b5f5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:05.131448 containerd[1563]: time="2025-10-13T05:56:05.131408482Z" level=info msg="CreateContainer within sandbox \"4ad34fb468bb75f517ef62365d1096b73b92bbd984921c544101814683b91aa3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c2eccf92e9c98de4e52b2297dcfb1781d5b071542a9843c20fd3e2a4bd80b5f5\"" Oct 13 05:56:05.133348 containerd[1563]: time="2025-10-13T05:56:05.131851094Z" level=info 
msg="StartContainer for \"c2eccf92e9c98de4e52b2297dcfb1781d5b071542a9843c20fd3e2a4bd80b5f5\"" Oct 13 05:56:05.133348 containerd[1563]: time="2025-10-13T05:56:05.133066952Z" level=info msg="connecting to shim c2eccf92e9c98de4e52b2297dcfb1781d5b071542a9843c20fd3e2a4bd80b5f5" address="unix:///run/containerd/s/c512689b917020d9f72302a111bdbe35305cbe3bbbe54e1822c0cafdd2910cba" protocol=ttrpc version=3 Oct 13 05:56:05.157585 systemd[1]: Started cri-containerd-c2eccf92e9c98de4e52b2297dcfb1781d5b071542a9843c20fd3e2a4bd80b5f5.scope - libcontainer container c2eccf92e9c98de4e52b2297dcfb1781d5b071542a9843c20fd3e2a4bd80b5f5. Oct 13 05:56:05.197063 containerd[1563]: time="2025-10-13T05:56:05.197021199Z" level=info msg="StartContainer for \"c2eccf92e9c98de4e52b2297dcfb1781d5b071542a9843c20fd3e2a4bd80b5f5\" returns successfully" Oct 13 05:56:05.585907 systemd[1]: Created slice kubepods-besteffort-pod15e3c714_ad87_46aa_95c6_52080bf50706.slice - libcontainer container kubepods-besteffort-pod15e3c714_ad87_46aa_95c6_52080bf50706.slice. 
Oct 13 05:56:05.649648 kubelet[2704]: I1013 05:56:05.649619 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15e3c714-ad87-46aa-95c6-52080bf50706-var-lib-calico\") pod \"tigera-operator-755d956888-j2r89\" (UID: \"15e3c714-ad87-46aa-95c6-52080bf50706\") " pod="tigera-operator/tigera-operator-755d956888-j2r89" Oct 13 05:56:05.649976 kubelet[2704]: I1013 05:56:05.649656 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkwfl\" (UniqueName: \"kubernetes.io/projected/15e3c714-ad87-46aa-95c6-52080bf50706-kube-api-access-hkwfl\") pod \"tigera-operator-755d956888-j2r89\" (UID: \"15e3c714-ad87-46aa-95c6-52080bf50706\") " pod="tigera-operator/tigera-operator-755d956888-j2r89" Oct 13 05:56:05.843081 kubelet[2704]: E1013 05:56:05.842975 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:05.850356 kubelet[2704]: I1013 05:56:05.850187 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-764qt" podStartSLOduration=1.8501688299999999 podStartE2EDuration="1.85016883s" podCreationTimestamp="2025-10-13 05:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:05.849973936 +0000 UTC m=+6.111477365" watchObservedRunningTime="2025-10-13 05:56:05.85016883 +0000 UTC m=+6.111672260" Oct 13 05:56:05.889163 containerd[1563]: time="2025-10-13T05:56:05.889120506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-j2r89,Uid:15e3c714-ad87-46aa-95c6-52080bf50706,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:56:05.910138 containerd[1563]: time="2025-10-13T05:56:05.910099466Z" level=info 
msg="connecting to shim e93bfdb8d0075c3dcac17dba5ee8cd40dcc42b6d3a6e74f9ddcf920e6126c9ce" address="unix:///run/containerd/s/0c407ae1ac22e1a830a993dc6abb5f9a2c06c190dca2987fb296b874bf9bcee0" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:05.931451 systemd[1]: Started cri-containerd-e93bfdb8d0075c3dcac17dba5ee8cd40dcc42b6d3a6e74f9ddcf920e6126c9ce.scope - libcontainer container e93bfdb8d0075c3dcac17dba5ee8cd40dcc42b6d3a6e74f9ddcf920e6126c9ce. Oct 13 05:56:05.970085 containerd[1563]: time="2025-10-13T05:56:05.970051734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-j2r89,Uid:15e3c714-ad87-46aa-95c6-52080bf50706,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e93bfdb8d0075c3dcac17dba5ee8cd40dcc42b6d3a6e74f9ddcf920e6126c9ce\"" Oct 13 05:56:05.971660 containerd[1563]: time="2025-10-13T05:56:05.971623186Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:56:07.361835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873155024.mount: Deactivated successfully. 
Oct 13 05:56:07.680067 containerd[1563]: time="2025-10-13T05:56:07.679954044Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:07.680745 containerd[1563]: time="2025-10-13T05:56:07.680713421Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:56:07.681836 containerd[1563]: time="2025-10-13T05:56:07.681792610Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:07.683801 kubelet[2704]: E1013 05:56:07.683755 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:07.685111 containerd[1563]: time="2025-10-13T05:56:07.683910482Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:07.685111 containerd[1563]: time="2025-10-13T05:56:07.684268959Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.712595676s" Oct 13 05:56:07.685111 containerd[1563]: time="2025-10-13T05:56:07.684311070Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:56:07.686945 containerd[1563]: time="2025-10-13T05:56:07.686864107Z" level=info msg="CreateContainer within sandbox 
\"e93bfdb8d0075c3dcac17dba5ee8cd40dcc42b6d3a6e74f9ddcf920e6126c9ce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:56:07.695844 containerd[1563]: time="2025-10-13T05:56:07.695790417Z" level=info msg="Container 510cf7713bdb4a8f67011661e06119098fc7dbf723f761017ece7e9713e39db0: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:07.699527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033726749.mount: Deactivated successfully. Oct 13 05:56:07.702751 containerd[1563]: time="2025-10-13T05:56:07.702713704Z" level=info msg="CreateContainer within sandbox \"e93bfdb8d0075c3dcac17dba5ee8cd40dcc42b6d3a6e74f9ddcf920e6126c9ce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"510cf7713bdb4a8f67011661e06119098fc7dbf723f761017ece7e9713e39db0\"" Oct 13 05:56:07.703137 containerd[1563]: time="2025-10-13T05:56:07.703110545Z" level=info msg="StartContainer for \"510cf7713bdb4a8f67011661e06119098fc7dbf723f761017ece7e9713e39db0\"" Oct 13 05:56:07.703887 containerd[1563]: time="2025-10-13T05:56:07.703851066Z" level=info msg="connecting to shim 510cf7713bdb4a8f67011661e06119098fc7dbf723f761017ece7e9713e39db0" address="unix:///run/containerd/s/0c407ae1ac22e1a830a993dc6abb5f9a2c06c190dca2987fb296b874bf9bcee0" protocol=ttrpc version=3 Oct 13 05:56:07.757476 systemd[1]: Started cri-containerd-510cf7713bdb4a8f67011661e06119098fc7dbf723f761017ece7e9713e39db0.scope - libcontainer container 510cf7713bdb4a8f67011661e06119098fc7dbf723f761017ece7e9713e39db0. 
Oct 13 05:56:07.785759 containerd[1563]: time="2025-10-13T05:56:07.785727767Z" level=info msg="StartContainer for \"510cf7713bdb4a8f67011661e06119098fc7dbf723f761017ece7e9713e39db0\" returns successfully" Oct 13 05:56:07.847146 kubelet[2704]: E1013 05:56:07.847081 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:07.861150 kubelet[2704]: I1013 05:56:07.861087 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-j2r89" podStartSLOduration=1.1466741950000001 podStartE2EDuration="2.861072161s" podCreationTimestamp="2025-10-13 05:56:05 +0000 UTC" firstStartedPulling="2025-10-13 05:56:05.971152471 +0000 UTC m=+6.232655890" lastFinishedPulling="2025-10-13 05:56:07.685550437 +0000 UTC m=+7.947053856" observedRunningTime="2025-10-13 05:56:07.860875683 +0000 UTC m=+8.122379112" watchObservedRunningTime="2025-10-13 05:56:07.861072161 +0000 UTC m=+8.122575590" Oct 13 05:56:11.876137 kubelet[2704]: E1013 05:56:11.876101 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:12.790165 sudo[1778]: pam_unix(sudo:session): session closed for user root Oct 13 05:56:12.792114 sshd[1777]: Connection closed by 10.0.0.1 port 45964 Oct 13 05:56:12.792826 sshd-session[1774]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:12.855387 kubelet[2704]: E1013 05:56:12.855352 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:12.963525 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:45964.service: Deactivated successfully. Oct 13 05:56:12.968318 systemd[1]: session-7.scope: Deactivated successfully. 
Oct 13 05:56:12.968588 systemd[1]: session-7.scope: Consumed 4.118s CPU time, 227.8M memory peak. Oct 13 05:56:12.971665 systemd-logind[1545]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:56:12.974420 systemd-logind[1545]: Removed session 7. Oct 13 05:56:13.455254 kubelet[2704]: E1013 05:56:13.455219 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:15.182912 systemd[1]: Created slice kubepods-besteffort-podba72624d_0f98_4fe2_b1f2_2bbf4a555059.slice - libcontainer container kubepods-besteffort-podba72624d_0f98_4fe2_b1f2_2bbf4a555059.slice. Oct 13 05:56:15.217461 kubelet[2704]: I1013 05:56:15.217417 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ba72624d-0f98-4fe2-b1f2-2bbf4a555059-typha-certs\") pod \"calico-typha-54db5cd7d4-s2htf\" (UID: \"ba72624d-0f98-4fe2-b1f2-2bbf4a555059\") " pod="calico-system/calico-typha-54db5cd7d4-s2htf" Oct 13 05:56:15.217461 kubelet[2704]: I1013 05:56:15.217458 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba72624d-0f98-4fe2-b1f2-2bbf4a555059-tigera-ca-bundle\") pod \"calico-typha-54db5cd7d4-s2htf\" (UID: \"ba72624d-0f98-4fe2-b1f2-2bbf4a555059\") " pod="calico-system/calico-typha-54db5cd7d4-s2htf" Oct 13 05:56:15.217894 kubelet[2704]: I1013 05:56:15.217480 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j6g2\" (UniqueName: \"kubernetes.io/projected/ba72624d-0f98-4fe2-b1f2-2bbf4a555059-kube-api-access-2j6g2\") pod \"calico-typha-54db5cd7d4-s2htf\" (UID: \"ba72624d-0f98-4fe2-b1f2-2bbf4a555059\") " pod="calico-system/calico-typha-54db5cd7d4-s2htf" Oct 13 05:56:15.489135 kubelet[2704]: E1013 
05:56:15.489099 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:15.489652 containerd[1563]: time="2025-10-13T05:56:15.489601539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54db5cd7d4-s2htf,Uid:ba72624d-0f98-4fe2-b1f2-2bbf4a555059,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:15.593931 systemd[1]: Created slice kubepods-besteffort-pod064d39dd_410c_47a6_9b3b_a4dc75b05f97.slice - libcontainer container kubepods-besteffort-pod064d39dd_410c_47a6_9b3b_a4dc75b05f97.slice. Oct 13 05:56:15.607391 containerd[1563]: time="2025-10-13T05:56:15.607238800Z" level=info msg="connecting to shim 175cda1ce93b81ca08bf33b27aa4d8ad06a1a233d2e392190b4b8091986f0726" address="unix:///run/containerd/s/364ed76887e2ab4ef28da55160937661a13465e2ccd275881a64de027db392ce" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:15.619178 kubelet[2704]: I1013 05:56:15.619142 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/064d39dd-410c-47a6-9b3b-a4dc75b05f97-cni-bin-dir\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619244 kubelet[2704]: I1013 05:56:15.619181 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/064d39dd-410c-47a6-9b3b-a4dc75b05f97-node-certs\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619244 kubelet[2704]: I1013 05:56:15.619200 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/064d39dd-410c-47a6-9b3b-a4dc75b05f97-var-run-calico\") 
pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619244 kubelet[2704]: I1013 05:56:15.619216 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csjrh\" (UniqueName: \"kubernetes.io/projected/064d39dd-410c-47a6-9b3b-a4dc75b05f97-kube-api-access-csjrh\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619244 kubelet[2704]: I1013 05:56:15.619235 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/064d39dd-410c-47a6-9b3b-a4dc75b05f97-flexvol-driver-host\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619371 kubelet[2704]: I1013 05:56:15.619250 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/064d39dd-410c-47a6-9b3b-a4dc75b05f97-cni-log-dir\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619371 kubelet[2704]: I1013 05:56:15.619267 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/064d39dd-410c-47a6-9b3b-a4dc75b05f97-policysync\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619371 kubelet[2704]: I1013 05:56:15.619282 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/064d39dd-410c-47a6-9b3b-a4dc75b05f97-tigera-ca-bundle\") pod \"calico-node-hrjll\" (UID: 
\"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619371 kubelet[2704]: I1013 05:56:15.619299 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/064d39dd-410c-47a6-9b3b-a4dc75b05f97-cni-net-dir\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619371 kubelet[2704]: I1013 05:56:15.619313 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/064d39dd-410c-47a6-9b3b-a4dc75b05f97-lib-modules\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619485 kubelet[2704]: I1013 05:56:15.619428 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/064d39dd-410c-47a6-9b3b-a4dc75b05f97-var-lib-calico\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.619560 kubelet[2704]: I1013 05:56:15.619541 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/064d39dd-410c-47a6-9b3b-a4dc75b05f97-xtables-lock\") pod \"calico-node-hrjll\" (UID: \"064d39dd-410c-47a6-9b3b-a4dc75b05f97\") " pod="calico-system/calico-node-hrjll" Oct 13 05:56:15.638460 systemd[1]: Started cri-containerd-175cda1ce93b81ca08bf33b27aa4d8ad06a1a233d2e392190b4b8091986f0726.scope - libcontainer container 175cda1ce93b81ca08bf33b27aa4d8ad06a1a233d2e392190b4b8091986f0726. 
Oct 13 05:56:15.703146 containerd[1563]: time="2025-10-13T05:56:15.703096767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54db5cd7d4-s2htf,Uid:ba72624d-0f98-4fe2-b1f2-2bbf4a555059,Namespace:calico-system,Attempt:0,} returns sandbox id \"175cda1ce93b81ca08bf33b27aa4d8ad06a1a233d2e392190b4b8091986f0726\"" Oct 13 05:56:15.703916 kubelet[2704]: E1013 05:56:15.703892 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:15.704986 containerd[1563]: time="2025-10-13T05:56:15.704580277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:56:15.722407 kubelet[2704]: E1013 05:56:15.722357 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.722407 kubelet[2704]: W1013 05:56:15.722378 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.722407 kubelet[2704]: E1013 05:56:15.722415 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.726348 kubelet[2704]: E1013 05:56:15.725776 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.726348 kubelet[2704]: W1013 05:56:15.725794 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.726348 kubelet[2704]: E1013 05:56:15.725810 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.728154 kubelet[2704]: E1013 05:56:15.728126 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.728154 kubelet[2704]: W1013 05:56:15.728140 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.728154 kubelet[2704]: E1013 05:56:15.728150 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.761277 kubelet[2704]: E1013 05:56:15.761096 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q48s" podUID="5abc9329-79bf-4376-a56b-5b3a9919ac87" Oct 13 05:56:15.804112 kubelet[2704]: E1013 05:56:15.804083 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.804112 kubelet[2704]: W1013 05:56:15.804103 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.804112 kubelet[2704]: E1013 05:56:15.804123 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.804300 kubelet[2704]: E1013 05:56:15.804286 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.804300 kubelet[2704]: W1013 05:56:15.804295 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.804401 kubelet[2704]: E1013 05:56:15.804302 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.804494 kubelet[2704]: E1013 05:56:15.804478 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.804494 kubelet[2704]: W1013 05:56:15.804488 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.804543 kubelet[2704]: E1013 05:56:15.804495 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.804720 kubelet[2704]: E1013 05:56:15.804698 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.804720 kubelet[2704]: W1013 05:56:15.804709 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.804720 kubelet[2704]: E1013 05:56:15.804716 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.804882 kubelet[2704]: E1013 05:56:15.804868 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.804882 kubelet[2704]: W1013 05:56:15.804877 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.804929 kubelet[2704]: E1013 05:56:15.804885 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.805042 kubelet[2704]: E1013 05:56:15.805028 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.805042 kubelet[2704]: W1013 05:56:15.805037 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.805096 kubelet[2704]: E1013 05:56:15.805044 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.805207 kubelet[2704]: E1013 05:56:15.805192 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.805207 kubelet[2704]: W1013 05:56:15.805201 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.805251 kubelet[2704]: E1013 05:56:15.805211 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.805388 kubelet[2704]: E1013 05:56:15.805371 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.805388 kubelet[2704]: W1013 05:56:15.805380 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.805388 kubelet[2704]: E1013 05:56:15.805387 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.805556 kubelet[2704]: E1013 05:56:15.805540 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.805556 kubelet[2704]: W1013 05:56:15.805550 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.805556 kubelet[2704]: E1013 05:56:15.805557 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.805742 kubelet[2704]: E1013 05:56:15.805724 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.805742 kubelet[2704]: W1013 05:56:15.805734 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.805742 kubelet[2704]: E1013 05:56:15.805741 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.805907 kubelet[2704]: E1013 05:56:15.805890 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.805907 kubelet[2704]: W1013 05:56:15.805899 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.805907 kubelet[2704]: E1013 05:56:15.805906 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.806075 kubelet[2704]: E1013 05:56:15.806055 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.806075 kubelet[2704]: W1013 05:56:15.806064 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.806075 kubelet[2704]: E1013 05:56:15.806071 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.806263 kubelet[2704]: E1013 05:56:15.806224 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.806263 kubelet[2704]: W1013 05:56:15.806235 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.806263 kubelet[2704]: E1013 05:56:15.806242 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.806482 kubelet[2704]: E1013 05:56:15.806456 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.806482 kubelet[2704]: W1013 05:56:15.806464 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.806482 kubelet[2704]: E1013 05:56:15.806473 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.806676 kubelet[2704]: E1013 05:56:15.806633 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.806676 kubelet[2704]: W1013 05:56:15.806666 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.806676 kubelet[2704]: E1013 05:56:15.806676 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.806859 kubelet[2704]: E1013 05:56:15.806844 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.806859 kubelet[2704]: W1013 05:56:15.806854 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.806930 kubelet[2704]: E1013 05:56:15.806861 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.807058 kubelet[2704]: E1013 05:56:15.807034 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.807058 kubelet[2704]: W1013 05:56:15.807044 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.807058 kubelet[2704]: E1013 05:56:15.807053 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.807236 kubelet[2704]: E1013 05:56:15.807218 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.807236 kubelet[2704]: W1013 05:56:15.807229 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.807293 kubelet[2704]: E1013 05:56:15.807236 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.807518 kubelet[2704]: E1013 05:56:15.807483 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.807518 kubelet[2704]: W1013 05:56:15.807508 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.807677 kubelet[2704]: E1013 05:56:15.807535 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.807773 kubelet[2704]: E1013 05:56:15.807759 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.807773 kubelet[2704]: W1013 05:56:15.807769 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.807838 kubelet[2704]: E1013 05:56:15.807780 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.821639 kubelet[2704]: E1013 05:56:15.821617 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.821639 kubelet[2704]: W1013 05:56:15.821630 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.821639 kubelet[2704]: E1013 05:56:15.821648 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.821738 kubelet[2704]: I1013 05:56:15.821680 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5abc9329-79bf-4376-a56b-5b3a9919ac87-kubelet-dir\") pod \"csi-node-driver-6q48s\" (UID: \"5abc9329-79bf-4376-a56b-5b3a9919ac87\") " pod="calico-system/csi-node-driver-6q48s" Oct 13 05:56:15.821890 kubelet[2704]: E1013 05:56:15.821861 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.821890 kubelet[2704]: W1013 05:56:15.821873 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.821890 kubelet[2704]: E1013 05:56:15.821886 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.821890 kubelet[2704]: I1013 05:56:15.821899 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4vq\" (UniqueName: \"kubernetes.io/projected/5abc9329-79bf-4376-a56b-5b3a9919ac87-kube-api-access-7s4vq\") pod \"csi-node-driver-6q48s\" (UID: \"5abc9329-79bf-4376-a56b-5b3a9919ac87\") " pod="calico-system/csi-node-driver-6q48s" Oct 13 05:56:15.822175 kubelet[2704]: E1013 05:56:15.822150 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.822175 kubelet[2704]: W1013 05:56:15.822170 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.822230 kubelet[2704]: E1013 05:56:15.822194 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.822436 kubelet[2704]: E1013 05:56:15.822392 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.822436 kubelet[2704]: W1013 05:56:15.822412 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.822436 kubelet[2704]: E1013 05:56:15.822425 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.822649 kubelet[2704]: E1013 05:56:15.822609 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.822649 kubelet[2704]: W1013 05:56:15.822625 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.822649 kubelet[2704]: E1013 05:56:15.822649 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.822735 kubelet[2704]: I1013 05:56:15.822679 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5abc9329-79bf-4376-a56b-5b3a9919ac87-socket-dir\") pod \"csi-node-driver-6q48s\" (UID: \"5abc9329-79bf-4376-a56b-5b3a9919ac87\") " pod="calico-system/csi-node-driver-6q48s" Oct 13 05:56:15.822876 kubelet[2704]: E1013 05:56:15.822858 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.822876 kubelet[2704]: W1013 05:56:15.822868 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.822925 kubelet[2704]: E1013 05:56:15.822882 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.823063 kubelet[2704]: E1013 05:56:15.823048 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.823063 kubelet[2704]: W1013 05:56:15.823057 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.823114 kubelet[2704]: E1013 05:56:15.823069 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.823267 kubelet[2704]: E1013 05:56:15.823248 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.823267 kubelet[2704]: W1013 05:56:15.823261 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.823315 kubelet[2704]: E1013 05:56:15.823275 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.823315 kubelet[2704]: I1013 05:56:15.823296 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5abc9329-79bf-4376-a56b-5b3a9919ac87-registration-dir\") pod \"csi-node-driver-6q48s\" (UID: \"5abc9329-79bf-4376-a56b-5b3a9919ac87\") " pod="calico-system/csi-node-driver-6q48s" Oct 13 05:56:15.823547 kubelet[2704]: E1013 05:56:15.823522 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.823547 kubelet[2704]: W1013 05:56:15.823542 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.823629 kubelet[2704]: E1013 05:56:15.823564 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.823629 kubelet[2704]: I1013 05:56:15.823591 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5abc9329-79bf-4376-a56b-5b3a9919ac87-varrun\") pod \"csi-node-driver-6q48s\" (UID: \"5abc9329-79bf-4376-a56b-5b3a9919ac87\") " pod="calico-system/csi-node-driver-6q48s" Oct 13 05:56:15.823861 kubelet[2704]: E1013 05:56:15.823841 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.823861 kubelet[2704]: W1013 05:56:15.823857 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.824005 kubelet[2704]: E1013 05:56:15.823988 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.824062 kubelet[2704]: E1013 05:56:15.824049 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.824062 kubelet[2704]: W1013 05:56:15.824058 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.824126 kubelet[2704]: E1013 05:56:15.824112 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.824523 kubelet[2704]: E1013 05:56:15.824255 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.824523 kubelet[2704]: W1013 05:56:15.824267 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.824523 kubelet[2704]: E1013 05:56:15.824276 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.824619 kubelet[2704]: E1013 05:56:15.824582 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.824619 kubelet[2704]: W1013 05:56:15.824592 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.824682 kubelet[2704]: E1013 05:56:15.824627 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.824870 kubelet[2704]: E1013 05:56:15.824853 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.824870 kubelet[2704]: W1013 05:56:15.824865 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.824930 kubelet[2704]: E1013 05:56:15.824901 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.825099 kubelet[2704]: E1013 05:56:15.825083 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.825099 kubelet[2704]: W1013 05:56:15.825093 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.825156 kubelet[2704]: E1013 05:56:15.825103 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.898024 containerd[1563]: time="2025-10-13T05:56:15.897984030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hrjll,Uid:064d39dd-410c-47a6-9b3b-a4dc75b05f97,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:15.923484 containerd[1563]: time="2025-10-13T05:56:15.923439641Z" level=info msg="connecting to shim c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e" address="unix:///run/containerd/s/65dd7b6fcc5d0c11aef9247e7b8321b54c1f57df7780daa56e283fb8c9129257" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:15.924658 kubelet[2704]: E1013 05:56:15.924616 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.924658 kubelet[2704]: W1013 05:56:15.924637 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.924658 kubelet[2704]: E1013 05:56:15.924668 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.925051 kubelet[2704]: E1013 05:56:15.925025 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.925051 kubelet[2704]: W1013 05:56:15.925038 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.925229 kubelet[2704]: E1013 05:56:15.925067 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.925438 kubelet[2704]: E1013 05:56:15.925407 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.925472 kubelet[2704]: W1013 05:56:15.925449 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.925472 kubelet[2704]: E1013 05:56:15.925468 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.925991 kubelet[2704]: E1013 05:56:15.925970 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.925991 kubelet[2704]: W1013 05:56:15.925984 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.926070 kubelet[2704]: E1013 05:56:15.926001 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.926291 kubelet[2704]: E1013 05:56:15.926251 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.926326 kubelet[2704]: W1013 05:56:15.926291 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.926326 kubelet[2704]: E1013 05:56:15.926306 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.926764 kubelet[2704]: E1013 05:56:15.926731 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.926764 kubelet[2704]: W1013 05:56:15.926743 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.926764 kubelet[2704]: E1013 05:56:15.926756 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.927251 kubelet[2704]: E1013 05:56:15.927233 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.927251 kubelet[2704]: W1013 05:56:15.927245 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.927361 kubelet[2704]: E1013 05:56:15.927260 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.927625 kubelet[2704]: E1013 05:56:15.927610 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.927625 kubelet[2704]: W1013 05:56:15.927620 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.927882 kubelet[2704]: E1013 05:56:15.927630 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.928029 kubelet[2704]: E1013 05:56:15.928007 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.928029 kubelet[2704]: W1013 05:56:15.928023 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.928254 kubelet[2704]: E1013 05:56:15.928092 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.928254 kubelet[2704]: E1013 05:56:15.928212 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.928254 kubelet[2704]: W1013 05:56:15.928221 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.928254 kubelet[2704]: E1013 05:56:15.928254 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.929157 kubelet[2704]: E1013 05:56:15.929131 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.929157 kubelet[2704]: W1013 05:56:15.929158 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.929157 kubelet[2704]: E1013 05:56:15.929209 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.929479 kubelet[2704]: E1013 05:56:15.929406 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.929479 kubelet[2704]: W1013 05:56:15.929414 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.929479 kubelet[2704]: E1013 05:56:15.929440 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.929974 kubelet[2704]: E1013 05:56:15.929948 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.930020 kubelet[2704]: W1013 05:56:15.929975 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.930020 kubelet[2704]: E1013 05:56:15.930007 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.930589 kubelet[2704]: E1013 05:56:15.930567 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.930589 kubelet[2704]: W1013 05:56:15.930582 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.930669 kubelet[2704]: E1013 05:56:15.930596 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.930945 kubelet[2704]: E1013 05:56:15.930857 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.930945 kubelet[2704]: W1013 05:56:15.930870 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.932457 kubelet[2704]: E1013 05:56:15.932418 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.932550 kubelet[2704]: E1013 05:56:15.932528 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.932550 kubelet[2704]: W1013 05:56:15.932542 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.932623 kubelet[2704]: E1013 05:56:15.932574 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.932812 kubelet[2704]: E1013 05:56:15.932767 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.932812 kubelet[2704]: W1013 05:56:15.932780 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.932864 kubelet[2704]: E1013 05:56:15.932805 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.933090 kubelet[2704]: E1013 05:56:15.932939 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.933090 kubelet[2704]: W1013 05:56:15.932951 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.933090 kubelet[2704]: E1013 05:56:15.933010 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.933175 kubelet[2704]: E1013 05:56:15.933096 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.933175 kubelet[2704]: W1013 05:56:15.933103 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.933175 kubelet[2704]: E1013 05:56:15.933120 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.933344 kubelet[2704]: E1013 05:56:15.933300 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.933344 kubelet[2704]: W1013 05:56:15.933312 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.933344 kubelet[2704]: E1013 05:56:15.933324 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.934583 kubelet[2704]: E1013 05:56:15.934560 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.934583 kubelet[2704]: W1013 05:56:15.934575 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.934649 kubelet[2704]: E1013 05:56:15.934590 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.934830 kubelet[2704]: E1013 05:56:15.934808 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.934830 kubelet[2704]: W1013 05:56:15.934823 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.934891 kubelet[2704]: E1013 05:56:15.934846 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.935065 kubelet[2704]: E1013 05:56:15.935045 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.935065 kubelet[2704]: W1013 05:56:15.935061 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.935113 kubelet[2704]: E1013 05:56:15.935074 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.935290 kubelet[2704]: E1013 05:56:15.935271 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.935290 kubelet[2704]: W1013 05:56:15.935284 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.935429 kubelet[2704]: E1013 05:56:15.935295 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:15.935505 kubelet[2704]: E1013 05:56:15.935486 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.935505 kubelet[2704]: W1013 05:56:15.935499 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.935557 kubelet[2704]: E1013 05:56:15.935507 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.943082 kubelet[2704]: E1013 05:56:15.943056 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:15.943082 kubelet[2704]: W1013 05:56:15.943073 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:15.943082 kubelet[2704]: E1013 05:56:15.943082 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:15.968465 systemd[1]: Started cri-containerd-c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e.scope - libcontainer container c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e. 
Oct 13 05:56:15.994479 containerd[1563]: time="2025-10-13T05:56:15.994427260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hrjll,Uid:064d39dd-410c-47a6-9b3b-a4dc75b05f97,Namespace:calico-system,Attempt:0,} returns sandbox id \"c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e\"" Oct 13 05:56:17.449863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245304947.mount: Deactivated successfully. Oct 13 05:56:17.823989 kubelet[2704]: E1013 05:56:17.823955 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q48s" podUID="5abc9329-79bf-4376-a56b-5b3a9919ac87" Oct 13 05:56:18.689451 update_engine[1551]: I20251013 05:56:18.689378 1551 update_attempter.cc:509] Updating boot flags... Oct 13 05:56:19.226666 containerd[1563]: time="2025-10-13T05:56:19.226608587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:19.227389 containerd[1563]: time="2025-10-13T05:56:19.227368437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:56:19.228621 containerd[1563]: time="2025-10-13T05:56:19.228591775Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:19.230607 containerd[1563]: time="2025-10-13T05:56:19.230569903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:19.231078 containerd[1563]: time="2025-10-13T05:56:19.231047969Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.526438156s" Oct 13 05:56:19.231117 containerd[1563]: time="2025-10-13T05:56:19.231082475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:56:19.231850 containerd[1563]: time="2025-10-13T05:56:19.231753185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:56:19.240835 containerd[1563]: time="2025-10-13T05:56:19.240304638Z" level=info msg="CreateContainer within sandbox \"175cda1ce93b81ca08bf33b27aa4d8ad06a1a233d2e392190b4b8091986f0726\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:56:19.249597 containerd[1563]: time="2025-10-13T05:56:19.249555787Z" level=info msg="Container 59f285bb60a19ab9b6283f8cbb66ba7727b21907313ffd8fba139a06e1830293: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:19.258148 containerd[1563]: time="2025-10-13T05:56:19.258110917Z" level=info msg="CreateContainer within sandbox \"175cda1ce93b81ca08bf33b27aa4d8ad06a1a233d2e392190b4b8091986f0726\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"59f285bb60a19ab9b6283f8cbb66ba7727b21907313ffd8fba139a06e1830293\"" Oct 13 05:56:19.258537 containerd[1563]: time="2025-10-13T05:56:19.258488553Z" level=info msg="StartContainer for \"59f285bb60a19ab9b6283f8cbb66ba7727b21907313ffd8fba139a06e1830293\"" Oct 13 05:56:19.259497 containerd[1563]: time="2025-10-13T05:56:19.259474581Z" level=info msg="connecting to shim 59f285bb60a19ab9b6283f8cbb66ba7727b21907313ffd8fba139a06e1830293" 
address="unix:///run/containerd/s/364ed76887e2ab4ef28da55160937661a13465e2ccd275881a64de027db392ce" protocol=ttrpc version=3 Oct 13 05:56:19.280450 systemd[1]: Started cri-containerd-59f285bb60a19ab9b6283f8cbb66ba7727b21907313ffd8fba139a06e1830293.scope - libcontainer container 59f285bb60a19ab9b6283f8cbb66ba7727b21907313ffd8fba139a06e1830293. Oct 13 05:56:19.324755 containerd[1563]: time="2025-10-13T05:56:19.324681183Z" level=info msg="StartContainer for \"59f285bb60a19ab9b6283f8cbb66ba7727b21907313ffd8fba139a06e1830293\" returns successfully" Oct 13 05:56:19.823218 kubelet[2704]: E1013 05:56:19.823174 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q48s" podUID="5abc9329-79bf-4376-a56b-5b3a9919ac87" Oct 13 05:56:19.872489 kubelet[2704]: E1013 05:56:19.872456 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:19.935522 kubelet[2704]: E1013 05:56:19.935477 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.935522 kubelet[2704]: W1013 05:56:19.935497 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.935522 kubelet[2704]: E1013 05:56:19.935516 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.935747 kubelet[2704]: E1013 05:56:19.935719 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.935747 kubelet[2704]: W1013 05:56:19.935739 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.935747 kubelet[2704]: E1013 05:56:19.935749 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.935935 kubelet[2704]: E1013 05:56:19.935909 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.935935 kubelet[2704]: W1013 05:56:19.935920 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.935935 kubelet[2704]: E1013 05:56:19.935927 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.936173 kubelet[2704]: E1013 05:56:19.936150 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.936173 kubelet[2704]: W1013 05:56:19.936169 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.936250 kubelet[2704]: E1013 05:56:19.936190 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.936424 kubelet[2704]: E1013 05:56:19.936407 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.936424 kubelet[2704]: W1013 05:56:19.936418 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.936485 kubelet[2704]: E1013 05:56:19.936426 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.936639 kubelet[2704]: E1013 05:56:19.936616 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.936639 kubelet[2704]: W1013 05:56:19.936635 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.936710 kubelet[2704]: E1013 05:56:19.936647 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.936851 kubelet[2704]: E1013 05:56:19.936830 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.936851 kubelet[2704]: W1013 05:56:19.936840 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.936851 kubelet[2704]: E1013 05:56:19.936847 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.937048 kubelet[2704]: E1013 05:56:19.937025 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.937048 kubelet[2704]: W1013 05:56:19.937036 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.937048 kubelet[2704]: E1013 05:56:19.937044 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.937251 kubelet[2704]: E1013 05:56:19.937234 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.937251 kubelet[2704]: W1013 05:56:19.937246 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.937299 kubelet[2704]: E1013 05:56:19.937257 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.937437 kubelet[2704]: E1013 05:56:19.937423 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.937437 kubelet[2704]: W1013 05:56:19.937434 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.937490 kubelet[2704]: E1013 05:56:19.937442 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.937598 kubelet[2704]: E1013 05:56:19.937584 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.937598 kubelet[2704]: W1013 05:56:19.937594 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.937653 kubelet[2704]: E1013 05:56:19.937601 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.937771 kubelet[2704]: E1013 05:56:19.937755 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.937771 kubelet[2704]: W1013 05:56:19.937765 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.937820 kubelet[2704]: E1013 05:56:19.937774 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.937935 kubelet[2704]: E1013 05:56:19.937921 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.937935 kubelet[2704]: W1013 05:56:19.937930 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.938004 kubelet[2704]: E1013 05:56:19.937937 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.938176 kubelet[2704]: E1013 05:56:19.938145 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.938176 kubelet[2704]: W1013 05:56:19.938164 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.938420 kubelet[2704]: E1013 05:56:19.938401 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.938661 kubelet[2704]: E1013 05:56:19.938616 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.938661 kubelet[2704]: W1013 05:56:19.938647 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.938661 kubelet[2704]: E1013 05:56:19.938657 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.961945 kubelet[2704]: E1013 05:56:19.961915 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.961945 kubelet[2704]: W1013 05:56:19.961929 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.961945 kubelet[2704]: E1013 05:56:19.961939 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.962182 kubelet[2704]: E1013 05:56:19.962156 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.962182 kubelet[2704]: W1013 05:56:19.962170 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.962261 kubelet[2704]: E1013 05:56:19.962191 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.962432 kubelet[2704]: E1013 05:56:19.962415 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.962432 kubelet[2704]: W1013 05:56:19.962427 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.962505 kubelet[2704]: E1013 05:56:19.962441 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.962650 kubelet[2704]: E1013 05:56:19.962614 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.962650 kubelet[2704]: W1013 05:56:19.962640 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.962699 kubelet[2704]: E1013 05:56:19.962656 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.962891 kubelet[2704]: E1013 05:56:19.962866 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.962922 kubelet[2704]: W1013 05:56:19.962888 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.962922 kubelet[2704]: E1013 05:56:19.962918 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.963106 kubelet[2704]: E1013 05:56:19.963090 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.963106 kubelet[2704]: W1013 05:56:19.963101 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.963149 kubelet[2704]: E1013 05:56:19.963116 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.963292 kubelet[2704]: E1013 05:56:19.963278 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.963292 kubelet[2704]: W1013 05:56:19.963287 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.963359 kubelet[2704]: E1013 05:56:19.963299 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.963483 kubelet[2704]: E1013 05:56:19.963468 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.963483 kubelet[2704]: W1013 05:56:19.963480 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.963531 kubelet[2704]: E1013 05:56:19.963492 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.963690 kubelet[2704]: E1013 05:56:19.963675 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.963690 kubelet[2704]: W1013 05:56:19.963685 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.963745 kubelet[2704]: E1013 05:56:19.963698 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.963958 kubelet[2704]: E1013 05:56:19.963933 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.963958 kubelet[2704]: W1013 05:56:19.963947 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.964007 kubelet[2704]: E1013 05:56:19.963962 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.964132 kubelet[2704]: E1013 05:56:19.964118 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.964132 kubelet[2704]: W1013 05:56:19.964128 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.964180 kubelet[2704]: E1013 05:56:19.964155 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.964297 kubelet[2704]: E1013 05:56:19.964283 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.964297 kubelet[2704]: W1013 05:56:19.964293 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.964366 kubelet[2704]: E1013 05:56:19.964317 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.964482 kubelet[2704]: E1013 05:56:19.964468 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.964482 kubelet[2704]: W1013 05:56:19.964478 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.964532 kubelet[2704]: E1013 05:56:19.964490 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.964699 kubelet[2704]: E1013 05:56:19.964684 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.964699 kubelet[2704]: W1013 05:56:19.964694 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.964744 kubelet[2704]: E1013 05:56:19.964706 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.964963 kubelet[2704]: E1013 05:56:19.964945 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.964963 kubelet[2704]: W1013 05:56:19.964957 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.965021 kubelet[2704]: E1013 05:56:19.964971 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.965173 kubelet[2704]: E1013 05:56:19.965158 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.965173 kubelet[2704]: W1013 05:56:19.965168 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.965222 kubelet[2704]: E1013 05:56:19.965181 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:19.965443 kubelet[2704]: E1013 05:56:19.965426 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.965443 kubelet[2704]: W1013 05:56:19.965438 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.965499 kubelet[2704]: E1013 05:56:19.965447 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:19.965617 kubelet[2704]: E1013 05:56:19.965602 2704 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:19.965617 kubelet[2704]: W1013 05:56:19.965613 2704 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:19.965671 kubelet[2704]: E1013 05:56:19.965621 2704 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.764028 containerd[1563]: time="2025-10-13T05:56:20.763971863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:20.764831 containerd[1563]: time="2025-10-13T05:56:20.764809329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:56:20.765965 containerd[1563]: time="2025-10-13T05:56:20.765915783Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:20.767992 containerd[1563]: time="2025-10-13T05:56:20.767961287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:20.768508 containerd[1563]: time="2025-10-13T05:56:20.768466283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.536684826s" Oct 13 05:56:20.768508 containerd[1563]: time="2025-10-13T05:56:20.768505006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:56:20.770299 containerd[1563]: time="2025-10-13T05:56:20.770276260Z" level=info msg="CreateContainer within sandbox \"c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:56:20.779519 containerd[1563]: time="2025-10-13T05:56:20.779471542Z" level=info msg="Container dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:20.791310 containerd[1563]: time="2025-10-13T05:56:20.791274391Z" level=info msg="CreateContainer within sandbox \"c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b\"" Oct 13 05:56:20.791932 containerd[1563]: time="2025-10-13T05:56:20.791903843Z" level=info msg="StartContainer for \"dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b\"" Oct 13 05:56:20.793270 containerd[1563]: time="2025-10-13T05:56:20.793237689Z" level=info msg="connecting to shim dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b" address="unix:///run/containerd/s/65dd7b6fcc5d0c11aef9247e7b8321b54c1f57df7780daa56e283fb8c9129257" protocol=ttrpc version=3 Oct 13 05:56:20.812527 systemd[1]: Started cri-containerd-dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b.scope - libcontainer container dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b. Oct 13 05:56:20.866682 systemd[1]: cri-containerd-dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b.scope: Deactivated successfully. 
Oct 13 05:56:20.869807 containerd[1563]: time="2025-10-13T05:56:20.869766920Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b\" id:\"dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b\" pid:3415 exited_at:{seconds:1760334980 nanos:869298513}" Oct 13 05:56:20.906027 containerd[1563]: time="2025-10-13T05:56:20.905983958Z" level=info msg="received exit event container_id:\"dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b\" id:\"dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b\" pid:3415 exited_at:{seconds:1760334980 nanos:869298513}" Oct 13 05:56:20.907569 containerd[1563]: time="2025-10-13T05:56:20.907527361Z" level=info msg="StartContainer for \"dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b\" returns successfully" Oct 13 05:56:20.910360 kubelet[2704]: I1013 05:56:20.909381 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:20.910360 kubelet[2704]: E1013 05:56:20.909730 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:20.927353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbe00092ad797fe2f66be954ba24cb3e2d702e8087b2b1106ae0af4ea925fc1b-rootfs.mount: Deactivated successfully. 
Oct 13 05:56:21.824098 kubelet[2704]: E1013 05:56:21.824046 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q48s" podUID="5abc9329-79bf-4376-a56b-5b3a9919ac87" Oct 13 05:56:21.913354 containerd[1563]: time="2025-10-13T05:56:21.913292420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:56:21.923878 kubelet[2704]: I1013 05:56:21.923807 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54db5cd7d4-s2htf" podStartSLOduration=3.396537257 podStartE2EDuration="6.923793032s" podCreationTimestamp="2025-10-13 05:56:15 +0000 UTC" firstStartedPulling="2025-10-13 05:56:15.704370078 +0000 UTC m=+15.965873507" lastFinishedPulling="2025-10-13 05:56:19.231625853 +0000 UTC m=+19.493129282" observedRunningTime="2025-10-13 05:56:19.880037718 +0000 UTC m=+20.141541137" watchObservedRunningTime="2025-10-13 05:56:21.923793032 +0000 UTC m=+22.185296461" Oct 13 05:56:23.824132 kubelet[2704]: E1013 05:56:23.824069 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q48s" podUID="5abc9329-79bf-4376-a56b-5b3a9919ac87" Oct 13 05:56:25.696211 containerd[1563]: time="2025-10-13T05:56:25.696163264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:25.696947 containerd[1563]: time="2025-10-13T05:56:25.696915715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:56:25.698139 containerd[1563]: 
time="2025-10-13T05:56:25.698107105Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:25.700054 containerd[1563]: time="2025-10-13T05:56:25.700030588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:25.700615 containerd[1563]: time="2025-10-13T05:56:25.700576248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.78724297s" Oct 13 05:56:25.700615 containerd[1563]: time="2025-10-13T05:56:25.700612136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:56:25.702449 containerd[1563]: time="2025-10-13T05:56:25.702414099Z" level=info msg="CreateContainer within sandbox \"c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:56:25.715348 containerd[1563]: time="2025-10-13T05:56:25.713653453Z" level=info msg="Container bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:25.723832 containerd[1563]: time="2025-10-13T05:56:25.723790827Z" level=info msg="CreateContainer within sandbox \"c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd\"" Oct 13 
05:56:25.724208 containerd[1563]: time="2025-10-13T05:56:25.724163191Z" level=info msg="StartContainer for \"bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd\"" Oct 13 05:56:25.725448 containerd[1563]: time="2025-10-13T05:56:25.725416016Z" level=info msg="connecting to shim bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd" address="unix:///run/containerd/s/65dd7b6fcc5d0c11aef9247e7b8321b54c1f57df7780daa56e283fb8c9129257" protocol=ttrpc version=3 Oct 13 05:56:25.749456 systemd[1]: Started cri-containerd-bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd.scope - libcontainer container bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd. Oct 13 05:56:25.824049 kubelet[2704]: E1013 05:56:25.823993 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q48s" podUID="5abc9329-79bf-4376-a56b-5b3a9919ac87" Oct 13 05:56:25.974266 containerd[1563]: time="2025-10-13T05:56:25.973980752Z" level=info msg="StartContainer for \"bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd\" returns successfully" Oct 13 05:56:26.818715 systemd[1]: cri-containerd-bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd.scope: Deactivated successfully. Oct 13 05:56:26.819064 systemd[1]: cri-containerd-bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd.scope: Consumed 574ms CPU time, 175.2M memory peak, 3.6M read from disk, 171.3M written to disk. 
Oct 13 05:56:26.820502 containerd[1563]: time="2025-10-13T05:56:26.819399166Z" level=info msg="received exit event container_id:\"bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd\" id:\"bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd\" pid:3471 exited_at:{seconds:1760334986 nanos:819190422}" Oct 13 05:56:26.820502 containerd[1563]: time="2025-10-13T05:56:26.819609343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd\" id:\"bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd\" pid:3471 exited_at:{seconds:1760334986 nanos:819190422}" Oct 13 05:56:26.825667 containerd[1563]: time="2025-10-13T05:56:26.825630548Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:56:26.836749 kubelet[2704]: I1013 05:56:26.836719 2704 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 05:56:26.845478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdec9e5a4a7d4b828257cdd186f8e59a3c21b224ad34c730d75cce025cf687bd-rootfs.mount: Deactivated successfully. Oct 13 05:56:26.911117 systemd[1]: Created slice kubepods-burstable-pod663831a9_2ae8_4c19_a75a_c1031b55c603.slice - libcontainer container kubepods-burstable-pod663831a9_2ae8_4c19_a75a_c1031b55c603.slice. Oct 13 05:56:26.919906 systemd[1]: Created slice kubepods-burstable-pod106ebfaf_472b_4ba4_a2aa_42f20bb87a67.slice - libcontainer container kubepods-burstable-pod106ebfaf_472b_4ba4_a2aa_42f20bb87a67.slice. Oct 13 05:56:26.926321 systemd[1]: Created slice kubepods-besteffort-pod79e5bdb8_a9fd_46f0_813e_b805ce528b77.slice - libcontainer container kubepods-besteffort-pod79e5bdb8_a9fd_46f0_813e_b805ce528b77.slice. 
Oct 13 05:56:26.932643 systemd[1]: Created slice kubepods-besteffort-poda94f0e1d_5cb5_4ff7_b69a_7e3ca45e6525.slice - libcontainer container kubepods-besteffort-poda94f0e1d_5cb5_4ff7_b69a_7e3ca45e6525.slice. Oct 13 05:56:26.939897 systemd[1]: Created slice kubepods-besteffort-pod4bc401f8_87da_4fda_9d7a_e345ca9bee74.slice - libcontainer container kubepods-besteffort-pod4bc401f8_87da_4fda_9d7a_e345ca9bee74.slice. Oct 13 05:56:26.946006 systemd[1]: Created slice kubepods-besteffort-podf3af9472_5fc6_435b_aad6_baa386269104.slice - libcontainer container kubepods-besteffort-podf3af9472_5fc6_435b_aad6_baa386269104.slice. Oct 13 05:56:26.953899 systemd[1]: Created slice kubepods-besteffort-podaa70c35a_019c_4f74_8ce2_7de70ff78eb2.slice - libcontainer container kubepods-besteffort-podaa70c35a_019c_4f74_8ce2_7de70ff78eb2.slice. Oct 13 05:56:26.980266 containerd[1563]: time="2025-10-13T05:56:26.980228205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:56:27.011437 kubelet[2704]: I1013 05:56:27.011400 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsz5\" (UniqueName: \"kubernetes.io/projected/f3af9472-5fc6-435b-aad6-baa386269104-kube-api-access-qzsz5\") pod \"calico-apiserver-7cdfd954d7-pnn9d\" (UID: \"f3af9472-5fc6-435b-aad6-baa386269104\") " pod="calico-apiserver/calico-apiserver-7cdfd954d7-pnn9d" Oct 13 05:56:27.011437 kubelet[2704]: I1013 05:56:27.011440 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79e5bdb8-a9fd-46f0-813e-b805ce528b77-tigera-ca-bundle\") pod \"calico-kube-controllers-5d4d94c744-wwchw\" (UID: \"79e5bdb8-a9fd-46f0-813e-b805ce528b77\") " pod="calico-system/calico-kube-controllers-5d4d94c744-wwchw" Oct 13 05:56:27.011587 kubelet[2704]: I1013 05:56:27.011459 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4bc401f8-87da-4fda-9d7a-e345ca9bee74-whisker-backend-key-pair\") pod \"whisker-78654f6c88-dh6cw\" (UID: \"4bc401f8-87da-4fda-9d7a-e345ca9bee74\") " pod="calico-system/whisker-78654f6c88-dh6cw" Oct 13 05:56:27.011587 kubelet[2704]: I1013 05:56:27.011474 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-7qjqx\" (UID: \"a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525\") " pod="calico-system/goldmane-54d579b49d-7qjqx" Oct 13 05:56:27.011587 kubelet[2704]: I1013 05:56:27.011492 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525-goldmane-key-pair\") pod \"goldmane-54d579b49d-7qjqx\" (UID: \"a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525\") " pod="calico-system/goldmane-54d579b49d-7qjqx" Oct 13 05:56:27.011587 kubelet[2704]: I1013 05:56:27.011511 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/106ebfaf-472b-4ba4-a2aa-42f20bb87a67-config-volume\") pod \"coredns-668d6bf9bc-wsm6v\" (UID: \"106ebfaf-472b-4ba4-a2aa-42f20bb87a67\") " pod="kube-system/coredns-668d6bf9bc-wsm6v" Oct 13 05:56:27.011587 kubelet[2704]: I1013 05:56:27.011530 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa70c35a-019c-4f74-8ce2-7de70ff78eb2-calico-apiserver-certs\") pod \"calico-apiserver-7cdfd954d7-h24nw\" (UID: \"aa70c35a-019c-4f74-8ce2-7de70ff78eb2\") " pod="calico-apiserver/calico-apiserver-7cdfd954d7-h24nw" Oct 13 05:56:27.011714 kubelet[2704]: I1013 05:56:27.011556 2704 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnktk\" (UniqueName: \"kubernetes.io/projected/aa70c35a-019c-4f74-8ce2-7de70ff78eb2-kube-api-access-fnktk\") pod \"calico-apiserver-7cdfd954d7-h24nw\" (UID: \"aa70c35a-019c-4f74-8ce2-7de70ff78eb2\") " pod="calico-apiserver/calico-apiserver-7cdfd954d7-h24nw" Oct 13 05:56:27.011714 kubelet[2704]: I1013 05:56:27.011584 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wm4h\" (UniqueName: \"kubernetes.io/projected/106ebfaf-472b-4ba4-a2aa-42f20bb87a67-kube-api-access-5wm4h\") pod \"coredns-668d6bf9bc-wsm6v\" (UID: \"106ebfaf-472b-4ba4-a2aa-42f20bb87a67\") " pod="kube-system/coredns-668d6bf9bc-wsm6v" Oct 13 05:56:27.011714 kubelet[2704]: I1013 05:56:27.011614 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99pn\" (UniqueName: \"kubernetes.io/projected/4bc401f8-87da-4fda-9d7a-e345ca9bee74-kube-api-access-s99pn\") pod \"whisker-78654f6c88-dh6cw\" (UID: \"4bc401f8-87da-4fda-9d7a-e345ca9bee74\") " pod="calico-system/whisker-78654f6c88-dh6cw" Oct 13 05:56:27.011714 kubelet[2704]: I1013 05:56:27.011632 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q77zv\" (UniqueName: \"kubernetes.io/projected/663831a9-2ae8-4c19-a75a-c1031b55c603-kube-api-access-q77zv\") pod \"coredns-668d6bf9bc-95fxg\" (UID: \"663831a9-2ae8-4c19-a75a-c1031b55c603\") " pod="kube-system/coredns-668d6bf9bc-95fxg" Oct 13 05:56:27.011714 kubelet[2704]: I1013 05:56:27.011649 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnlgv\" (UniqueName: \"kubernetes.io/projected/79e5bdb8-a9fd-46f0-813e-b805ce528b77-kube-api-access-rnlgv\") pod \"calico-kube-controllers-5d4d94c744-wwchw\" (UID: \"79e5bdb8-a9fd-46f0-813e-b805ce528b77\") " 
pod="calico-system/calico-kube-controllers-5d4d94c744-wwchw" Oct 13 05:56:27.011840 kubelet[2704]: I1013 05:56:27.011666 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525-config\") pod \"goldmane-54d579b49d-7qjqx\" (UID: \"a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525\") " pod="calico-system/goldmane-54d579b49d-7qjqx" Oct 13 05:56:27.011840 kubelet[2704]: I1013 05:56:27.011690 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/663831a9-2ae8-4c19-a75a-c1031b55c603-config-volume\") pod \"coredns-668d6bf9bc-95fxg\" (UID: \"663831a9-2ae8-4c19-a75a-c1031b55c603\") " pod="kube-system/coredns-668d6bf9bc-95fxg" Oct 13 05:56:27.011840 kubelet[2704]: I1013 05:56:27.011705 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f3af9472-5fc6-435b-aad6-baa386269104-calico-apiserver-certs\") pod \"calico-apiserver-7cdfd954d7-pnn9d\" (UID: \"f3af9472-5fc6-435b-aad6-baa386269104\") " pod="calico-apiserver/calico-apiserver-7cdfd954d7-pnn9d" Oct 13 05:56:27.011840 kubelet[2704]: I1013 05:56:27.011719 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phtbr\" (UniqueName: \"kubernetes.io/projected/a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525-kube-api-access-phtbr\") pod \"goldmane-54d579b49d-7qjqx\" (UID: \"a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525\") " pod="calico-system/goldmane-54d579b49d-7qjqx" Oct 13 05:56:27.011840 kubelet[2704]: I1013 05:56:27.011734 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bc401f8-87da-4fda-9d7a-e345ca9bee74-whisker-ca-bundle\") pod 
\"whisker-78654f6c88-dh6cw\" (UID: \"4bc401f8-87da-4fda-9d7a-e345ca9bee74\") " pod="calico-system/whisker-78654f6c88-dh6cw" Oct 13 05:56:27.215458 kubelet[2704]: E1013 05:56:27.215327 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:27.216164 containerd[1563]: time="2025-10-13T05:56:27.216123059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95fxg,Uid:663831a9-2ae8-4c19-a75a-c1031b55c603,Namespace:kube-system,Attempt:0,}" Oct 13 05:56:27.223761 kubelet[2704]: E1013 05:56:27.223726 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:27.224380 containerd[1563]: time="2025-10-13T05:56:27.224352906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wsm6v,Uid:106ebfaf-472b-4ba4-a2aa-42f20bb87a67,Namespace:kube-system,Attempt:0,}" Oct 13 05:56:27.230021 containerd[1563]: time="2025-10-13T05:56:27.229979272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d4d94c744-wwchw,Uid:79e5bdb8-a9fd-46f0-813e-b805ce528b77,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:27.238384 containerd[1563]: time="2025-10-13T05:56:27.238311293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7qjqx,Uid:a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:27.242649 containerd[1563]: time="2025-10-13T05:56:27.242623529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78654f6c88-dh6cw,Uid:4bc401f8-87da-4fda-9d7a-e345ca9bee74,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:27.252111 containerd[1563]: time="2025-10-13T05:56:27.251986646Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7cdfd954d7-pnn9d,Uid:f3af9472-5fc6-435b-aad6-baa386269104,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:27.258591 containerd[1563]: time="2025-10-13T05:56:27.258516607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdfd954d7-h24nw,Uid:aa70c35a-019c-4f74-8ce2-7de70ff78eb2,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:27.334646 containerd[1563]: time="2025-10-13T05:56:27.334592125Z" level=error msg="Failed to destroy network for sandbox \"c4de15c17b91aa94e7b2cb69ea034b60bbc61b6ade479a06aeb66e267ad9ab0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.336279 containerd[1563]: time="2025-10-13T05:56:27.336253681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95fxg,Uid:663831a9-2ae8-4c19-a75a-c1031b55c603,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4de15c17b91aa94e7b2cb69ea034b60bbc61b6ade479a06aeb66e267ad9ab0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.344022 containerd[1563]: time="2025-10-13T05:56:27.343961404Z" level=error msg="Failed to destroy network for sandbox \"b25d546eb40ba452676a9e9dadb0d930edb90cfa9cc3dc19755c870a0c86b2d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.345674 containerd[1563]: time="2025-10-13T05:56:27.345327651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wsm6v,Uid:106ebfaf-472b-4ba4-a2aa-42f20bb87a67,Namespace:kube-system,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25d546eb40ba452676a9e9dadb0d930edb90cfa9cc3dc19755c870a0c86b2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.346127 kubelet[2704]: E1013 05:56:27.345733 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25d546eb40ba452676a9e9dadb0d930edb90cfa9cc3dc19755c870a0c86b2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.346127 kubelet[2704]: E1013 05:56:27.345770 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4de15c17b91aa94e7b2cb69ea034b60bbc61b6ade479a06aeb66e267ad9ab0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.346127 kubelet[2704]: E1013 05:56:27.345819 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25d546eb40ba452676a9e9dadb0d930edb90cfa9cc3dc19755c870a0c86b2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wsm6v" Oct 13 05:56:27.346127 kubelet[2704]: E1013 05:56:27.345829 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4de15c17b91aa94e7b2cb69ea034b60bbc61b6ade479a06aeb66e267ad9ab0c\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-95fxg" Oct 13 05:56:27.346352 kubelet[2704]: E1013 05:56:27.345842 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25d546eb40ba452676a9e9dadb0d930edb90cfa9cc3dc19755c870a0c86b2d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wsm6v" Oct 13 05:56:27.346352 kubelet[2704]: E1013 05:56:27.345852 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4de15c17b91aa94e7b2cb69ea034b60bbc61b6ade479a06aeb66e267ad9ab0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-95fxg" Oct 13 05:56:27.346352 kubelet[2704]: E1013 05:56:27.345889 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wsm6v_kube-system(106ebfaf-472b-4ba4-a2aa-42f20bb87a67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wsm6v_kube-system(106ebfaf-472b-4ba4-a2aa-42f20bb87a67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b25d546eb40ba452676a9e9dadb0d930edb90cfa9cc3dc19755c870a0c86b2d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wsm6v" podUID="106ebfaf-472b-4ba4-a2aa-42f20bb87a67" Oct 13 
05:56:27.346474 kubelet[2704]: E1013 05:56:27.345895 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-95fxg_kube-system(663831a9-2ae8-4c19-a75a-c1031b55c603)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-95fxg_kube-system(663831a9-2ae8-4c19-a75a-c1031b55c603)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4de15c17b91aa94e7b2cb69ea034b60bbc61b6ade479a06aeb66e267ad9ab0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-95fxg" podUID="663831a9-2ae8-4c19-a75a-c1031b55c603" Oct 13 05:56:27.349665 containerd[1563]: time="2025-10-13T05:56:27.349633025Z" level=error msg="Failed to destroy network for sandbox \"13548771a6fd0389a050ec7d4e9663ae48de9a2e21f944454bf963462c27049c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.351275 containerd[1563]: time="2025-10-13T05:56:27.351232292Z" level=error msg="Failed to destroy network for sandbox \"b6caa85302597dcb735f994b8172ba3f9c8b80adb945746e3f7053003a48be0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.353650 containerd[1563]: time="2025-10-13T05:56:27.353613796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d4d94c744-wwchw,Uid:79e5bdb8-a9fd-46f0-813e-b805ce528b77,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13548771a6fd0389a050ec7d4e9663ae48de9a2e21f944454bf963462c27049c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.354194 kubelet[2704]: E1013 05:56:27.353930 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13548771a6fd0389a050ec7d4e9663ae48de9a2e21f944454bf963462c27049c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.354194 kubelet[2704]: E1013 05:56:27.354076 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13548771a6fd0389a050ec7d4e9663ae48de9a2e21f944454bf963462c27049c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d4d94c744-wwchw" Oct 13 05:56:27.354194 kubelet[2704]: E1013 05:56:27.354095 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13548771a6fd0389a050ec7d4e9663ae48de9a2e21f944454bf963462c27049c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d4d94c744-wwchw" Oct 13 05:56:27.354511 kubelet[2704]: E1013 05:56:27.354136 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d4d94c744-wwchw_calico-system(79e5bdb8-a9fd-46f0-813e-b805ce528b77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-5d4d94c744-wwchw_calico-system(79e5bdb8-a9fd-46f0-813e-b805ce528b77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13548771a6fd0389a050ec7d4e9663ae48de9a2e21f944454bf963462c27049c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d4d94c744-wwchw" podUID="79e5bdb8-a9fd-46f0-813e-b805ce528b77" Oct 13 05:56:27.355043 containerd[1563]: time="2025-10-13T05:56:27.354978420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdfd954d7-pnn9d,Uid:f3af9472-5fc6-435b-aad6-baa386269104,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6caa85302597dcb735f994b8172ba3f9c8b80adb945746e3f7053003a48be0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.355591 kubelet[2704]: E1013 05:56:27.355407 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6caa85302597dcb735f994b8172ba3f9c8b80adb945746e3f7053003a48be0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.356024 containerd[1563]: time="2025-10-13T05:56:27.355918484Z" level=error msg="Failed to destroy network for sandbox \"73598a7c267b3ad71196684eef7fab66ff06275fab913faca3ce197f0683095f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.356221 kubelet[2704]: 
E1013 05:56:27.356196 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6caa85302597dcb735f994b8172ba3f9c8b80adb945746e3f7053003a48be0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdfd954d7-pnn9d" Oct 13 05:56:27.356287 kubelet[2704]: E1013 05:56:27.356221 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6caa85302597dcb735f994b8172ba3f9c8b80adb945746e3f7053003a48be0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdfd954d7-pnn9d" Oct 13 05:56:27.356287 kubelet[2704]: E1013 05:56:27.356262 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cdfd954d7-pnn9d_calico-apiserver(f3af9472-5fc6-435b-aad6-baa386269104)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cdfd954d7-pnn9d_calico-apiserver(f3af9472-5fc6-435b-aad6-baa386269104)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6caa85302597dcb735f994b8172ba3f9c8b80adb945746e3f7053003a48be0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cdfd954d7-pnn9d" podUID="f3af9472-5fc6-435b-aad6-baa386269104" Oct 13 05:56:27.357362 containerd[1563]: time="2025-10-13T05:56:27.357252110Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-78654f6c88-dh6cw,Uid:4bc401f8-87da-4fda-9d7a-e345ca9bee74,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73598a7c267b3ad71196684eef7fab66ff06275fab913faca3ce197f0683095f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.357891 kubelet[2704]: E1013 05:56:27.357820 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73598a7c267b3ad71196684eef7fab66ff06275fab913faca3ce197f0683095f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.358095 kubelet[2704]: E1013 05:56:27.358039 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73598a7c267b3ad71196684eef7fab66ff06275fab913faca3ce197f0683095f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78654f6c88-dh6cw" Oct 13 05:56:27.358095 kubelet[2704]: E1013 05:56:27.358066 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73598a7c267b3ad71196684eef7fab66ff06275fab913faca3ce197f0683095f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78654f6c88-dh6cw" Oct 13 05:56:27.358281 kubelet[2704]: E1013 05:56:27.358218 2704 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78654f6c88-dh6cw_calico-system(4bc401f8-87da-4fda-9d7a-e345ca9bee74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78654f6c88-dh6cw_calico-system(4bc401f8-87da-4fda-9d7a-e345ca9bee74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73598a7c267b3ad71196684eef7fab66ff06275fab913faca3ce197f0683095f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78654f6c88-dh6cw" podUID="4bc401f8-87da-4fda-9d7a-e345ca9bee74" Oct 13 05:56:27.367569 containerd[1563]: time="2025-10-13T05:56:27.367502661Z" level=error msg="Failed to destroy network for sandbox \"31c0731361c9327cd481de6c0fc93802973e76b1fddc2c227d4dd2e1efe71e62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.369145 containerd[1563]: time="2025-10-13T05:56:27.369092371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7qjqx,Uid:a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c0731361c9327cd481de6c0fc93802973e76b1fddc2c227d4dd2e1efe71e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.369321 kubelet[2704]: E1013 05:56:27.369289 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c0731361c9327cd481de6c0fc93802973e76b1fddc2c227d4dd2e1efe71e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.369387 kubelet[2704]: E1013 05:56:27.369367 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c0731361c9327cd481de6c0fc93802973e76b1fddc2c227d4dd2e1efe71e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7qjqx" Oct 13 05:56:27.369425 kubelet[2704]: E1013 05:56:27.369388 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c0731361c9327cd481de6c0fc93802973e76b1fddc2c227d4dd2e1efe71e62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-7qjqx" Oct 13 05:56:27.369466 kubelet[2704]: E1013 05:56:27.369421 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-7qjqx_calico-system(a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-7qjqx_calico-system(a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31c0731361c9327cd481de6c0fc93802973e76b1fddc2c227d4dd2e1efe71e62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-7qjqx" podUID="a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525" Oct 13 05:56:27.373205 containerd[1563]: time="2025-10-13T05:56:27.373167249Z" level=error 
msg="Failed to destroy network for sandbox \"e82f225a00996bc3cd24127d500bba685fe64b2e498a3028570ed382534f3646\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.374513 containerd[1563]: time="2025-10-13T05:56:27.374471921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdfd954d7-h24nw,Uid:aa70c35a-019c-4f74-8ce2-7de70ff78eb2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e82f225a00996bc3cd24127d500bba685fe64b2e498a3028570ed382534f3646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.374695 kubelet[2704]: E1013 05:56:27.374659 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e82f225a00996bc3cd24127d500bba685fe64b2e498a3028570ed382534f3646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.374735 kubelet[2704]: E1013 05:56:27.374714 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e82f225a00996bc3cd24127d500bba685fe64b2e498a3028570ed382534f3646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdfd954d7-h24nw" Oct 13 05:56:27.374759 kubelet[2704]: E1013 05:56:27.374734 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"e82f225a00996bc3cd24127d500bba685fe64b2e498a3028570ed382534f3646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cdfd954d7-h24nw" Oct 13 05:56:27.374804 kubelet[2704]: E1013 05:56:27.374780 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cdfd954d7-h24nw_calico-apiserver(aa70c35a-019c-4f74-8ce2-7de70ff78eb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cdfd954d7-h24nw_calico-apiserver(aa70c35a-019c-4f74-8ce2-7de70ff78eb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e82f225a00996bc3cd24127d500bba685fe64b2e498a3028570ed382534f3646\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cdfd954d7-h24nw" podUID="aa70c35a-019c-4f74-8ce2-7de70ff78eb2" Oct 13 05:56:27.828902 systemd[1]: Created slice kubepods-besteffort-pod5abc9329_79bf_4376_a56b_5b3a9919ac87.slice - libcontainer container kubepods-besteffort-pod5abc9329_79bf_4376_a56b_5b3a9919ac87.slice. 
Oct 13 05:56:27.831410 containerd[1563]: time="2025-10-13T05:56:27.831374946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6q48s,Uid:5abc9329-79bf-4376-a56b-5b3a9919ac87,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:27.882550 containerd[1563]: time="2025-10-13T05:56:27.882488767Z" level=error msg="Failed to destroy network for sandbox \"431eba0a767637079b08e522464eb6b4cac277f99cdf163d5fd08f89f18ad36a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.883865 containerd[1563]: time="2025-10-13T05:56:27.883834487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6q48s,Uid:5abc9329-79bf-4376-a56b-5b3a9919ac87,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"431eba0a767637079b08e522464eb6b4cac277f99cdf163d5fd08f89f18ad36a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.884112 kubelet[2704]: E1013 05:56:27.884042 2704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431eba0a767637079b08e522464eb6b4cac277f99cdf163d5fd08f89f18ad36a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:27.884112 kubelet[2704]: E1013 05:56:27.884102 2704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431eba0a767637079b08e522464eb6b4cac277f99cdf163d5fd08f89f18ad36a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6q48s" Oct 13 05:56:27.884528 kubelet[2704]: E1013 05:56:27.884122 2704 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431eba0a767637079b08e522464eb6b4cac277f99cdf163d5fd08f89f18ad36a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6q48s" Oct 13 05:56:27.884528 kubelet[2704]: E1013 05:56:27.884177 2704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6q48s_calico-system(5abc9329-79bf-4376-a56b-5b3a9919ac87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6q48s_calico-system(5abc9329-79bf-4376-a56b-5b3a9919ac87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"431eba0a767637079b08e522464eb6b4cac277f99cdf163d5fd08f89f18ad36a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6q48s" podUID="5abc9329-79bf-4376-a56b-5b3a9919ac87" Oct 13 05:56:27.884726 systemd[1]: run-netns-cni\x2dee088728\x2dc9e3\x2d3134\x2d69e4\x2df24b7501d46c.mount: Deactivated successfully. Oct 13 05:56:34.967476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2088991790.mount: Deactivated successfully. 
Oct 13 05:56:35.606088 containerd[1563]: time="2025-10-13T05:56:35.606043153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:35.607023 containerd[1563]: time="2025-10-13T05:56:35.606974385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:56:35.608390 containerd[1563]: time="2025-10-13T05:56:35.608351958Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:35.610302 containerd[1563]: time="2025-10-13T05:56:35.610270339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:35.610777 containerd[1563]: time="2025-10-13T05:56:35.610746034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.63048136s" Oct 13 05:56:35.610777 containerd[1563]: time="2025-10-13T05:56:35.610773235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:56:35.620562 containerd[1563]: time="2025-10-13T05:56:35.620530288Z" level=info msg="CreateContainer within sandbox \"c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:56:35.640701 containerd[1563]: time="2025-10-13T05:56:35.640662719Z" level=info msg="Container 
f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:35.657294 containerd[1563]: time="2025-10-13T05:56:35.657257581Z" level=info msg="CreateContainer within sandbox \"c32b8eec78edd841a0c72ae96a98597e0387f3a26178c5b5b52f53ceda8cb93e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b\"" Oct 13 05:56:35.657660 containerd[1563]: time="2025-10-13T05:56:35.657621266Z" level=info msg="StartContainer for \"f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b\"" Oct 13 05:56:35.663109 containerd[1563]: time="2025-10-13T05:56:35.663066684Z" level=info msg="connecting to shim f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b" address="unix:///run/containerd/s/65dd7b6fcc5d0c11aef9247e7b8321b54c1f57df7780daa56e283fb8c9129257" protocol=ttrpc version=3 Oct 13 05:56:35.736472 systemd[1]: Started cri-containerd-f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b.scope - libcontainer container f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b. Oct 13 05:56:35.925295 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:56:35.925854 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:56:35.990835 containerd[1563]: time="2025-10-13T05:56:35.990793419Z" level=info msg="StartContainer for \"f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b\" returns successfully" Oct 13 05:56:36.164939 kubelet[2704]: I1013 05:56:36.164891 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bc401f8-87da-4fda-9d7a-e345ca9bee74-whisker-ca-bundle\") pod \"4bc401f8-87da-4fda-9d7a-e345ca9bee74\" (UID: \"4bc401f8-87da-4fda-9d7a-e345ca9bee74\") " Oct 13 05:56:36.164939 kubelet[2704]: I1013 05:56:36.164943 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4bc401f8-87da-4fda-9d7a-e345ca9bee74-whisker-backend-key-pair\") pod \"4bc401f8-87da-4fda-9d7a-e345ca9bee74\" (UID: \"4bc401f8-87da-4fda-9d7a-e345ca9bee74\") " Oct 13 05:56:36.165415 kubelet[2704]: I1013 05:56:36.164970 2704 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s99pn\" (UniqueName: \"kubernetes.io/projected/4bc401f8-87da-4fda-9d7a-e345ca9bee74-kube-api-access-s99pn\") pod \"4bc401f8-87da-4fda-9d7a-e345ca9bee74\" (UID: \"4bc401f8-87da-4fda-9d7a-e345ca9bee74\") " Oct 13 05:56:36.166831 kubelet[2704]: I1013 05:56:36.166687 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc401f8-87da-4fda-9d7a-e345ca9bee74-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4bc401f8-87da-4fda-9d7a-e345ca9bee74" (UID: "4bc401f8-87da-4fda-9d7a-e345ca9bee74"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:56:36.171091 systemd[1]: var-lib-kubelet-pods-4bc401f8\x2d87da\x2d4fda\x2d9d7a\x2de345ca9bee74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds99pn.mount: Deactivated successfully. 
Oct 13 05:56:36.171206 systemd[1]: var-lib-kubelet-pods-4bc401f8\x2d87da\x2d4fda\x2d9d7a\x2de345ca9bee74-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:56:36.173192 kubelet[2704]: I1013 05:56:36.173157 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc401f8-87da-4fda-9d7a-e345ca9bee74-kube-api-access-s99pn" (OuterVolumeSpecName: "kube-api-access-s99pn") pod "4bc401f8-87da-4fda-9d7a-e345ca9bee74" (UID: "4bc401f8-87da-4fda-9d7a-e345ca9bee74"). InnerVolumeSpecName "kube-api-access-s99pn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:56:36.173233 kubelet[2704]: I1013 05:56:36.173180 2704 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc401f8-87da-4fda-9d7a-e345ca9bee74-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4bc401f8-87da-4fda-9d7a-e345ca9bee74" (UID: "4bc401f8-87da-4fda-9d7a-e345ca9bee74"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:56:36.266086 kubelet[2704]: I1013 05:56:36.265986 2704 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4bc401f8-87da-4fda-9d7a-e345ca9bee74-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 05:56:36.266086 kubelet[2704]: I1013 05:56:36.266024 2704 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s99pn\" (UniqueName: \"kubernetes.io/projected/4bc401f8-87da-4fda-9d7a-e345ca9bee74-kube-api-access-s99pn\") on node \"localhost\" DevicePath \"\"" Oct 13 05:56:36.266086 kubelet[2704]: I1013 05:56:36.266040 2704 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bc401f8-87da-4fda-9d7a-e345ca9bee74-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 05:56:36.616164 systemd[1]: Removed slice kubepods-besteffort-pod4bc401f8_87da_4fda_9d7a_e345ca9bee74.slice - libcontainer container kubepods-besteffort-pod4bc401f8_87da_4fda_9d7a_e345ca9bee74.slice. Oct 13 05:56:36.625828 kubelet[2704]: I1013 05:56:36.625729 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hrjll" podStartSLOduration=2.00998578 podStartE2EDuration="21.625715724s" podCreationTimestamp="2025-10-13 05:56:15 +0000 UTC" firstStartedPulling="2025-10-13 05:56:15.995596742 +0000 UTC m=+16.257100172" lastFinishedPulling="2025-10-13 05:56:35.611326687 +0000 UTC m=+35.872830116" observedRunningTime="2025-10-13 05:56:36.624687069 +0000 UTC m=+36.886190498" watchObservedRunningTime="2025-10-13 05:56:36.625715724 +0000 UTC m=+36.887219153" Oct 13 05:56:36.663864 systemd[1]: Created slice kubepods-besteffort-podf3cc0004_b4c7_442a_9dbd_e59619745141.slice - libcontainer container kubepods-besteffort-podf3cc0004_b4c7_442a_9dbd_e59619745141.slice. 
Oct 13 05:56:36.768189 kubelet[2704]: I1013 05:56:36.768140 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f3cc0004-b4c7-442a-9dbd-e59619745141-whisker-backend-key-pair\") pod \"whisker-65bdc5dc55-n46kc\" (UID: \"f3cc0004-b4c7-442a-9dbd-e59619745141\") " pod="calico-system/whisker-65bdc5dc55-n46kc" Oct 13 05:56:36.768189 kubelet[2704]: I1013 05:56:36.768192 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3cc0004-b4c7-442a-9dbd-e59619745141-whisker-ca-bundle\") pod \"whisker-65bdc5dc55-n46kc\" (UID: \"f3cc0004-b4c7-442a-9dbd-e59619745141\") " pod="calico-system/whisker-65bdc5dc55-n46kc" Oct 13 05:56:36.768189 kubelet[2704]: I1013 05:56:36.768208 2704 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5wc\" (UniqueName: \"kubernetes.io/projected/f3cc0004-b4c7-442a-9dbd-e59619745141-kube-api-access-5j5wc\") pod \"whisker-65bdc5dc55-n46kc\" (UID: \"f3cc0004-b4c7-442a-9dbd-e59619745141\") " pod="calico-system/whisker-65bdc5dc55-n46kc" Oct 13 05:56:36.968163 containerd[1563]: time="2025-10-13T05:56:36.968043850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bdc5dc55-n46kc,Uid:f3cc0004-b4c7-442a-9dbd-e59619745141,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:37.521401 systemd-networkd[1475]: calice3f60532e8: Link UP Oct 13 05:56:37.521932 systemd-networkd[1475]: calice3f60532e8: Gained carrier Oct 13 05:56:37.535065 containerd[1563]: 2025-10-13 05:56:36.990 [INFO][3848] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:56:37.535065 containerd[1563]: 2025-10-13 05:56:37.005 [INFO][3848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--65bdc5dc55--n46kc-eth0 whisker-65bdc5dc55- calico-system f3cc0004-b4c7-442a-9dbd-e59619745141 878 0 2025-10-13 05:56:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65bdc5dc55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-65bdc5dc55-n46kc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calice3f60532e8 [] [] }} ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Namespace="calico-system" Pod="whisker-65bdc5dc55-n46kc" WorkloadEndpoint="localhost-k8s-whisker--65bdc5dc55--n46kc-" Oct 13 05:56:37.535065 containerd[1563]: 2025-10-13 05:56:37.005 [INFO][3848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Namespace="calico-system" Pod="whisker-65bdc5dc55-n46kc" WorkloadEndpoint="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" Oct 13 05:56:37.535065 containerd[1563]: 2025-10-13 05:56:37.063 [INFO][3862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" HandleID="k8s-pod-network.994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Workload="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.064 [INFO][3862] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" HandleID="k8s-pod-network.994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Workload="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aa360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-65bdc5dc55-n46kc", "timestamp":"2025-10-13 05:56:37.063569966 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.064 [INFO][3862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.064 [INFO][3862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.064 [INFO][3862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.071 [INFO][3862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" host="localhost" Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.075 [INFO][3862] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.081 [INFO][3862] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.082 [INFO][3862] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.084 [INFO][3862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:37.535327 containerd[1563]: 2025-10-13 05:56:37.084 [INFO][3862] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" host="localhost" Oct 13 05:56:37.535575 containerd[1563]: 2025-10-13 05:56:37.085 [INFO][3862] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091 Oct 13 05:56:37.535575 containerd[1563]: 2025-10-13 05:56:37.460 [INFO][3862] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" host="localhost" Oct 13 05:56:37.535575 containerd[1563]: 2025-10-13 05:56:37.510 [INFO][3862] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" host="localhost" Oct 13 05:56:37.535575 containerd[1563]: 2025-10-13 05:56:37.510 [INFO][3862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" host="localhost" Oct 13 05:56:37.535575 containerd[1563]: 2025-10-13 05:56:37.510 [INFO][3862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:56:37.535575 containerd[1563]: 2025-10-13 05:56:37.510 [INFO][3862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" HandleID="k8s-pod-network.994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Workload="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" Oct 13 05:56:37.535694 containerd[1563]: 2025-10-13 05:56:37.514 [INFO][3848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Namespace="calico-system" Pod="whisker-65bdc5dc55-n46kc" WorkloadEndpoint="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65bdc5dc55--n46kc-eth0", GenerateName:"whisker-65bdc5dc55-", Namespace:"calico-system", SelfLink:"", UID:"f3cc0004-b4c7-442a-9dbd-e59619745141", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65bdc5dc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-65bdc5dc55-n46kc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calice3f60532e8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:37.535694 containerd[1563]: 2025-10-13 05:56:37.514 [INFO][3848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Namespace="calico-system" Pod="whisker-65bdc5dc55-n46kc" WorkloadEndpoint="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" Oct 13 05:56:37.535767 containerd[1563]: 2025-10-13 05:56:37.514 [INFO][3848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice3f60532e8 ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Namespace="calico-system" Pod="whisker-65bdc5dc55-n46kc" WorkloadEndpoint="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" Oct 13 05:56:37.535767 containerd[1563]: 2025-10-13 05:56:37.521 [INFO][3848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Namespace="calico-system" Pod="whisker-65bdc5dc55-n46kc" WorkloadEndpoint="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" Oct 13 05:56:37.535805 containerd[1563]: 2025-10-13 05:56:37.521 [INFO][3848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Namespace="calico-system" Pod="whisker-65bdc5dc55-n46kc" WorkloadEndpoint="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65bdc5dc55--n46kc-eth0", GenerateName:"whisker-65bdc5dc55-", Namespace:"calico-system", SelfLink:"", UID:"f3cc0004-b4c7-442a-9dbd-e59619745141", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 36, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65bdc5dc55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091", Pod:"whisker-65bdc5dc55-n46kc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calice3f60532e8", MAC:"62:30:9e:16:11:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:37.535852 containerd[1563]: 2025-10-13 05:56:37.531 [INFO][3848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" Namespace="calico-system" Pod="whisker-65bdc5dc55-n46kc" WorkloadEndpoint="localhost-k8s-whisker--65bdc5dc55--n46kc-eth0" Oct 13 05:56:37.618652 kubelet[2704]: I1013 05:56:37.618615 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:37.619044 kubelet[2704]: E1013 05:56:37.618988 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:37.826114 kubelet[2704]: I1013 05:56:37.825996 2704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc401f8-87da-4fda-9d7a-e345ca9bee74" path="/var/lib/kubelet/pods/4bc401f8-87da-4fda-9d7a-e345ca9bee74/volumes" Oct 13 
05:56:38.041455 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:55440.service - OpenSSH per-connection server daemon (10.0.0.1:55440). Oct 13 05:56:38.106891 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 55440 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:56:38.107241 containerd[1563]: time="2025-10-13T05:56:38.107173901Z" level=info msg="connecting to shim 994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091" address="unix:///run/containerd/s/1edc2613c2f24f7e0118cddf534dfa043bd606f25a15fea2119e06b993fce0eb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:38.107424 sshd-session[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:56:38.114527 systemd-logind[1545]: New session 8 of user core. Oct 13 05:56:38.123455 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 05:56:38.143471 systemd[1]: Started cri-containerd-994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091.scope - libcontainer container 994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091. 
Oct 13 05:56:38.147061 containerd[1563]: time="2025-10-13T05:56:38.147027719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b\" id:\"97a9e839151da91a55e6a34f6ba94f8164eae7fff869917eeb07b882b122b297\" pid:4004 exit_status:1 exited_at:{seconds:1760334998 nanos:146588943}" Oct 13 05:56:38.157832 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:38.192456 containerd[1563]: time="2025-10-13T05:56:38.192417627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bdc5dc55-n46kc,Uid:f3cc0004-b4c7-442a-9dbd-e59619745141,Namespace:calico-system,Attempt:0,} returns sandbox id \"994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091\"" Oct 13 05:56:38.194133 containerd[1563]: time="2025-10-13T05:56:38.194105953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:56:38.256620 sshd[4042]: Connection closed by 10.0.0.1 port 55440 Oct 13 05:56:38.256913 sshd-session[3984]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:38.261260 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:55440.service: Deactivated successfully. Oct 13 05:56:38.263248 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:56:38.263992 systemd-logind[1545]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:56:38.265135 systemd-logind[1545]: Removed session 8. 
Oct 13 05:56:38.615759 kubelet[2704]: E1013 05:56:38.615717 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:38.707493 containerd[1563]: time="2025-10-13T05:56:38.707452707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b\" id:\"be0494686f09188cc95a498122d04b28af12bdd98f3eaa30867f726a1cb61307\" pid:4136 exit_status:1 exited_at:{seconds:1760334998 nanos:707077190}" Oct 13 05:56:38.824630 containerd[1563]: time="2025-10-13T05:56:38.824383450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7qjqx,Uid:a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:38.935288 systemd-networkd[1475]: cali48c40ef61d0: Link UP Oct 13 05:56:38.936996 systemd-networkd[1475]: cali48c40ef61d0: Gained carrier Oct 13 05:56:38.955432 containerd[1563]: 2025-10-13 05:56:38.874 [INFO][4161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--7qjqx-eth0 goldmane-54d579b49d- calico-system a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525 808 0 2025-10-13 05:56:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-7qjqx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali48c40ef61d0 [] [] }} ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Namespace="calico-system" Pod="goldmane-54d579b49d-7qjqx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7qjqx-" Oct 13 05:56:38.955432 containerd[1563]: 2025-10-13 05:56:38.874 [INFO][4161] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Namespace="calico-system" Pod="goldmane-54d579b49d-7qjqx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" Oct 13 05:56:38.955432 containerd[1563]: 2025-10-13 05:56:38.901 [INFO][4176] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" HandleID="k8s-pod-network.9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Workload="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.902 [INFO][4176] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" HandleID="k8s-pod-network.9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Workload="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003291e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-7qjqx", "timestamp":"2025-10-13 05:56:38.901842486 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.902 [INFO][4176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.902 [INFO][4176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.902 [INFO][4176] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.909 [INFO][4176] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" host="localhost" Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.913 [INFO][4176] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.917 [INFO][4176] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.918 [INFO][4176] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.920 [INFO][4176] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:38.955620 containerd[1563]: 2025-10-13 05:56:38.920 [INFO][4176] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" host="localhost" Oct 13 05:56:38.955840 containerd[1563]: 2025-10-13 05:56:38.921 [INFO][4176] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8 Oct 13 05:56:38.955840 containerd[1563]: 2025-10-13 05:56:38.925 [INFO][4176] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" host="localhost" Oct 13 05:56:38.955840 containerd[1563]: 2025-10-13 05:56:38.929 [INFO][4176] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" host="localhost" Oct 13 05:56:38.955840 containerd[1563]: 2025-10-13 05:56:38.929 [INFO][4176] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" host="localhost" Oct 13 05:56:38.955840 containerd[1563]: 2025-10-13 05:56:38.929 [INFO][4176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:38.955840 containerd[1563]: 2025-10-13 05:56:38.929 [INFO][4176] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" HandleID="k8s-pod-network.9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Workload="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" Oct 13 05:56:38.955966 containerd[1563]: 2025-10-13 05:56:38.933 [INFO][4161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Namespace="calico-system" Pod="goldmane-54d579b49d-7qjqx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--7qjqx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-7qjqx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali48c40ef61d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:38.955966 containerd[1563]: 2025-10-13 05:56:38.933 [INFO][4161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Namespace="calico-system" Pod="goldmane-54d579b49d-7qjqx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" Oct 13 05:56:38.956037 containerd[1563]: 2025-10-13 05:56:38.933 [INFO][4161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48c40ef61d0 ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Namespace="calico-system" Pod="goldmane-54d579b49d-7qjqx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" Oct 13 05:56:38.956037 containerd[1563]: 2025-10-13 05:56:38.937 [INFO][4161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Namespace="calico-system" Pod="goldmane-54d579b49d-7qjqx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" Oct 13 05:56:38.956087 containerd[1563]: 2025-10-13 05:56:38.937 [INFO][4161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Namespace="calico-system" Pod="goldmane-54d579b49d-7qjqx" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--7qjqx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8", Pod:"goldmane-54d579b49d-7qjqx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali48c40ef61d0", MAC:"56:67:93:34:17:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:38.956136 containerd[1563]: 2025-10-13 05:56:38.950 [INFO][4161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" Namespace="calico-system" Pod="goldmane-54d579b49d-7qjqx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--7qjqx-eth0" Oct 13 05:56:38.961210 systemd-networkd[1475]: vxlan.calico: Link UP Oct 13 05:56:38.961221 systemd-networkd[1475]: 
vxlan.calico: Gained carrier Oct 13 05:56:39.001714 containerd[1563]: time="2025-10-13T05:56:39.001672857Z" level=info msg="connecting to shim 9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8" address="unix:///run/containerd/s/a8b7c3ebe58af2a10ec9e25535e6c1821dd3a5655f7a224cfb9f07835f99a382" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:39.033477 systemd[1]: Started cri-containerd-9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8.scope - libcontainer container 9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8. Oct 13 05:56:39.048832 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:39.084958 containerd[1563]: time="2025-10-13T05:56:39.084915890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-7qjqx,Uid:a94f0e1d-5cb5-4ff7-b69a-7e3ca45e6525,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8\"" Oct 13 05:56:39.232047 systemd-networkd[1475]: calice3f60532e8: Gained IPv6LL Oct 13 05:56:39.824205 containerd[1563]: time="2025-10-13T05:56:39.824150947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6q48s,Uid:5abc9329-79bf-4376-a56b-5b3a9919ac87,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:39.916604 systemd-networkd[1475]: caliaa53a3b9462: Link UP Oct 13 05:56:39.917415 systemd-networkd[1475]: caliaa53a3b9462: Gained carrier Oct 13 05:56:39.931381 containerd[1563]: 2025-10-13 05:56:39.856 [INFO][4312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6q48s-eth0 csi-node-driver- calico-system 5abc9329-79bf-4376-a56b-5b3a9919ac87 690 0 2025-10-13 05:56:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6q48s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaa53a3b9462 [] [] }} ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Namespace="calico-system" Pod="csi-node-driver-6q48s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q48s-" Oct 13 05:56:39.931381 containerd[1563]: 2025-10-13 05:56:39.856 [INFO][4312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Namespace="calico-system" Pod="csi-node-driver-6q48s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q48s-eth0" Oct 13 05:56:39.931381 containerd[1563]: 2025-10-13 05:56:39.883 [INFO][4328] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" HandleID="k8s-pod-network.f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Workload="localhost-k8s-csi--node--driver--6q48s-eth0" Oct 13 05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.883 [INFO][4328] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" HandleID="k8s-pod-network.f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Workload="localhost-k8s-csi--node--driver--6q48s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6q48s", "timestamp":"2025-10-13 05:56:39.883253375 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 
05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.883 [INFO][4328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.883 [INFO][4328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.883 [INFO][4328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.890 [INFO][4328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" host="localhost" Oct 13 05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.894 [INFO][4328] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.898 [INFO][4328] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.899 [INFO][4328] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.901 [INFO][4328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:39.931570 containerd[1563]: 2025-10-13 05:56:39.901 [INFO][4328] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" host="localhost" Oct 13 05:56:39.931785 containerd[1563]: 2025-10-13 05:56:39.902 [INFO][4328] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a Oct 13 05:56:39.931785 containerd[1563]: 2025-10-13 05:56:39.905 [INFO][4328] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" host="localhost" Oct 13 05:56:39.931785 containerd[1563]: 2025-10-13 05:56:39.910 [INFO][4328] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" host="localhost" Oct 13 05:56:39.931785 containerd[1563]: 2025-10-13 05:56:39.910 [INFO][4328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" host="localhost" Oct 13 05:56:39.931785 containerd[1563]: 2025-10-13 05:56:39.910 [INFO][4328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:39.931785 containerd[1563]: 2025-10-13 05:56:39.910 [INFO][4328] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" HandleID="k8s-pod-network.f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Workload="localhost-k8s-csi--node--driver--6q48s-eth0" Oct 13 05:56:39.931991 containerd[1563]: 2025-10-13 05:56:39.914 [INFO][4312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Namespace="calico-system" Pod="csi-node-driver-6q48s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q48s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6q48s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5abc9329-79bf-4376-a56b-5b3a9919ac87", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6q48s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa53a3b9462", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:39.932047 containerd[1563]: 2025-10-13 05:56:39.914 [INFO][4312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Namespace="calico-system" Pod="csi-node-driver-6q48s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q48s-eth0" Oct 13 05:56:39.932047 containerd[1563]: 2025-10-13 05:56:39.914 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa53a3b9462 ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Namespace="calico-system" Pod="csi-node-driver-6q48s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q48s-eth0" Oct 13 05:56:39.932047 containerd[1563]: 2025-10-13 05:56:39.916 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Namespace="calico-system" Pod="csi-node-driver-6q48s" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6q48s-eth0" Oct 13 05:56:39.932111 containerd[1563]: 2025-10-13 05:56:39.917 [INFO][4312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Namespace="calico-system" Pod="csi-node-driver-6q48s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q48s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6q48s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5abc9329-79bf-4376-a56b-5b3a9919ac87", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a", Pod:"csi-node-driver-6q48s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa53a3b9462", MAC:"7e:e8:c0:20:56:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 
05:56:39.932160 containerd[1563]: 2025-10-13 05:56:39.927 [INFO][4312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" Namespace="calico-system" Pod="csi-node-driver-6q48s" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q48s-eth0" Oct 13 05:56:39.972758 containerd[1563]: time="2025-10-13T05:56:39.972686489Z" level=info msg="connecting to shim f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a" address="unix:///run/containerd/s/4911261c4819b9dea65d5b1b5296489c9453ddf471d61e0547ab3d6727cdb604" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:40.007645 systemd[1]: Started cri-containerd-f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a.scope - libcontainer container f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a. Oct 13 05:56:40.021209 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:40.035323 containerd[1563]: time="2025-10-13T05:56:40.035226881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6q48s,Uid:5abc9329-79bf-4376-a56b-5b3a9919ac87,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a\"" Oct 13 05:56:40.068541 containerd[1563]: time="2025-10-13T05:56:40.068512661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:40.069318 containerd[1563]: time="2025-10-13T05:56:40.069272831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:56:40.070510 containerd[1563]: time="2025-10-13T05:56:40.070480381Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Oct 13 05:56:40.072479 containerd[1563]: time="2025-10-13T05:56:40.072436058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:40.072960 containerd[1563]: time="2025-10-13T05:56:40.072931169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.878797625s" Oct 13 05:56:40.073007 containerd[1563]: time="2025-10-13T05:56:40.072964582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:56:40.074681 containerd[1563]: time="2025-10-13T05:56:40.074612760Z" level=info msg="CreateContainer within sandbox \"994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:56:40.082250 containerd[1563]: time="2025-10-13T05:56:40.082205245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:56:40.090359 containerd[1563]: time="2025-10-13T05:56:40.088115106Z" level=info msg="Container b9c4d129dfb6429dfa91650a766972858a00d8235feab8b39f292729ed5c00a3: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:40.124703 containerd[1563]: time="2025-10-13T05:56:40.124665324Z" level=info msg="CreateContainer within sandbox \"994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b9c4d129dfb6429dfa91650a766972858a00d8235feab8b39f292729ed5c00a3\"" Oct 13 05:56:40.125369 containerd[1563]: 
time="2025-10-13T05:56:40.125143463Z" level=info msg="StartContainer for \"b9c4d129dfb6429dfa91650a766972858a00d8235feab8b39f292729ed5c00a3\"" Oct 13 05:56:40.126236 containerd[1563]: time="2025-10-13T05:56:40.126210539Z" level=info msg="connecting to shim b9c4d129dfb6429dfa91650a766972858a00d8235feab8b39f292729ed5c00a3" address="unix:///run/containerd/s/1edc2613c2f24f7e0118cddf534dfa043bd606f25a15fea2119e06b993fce0eb" protocol=ttrpc version=3 Oct 13 05:56:40.155450 systemd[1]: Started cri-containerd-b9c4d129dfb6429dfa91650a766972858a00d8235feab8b39f292729ed5c00a3.scope - libcontainer container b9c4d129dfb6429dfa91650a766972858a00d8235feab8b39f292729ed5c00a3. Oct 13 05:56:40.225760 containerd[1563]: time="2025-10-13T05:56:40.225709176Z" level=info msg="StartContainer for \"b9c4d129dfb6429dfa91650a766972858a00d8235feab8b39f292729ed5c00a3\" returns successfully" Oct 13 05:56:40.318514 systemd-networkd[1475]: vxlan.calico: Gained IPv6LL Oct 13 05:56:40.823217 kubelet[2704]: E1013 05:56:40.823184 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:40.823676 kubelet[2704]: E1013 05:56:40.823396 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:40.823707 containerd[1563]: time="2025-10-13T05:56:40.823488952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wsm6v,Uid:106ebfaf-472b-4ba4-a2aa-42f20bb87a67,Namespace:kube-system,Attempt:0,}" Oct 13 05:56:40.823787 containerd[1563]: time="2025-10-13T05:56:40.823764831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95fxg,Uid:663831a9-2ae8-4c19-a75a-c1031b55c603,Namespace:kube-system,Attempt:0,}" Oct 13 05:56:40.932221 systemd-networkd[1475]: cali7d0ed16bbdd: Link UP Oct 13 05:56:40.932905 
systemd-networkd[1475]: cali7d0ed16bbdd: Gained carrier Oct 13 05:56:40.948802 containerd[1563]: 2025-10-13 05:56:40.864 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--95fxg-eth0 coredns-668d6bf9bc- kube-system 663831a9-2ae8-4c19-a75a-c1031b55c603 798 0 2025-10-13 05:56:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-95fxg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d0ed16bbdd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Namespace="kube-system" Pod="coredns-668d6bf9bc-95fxg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--95fxg-" Oct 13 05:56:40.948802 containerd[1563]: 2025-10-13 05:56:40.864 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Namespace="kube-system" Pod="coredns-668d6bf9bc-95fxg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" Oct 13 05:56:40.948802 containerd[1563]: 2025-10-13 05:56:40.891 [INFO][4460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" HandleID="k8s-pod-network.f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Workload="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.891 [INFO][4460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" HandleID="k8s-pod-network.f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" 
Workload="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b8550), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-95fxg", "timestamp":"2025-10-13 05:56:40.891012543 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.891 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.891 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.891 [INFO][4460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.897 [INFO][4460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" host="localhost" Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.900 [INFO][4460] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.903 [INFO][4460] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.906 [INFO][4460] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.908 [INFO][4460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:40.949255 containerd[1563]: 2025-10-13 05:56:40.908 [INFO][4460] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" host="localhost" Oct 13 05:56:40.949513 containerd[1563]: 2025-10-13 05:56:40.910 [INFO][4460] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201 Oct 13 05:56:40.949513 containerd[1563]: 2025-10-13 05:56:40.914 [INFO][4460] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" host="localhost" Oct 13 05:56:40.949513 containerd[1563]: 2025-10-13 05:56:40.918 [INFO][4460] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" host="localhost" Oct 13 05:56:40.949513 containerd[1563]: 2025-10-13 05:56:40.918 [INFO][4460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" host="localhost" Oct 13 05:56:40.949513 containerd[1563]: 2025-10-13 05:56:40.919 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:56:40.949513 containerd[1563]: 2025-10-13 05:56:40.919 [INFO][4460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" HandleID="k8s-pod-network.f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Workload="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" Oct 13 05:56:40.949638 containerd[1563]: 2025-10-13 05:56:40.925 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Namespace="kube-system" Pod="coredns-668d6bf9bc-95fxg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--95fxg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"663831a9-2ae8-4c19-a75a-c1031b55c603", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-95fxg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d0ed16bbdd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:40.949711 containerd[1563]: 2025-10-13 05:56:40.925 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Namespace="kube-system" Pod="coredns-668d6bf9bc-95fxg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" Oct 13 05:56:40.949711 containerd[1563]: 2025-10-13 05:56:40.925 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d0ed16bbdd ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Namespace="kube-system" Pod="coredns-668d6bf9bc-95fxg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" Oct 13 05:56:40.949711 containerd[1563]: 2025-10-13 05:56:40.935 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Namespace="kube-system" Pod="coredns-668d6bf9bc-95fxg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" Oct 13 05:56:40.949774 containerd[1563]: 2025-10-13 05:56:40.936 [INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Namespace="kube-system" Pod="coredns-668d6bf9bc-95fxg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--95fxg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"663831a9-2ae8-4c19-a75a-c1031b55c603", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201", Pod:"coredns-668d6bf9bc-95fxg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d0ed16bbdd", MAC:"fe:d7:8d:8f:ec:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:40.949774 containerd[1563]: 2025-10-13 05:56:40.945 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" Namespace="kube-system" Pod="coredns-668d6bf9bc-95fxg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--95fxg-eth0" Oct 13 05:56:40.959853 systemd-networkd[1475]: cali48c40ef61d0: Gained IPv6LL Oct 13 05:56:40.973285 containerd[1563]: time="2025-10-13T05:56:40.973250559Z" level=info msg="connecting to shim f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201" address="unix:///run/containerd/s/1d5bc646246437edab2faf704964887c135b3005ee036a8ee0425f780e5f2af8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:40.999575 systemd[1]: Started cri-containerd-f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201.scope - libcontainer container f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201. Oct 13 05:56:41.014177 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:41.030580 systemd-networkd[1475]: calif5240c50f2e: Link UP Oct 13 05:56:41.031399 systemd-networkd[1475]: calif5240c50f2e: Gained carrier Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:40.868 [INFO][4440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0 coredns-668d6bf9bc- kube-system 106ebfaf-472b-4ba4-a2aa-42f20bb87a67 806 0 2025-10-13 05:56:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wsm6v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif5240c50f2e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsm6v" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsm6v-" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:40.868 [INFO][4440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsm6v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:40.903 [INFO][4466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" HandleID="k8s-pod-network.118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Workload="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:40.903 [INFO][4466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" HandleID="k8s-pod-network.118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Workload="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000582810), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wsm6v", "timestamp":"2025-10-13 05:56:40.903564583 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:40.903 [INFO][4466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:40.918 [INFO][4466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:40.919 [INFO][4466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:40.998 [INFO][4466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" host="localhost" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.003 [INFO][4466] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.009 [INFO][4466] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.010 [INFO][4466] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.012 [INFO][4466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.012 [INFO][4466] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" host="localhost" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.013 [INFO][4466] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52 Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.016 [INFO][4466] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" host="localhost" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.022 [INFO][4466] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" host="localhost" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.022 [INFO][4466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" host="localhost" Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.022 [INFO][4466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:41.044824 containerd[1563]: 2025-10-13 05:56:41.023 [INFO][4466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" HandleID="k8s-pod-network.118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Workload="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" Oct 13 05:56:41.045364 containerd[1563]: 2025-10-13 05:56:41.027 [INFO][4440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsm6v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"106ebfaf-472b-4ba4-a2aa-42f20bb87a67", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wsm6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5240c50f2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:41.045364 containerd[1563]: 2025-10-13 05:56:41.027 [INFO][4440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsm6v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" Oct 13 05:56:41.045364 containerd[1563]: 2025-10-13 05:56:41.027 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5240c50f2e ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsm6v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" Oct 13 05:56:41.045364 containerd[1563]: 2025-10-13 05:56:41.030 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsm6v" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" Oct 13 05:56:41.045364 containerd[1563]: 2025-10-13 05:56:41.031 [INFO][4440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsm6v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"106ebfaf-472b-4ba4-a2aa-42f20bb87a67", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52", Pod:"coredns-668d6bf9bc-wsm6v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif5240c50f2e", MAC:"e2:ae:65:31:b9:1c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:41.045364 containerd[1563]: 2025-10-13 05:56:41.040 [INFO][4440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" Namespace="kube-system" Pod="coredns-668d6bf9bc-wsm6v" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wsm6v-eth0" Oct 13 05:56:41.051747 containerd[1563]: time="2025-10-13T05:56:41.051708741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95fxg,Uid:663831a9-2ae8-4c19-a75a-c1031b55c603,Namespace:kube-system,Attempt:0,} returns sandbox id \"f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201\"" Oct 13 05:56:41.052632 kubelet[2704]: E1013 05:56:41.052590 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:41.055497 containerd[1563]: time="2025-10-13T05:56:41.055080661Z" level=info msg="CreateContainer within sandbox \"f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:56:41.072129 containerd[1563]: time="2025-10-13T05:56:41.072054427Z" level=info msg="connecting to shim 118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52" address="unix:///run/containerd/s/eec02dbbbb7eadc3492b038066d317150de7cc6925cec167e72845ae1e701fee" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:41.077539 containerd[1563]: time="2025-10-13T05:56:41.077361663Z" level=info msg="Container e2f2e261ee5fe25ceb4bb14255194fe5dc811c1505804f903fdb8e64f2450427: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:41.084814 
containerd[1563]: time="2025-10-13T05:56:41.084794035Z" level=info msg="CreateContainer within sandbox \"f894d434b77170b56ac668f658b8d3bb588f0265311fe8897df556291832b201\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2f2e261ee5fe25ceb4bb14255194fe5dc811c1505804f903fdb8e64f2450427\"" Oct 13 05:56:41.085423 containerd[1563]: time="2025-10-13T05:56:41.085394344Z" level=info msg="StartContainer for \"e2f2e261ee5fe25ceb4bb14255194fe5dc811c1505804f903fdb8e64f2450427\"" Oct 13 05:56:41.086253 containerd[1563]: time="2025-10-13T05:56:41.086228492Z" level=info msg="connecting to shim e2f2e261ee5fe25ceb4bb14255194fe5dc811c1505804f903fdb8e64f2450427" address="unix:///run/containerd/s/1d5bc646246437edab2faf704964887c135b3005ee036a8ee0425f780e5f2af8" protocol=ttrpc version=3 Oct 13 05:56:41.106464 systemd[1]: Started cri-containerd-118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52.scope - libcontainer container 118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52. Oct 13 05:56:41.109867 systemd[1]: Started cri-containerd-e2f2e261ee5fe25ceb4bb14255194fe5dc811c1505804f903fdb8e64f2450427.scope - libcontainer container e2f2e261ee5fe25ceb4bb14255194fe5dc811c1505804f903fdb8e64f2450427. 
Oct 13 05:56:41.121580 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:41.150806 containerd[1563]: time="2025-10-13T05:56:41.150745189Z" level=info msg="StartContainer for \"e2f2e261ee5fe25ceb4bb14255194fe5dc811c1505804f903fdb8e64f2450427\" returns successfully" Oct 13 05:56:41.162168 containerd[1563]: time="2025-10-13T05:56:41.162121644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wsm6v,Uid:106ebfaf-472b-4ba4-a2aa-42f20bb87a67,Namespace:kube-system,Attempt:0,} returns sandbox id \"118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52\"" Oct 13 05:56:41.163515 kubelet[2704]: E1013 05:56:41.163486 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:41.166358 containerd[1563]: time="2025-10-13T05:56:41.166314575Z" level=info msg="CreateContainer within sandbox \"118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:56:41.178490 containerd[1563]: time="2025-10-13T05:56:41.178452373Z" level=info msg="Container 04a8ae9f4a7be0f801e7af714d62703a580598a6141b1106826511c62f0e4508: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:41.186302 containerd[1563]: time="2025-10-13T05:56:41.186250701Z" level=info msg="CreateContainer within sandbox \"118a60178a328a919d6a8b2621bb90da416f995b26fa7b1299551e444b22dc52\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04a8ae9f4a7be0f801e7af714d62703a580598a6141b1106826511c62f0e4508\"" Oct 13 05:56:41.187297 containerd[1563]: time="2025-10-13T05:56:41.187257344Z" level=info msg="StartContainer for \"04a8ae9f4a7be0f801e7af714d62703a580598a6141b1106826511c62f0e4508\"" Oct 13 05:56:41.188294 containerd[1563]: time="2025-10-13T05:56:41.188250470Z" level=info 
msg="connecting to shim 04a8ae9f4a7be0f801e7af714d62703a580598a6141b1106826511c62f0e4508" address="unix:///run/containerd/s/eec02dbbbb7eadc3492b038066d317150de7cc6925cec167e72845ae1e701fee" protocol=ttrpc version=3 Oct 13 05:56:41.220530 systemd[1]: Started cri-containerd-04a8ae9f4a7be0f801e7af714d62703a580598a6141b1106826511c62f0e4508.scope - libcontainer container 04a8ae9f4a7be0f801e7af714d62703a580598a6141b1106826511c62f0e4508. Oct 13 05:56:41.260809 containerd[1563]: time="2025-10-13T05:56:41.260775422Z" level=info msg="StartContainer for \"04a8ae9f4a7be0f801e7af714d62703a580598a6141b1106826511c62f0e4508\" returns successfully" Oct 13 05:56:41.534500 systemd-networkd[1475]: caliaa53a3b9462: Gained IPv6LL Oct 13 05:56:41.626172 kubelet[2704]: E1013 05:56:41.626121 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:41.629662 kubelet[2704]: E1013 05:56:41.629628 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:41.647356 kubelet[2704]: I1013 05:56:41.646435 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wsm6v" podStartSLOduration=36.646417644 podStartE2EDuration="36.646417644s" podCreationTimestamp="2025-10-13 05:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:41.636242106 +0000 UTC m=+41.897745545" watchObservedRunningTime="2025-10-13 05:56:41.646417644 +0000 UTC m=+41.907921073" Oct 13 05:56:41.662004 kubelet[2704]: I1013 05:56:41.661718 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-95fxg" podStartSLOduration=36.661703949 podStartE2EDuration="36.661703949s" 
podCreationTimestamp="2025-10-13 05:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:41.647262833 +0000 UTC m=+41.908766262" watchObservedRunningTime="2025-10-13 05:56:41.661703949 +0000 UTC m=+41.923207368" Oct 13 05:56:41.824954 containerd[1563]: time="2025-10-13T05:56:41.824664740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d4d94c744-wwchw,Uid:79e5bdb8-a9fd-46f0-813e-b805ce528b77,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:41.824954 containerd[1563]: time="2025-10-13T05:56:41.824697662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdfd954d7-h24nw,Uid:aa70c35a-019c-4f74-8ce2-7de70ff78eb2,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:41.825544 containerd[1563]: time="2025-10-13T05:56:41.825242846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdfd954d7-pnn9d,Uid:f3af9472-5fc6-435b-aad6-baa386269104,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:42.018817 systemd-networkd[1475]: cali151d31c1d9b: Link UP Oct 13 05:56:42.019624 systemd-networkd[1475]: cali151d31c1d9b: Gained carrier Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.944 [INFO][4665] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0 calico-kube-controllers-5d4d94c744- calico-system 79e5bdb8-a9fd-46f0-813e-b805ce528b77 805 0 2025-10-13 05:56:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d4d94c744 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d4d94c744-wwchw eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali151d31c1d9b [] [] }} ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Namespace="calico-system" Pod="calico-kube-controllers-5d4d94c744-wwchw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.944 [INFO][4665] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Namespace="calico-system" Pod="calico-kube-controllers-5d4d94c744-wwchw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.981 [INFO][4719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" HandleID="k8s-pod-network.ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Workload="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.981 [INFO][4719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" HandleID="k8s-pod-network.ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Workload="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d4d94c744-wwchw", "timestamp":"2025-10-13 05:56:41.981275654 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.981 [INFO][4719] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.981 [INFO][4719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.981 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.989 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" host="localhost" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.994 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:41.997 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:42.000 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:42.003 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:42.003 [INFO][4719] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" host="localhost" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:42.005 [INFO][4719] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1 Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:42.008 [INFO][4719] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" host="localhost" Oct 13 05:56:42.062716 
containerd[1563]: 2025-10-13 05:56:42.013 [INFO][4719] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" host="localhost" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:42.013 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" host="localhost" Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:42.013 [INFO][4719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:42.062716 containerd[1563]: 2025-10-13 05:56:42.013 [INFO][4719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" HandleID="k8s-pod-network.ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Workload="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" Oct 13 05:56:42.063541 containerd[1563]: 2025-10-13 05:56:42.016 [INFO][4665] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Namespace="calico-system" Pod="calico-kube-controllers-5d4d94c744-wwchw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0", GenerateName:"calico-kube-controllers-5d4d94c744-", Namespace:"calico-system", SelfLink:"", UID:"79e5bdb8-a9fd-46f0-813e-b805ce528b77", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d4d94c744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d4d94c744-wwchw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali151d31c1d9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:42.063541 containerd[1563]: 2025-10-13 05:56:42.016 [INFO][4665] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Namespace="calico-system" Pod="calico-kube-controllers-5d4d94c744-wwchw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" Oct 13 05:56:42.063541 containerd[1563]: 2025-10-13 05:56:42.016 [INFO][4665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali151d31c1d9b ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Namespace="calico-system" Pod="calico-kube-controllers-5d4d94c744-wwchw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" Oct 13 05:56:42.063541 containerd[1563]: 2025-10-13 05:56:42.019 [INFO][4665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" 
Namespace="calico-system" Pod="calico-kube-controllers-5d4d94c744-wwchw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" Oct 13 05:56:42.063541 containerd[1563]: 2025-10-13 05:56:42.020 [INFO][4665] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Namespace="calico-system" Pod="calico-kube-controllers-5d4d94c744-wwchw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0", GenerateName:"calico-kube-controllers-5d4d94c744-", Namespace:"calico-system", SelfLink:"", UID:"79e5bdb8-a9fd-46f0-813e-b805ce528b77", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d4d94c744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1", Pod:"calico-kube-controllers-5d4d94c744-wwchw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali151d31c1d9b", MAC:"5a:42:bc:84:a1:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:42.063541 containerd[1563]: 2025-10-13 05:56:42.058 [INFO][4665] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" Namespace="calico-system" Pod="calico-kube-controllers-5d4d94c744-wwchw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d4d94c744--wwchw-eth0" Oct 13 05:56:42.085395 containerd[1563]: time="2025-10-13T05:56:42.085235249Z" level=info msg="connecting to shim ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1" address="unix:///run/containerd/s/817b56bf739d2201981752542ffa0eb62200bd9b01fb501988f056a242a5d845" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:42.124468 systemd[1]: Started cri-containerd-ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1.scope - libcontainer container ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1. 
Oct 13 05:56:42.130377 systemd-networkd[1475]: calic909fa58f18: Link UP Oct 13 05:56:42.130588 systemd-networkd[1475]: calic909fa58f18: Gained carrier Oct 13 05:56:42.148354 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:41.939 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0 calico-apiserver-7cdfd954d7- calico-apiserver aa70c35a-019c-4f74-8ce2-7de70ff78eb2 801 0 2025-10-13 05:56:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cdfd954d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cdfd954d7-h24nw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic909fa58f18 [] [] }} ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-h24nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:41.939 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-h24nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.007 [INFO][4711] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" HandleID="k8s-pod-network.215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" 
Workload="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.007 [INFO][4711] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" HandleID="k8s-pod-network.215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Workload="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cdfd954d7-h24nw", "timestamp":"2025-10-13 05:56:42.007131014 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.007 [INFO][4711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.013 [INFO][4711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.013 [INFO][4711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.090 [INFO][4711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" host="localhost" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.098 [INFO][4711] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.102 [INFO][4711] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.104 [INFO][4711] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.106 [INFO][4711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.106 [INFO][4711] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" host="localhost" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.107 [INFO][4711] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295 Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.111 [INFO][4711] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" host="localhost" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.117 [INFO][4711] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" host="localhost" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.117 [INFO][4711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" host="localhost" Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.117 [INFO][4711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:42.148896 containerd[1563]: 2025-10-13 05:56:42.117 [INFO][4711] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" HandleID="k8s-pod-network.215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Workload="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" Oct 13 05:56:42.149408 containerd[1563]: 2025-10-13 05:56:42.124 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-h24nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0", GenerateName:"calico-apiserver-7cdfd954d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa70c35a-019c-4f74-8ce2-7de70ff78eb2", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdfd954d7", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cdfd954d7-h24nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic909fa58f18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:42.149408 containerd[1563]: 2025-10-13 05:56:42.126 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-h24nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" Oct 13 05:56:42.149408 containerd[1563]: 2025-10-13 05:56:42.126 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic909fa58f18 ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-h24nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" Oct 13 05:56:42.149408 containerd[1563]: 2025-10-13 05:56:42.130 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-h24nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" Oct 13 05:56:42.149408 containerd[1563]: 2025-10-13 05:56:42.133 [INFO][4678] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-h24nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0", GenerateName:"calico-apiserver-7cdfd954d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa70c35a-019c-4f74-8ce2-7de70ff78eb2", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdfd954d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295", Pod:"calico-apiserver-7cdfd954d7-h24nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic909fa58f18", MAC:"56:9f:94:d6:ca:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:42.149408 containerd[1563]: 2025-10-13 05:56:42.144 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-h24nw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--h24nw-eth0" Oct 13 05:56:42.255187 containerd[1563]: time="2025-10-13T05:56:42.255036438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d4d94c744-wwchw,Uid:79e5bdb8-a9fd-46f0-813e-b805ce528b77,Namespace:calico-system,Attempt:0,} returns sandbox id \"ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1\"" Oct 13 05:56:42.260776 systemd-networkd[1475]: cali781c64c7e00: Link UP Oct 13 05:56:42.264460 systemd-networkd[1475]: cali781c64c7e00: Gained carrier Oct 13 05:56:42.273655 containerd[1563]: time="2025-10-13T05:56:42.273611178Z" level=info msg="connecting to shim 215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295" address="unix:///run/containerd/s/b4c737a7ef59a3679062c1195267ff8e2d68444b9757966b1e69c5b46376fa2e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:41.968 [INFO][4683] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0 calico-apiserver-7cdfd954d7- calico-apiserver f3af9472-5fc6-435b-aad6-baa386269104 804 0 2025-10-13 05:56:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cdfd954d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cdfd954d7-pnn9d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali781c64c7e00 [] [] }} ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-pnn9d" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:41.968 [INFO][4683] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-pnn9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.007 [INFO][4727] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" HandleID="k8s-pod-network.0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Workload="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.007 [INFO][4727] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" HandleID="k8s-pod-network.0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Workload="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cdfd954d7-pnn9d", "timestamp":"2025-10-13 05:56:42.007098653 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.007 [INFO][4727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.118 [INFO][4727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.119 [INFO][4727] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.190 [INFO][4727] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" host="localhost" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.199 [INFO][4727] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.203 [INFO][4727] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.204 [INFO][4727] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.206 [INFO][4727] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.206 [INFO][4727] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" host="localhost" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.207 [INFO][4727] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107 Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.215 [INFO][4727] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" host="localhost" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.254 [INFO][4727] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" host="localhost" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.254 [INFO][4727] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" host="localhost" Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.254 [INFO][4727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:42.284820 containerd[1563]: 2025-10-13 05:56:42.254 [INFO][4727] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" HandleID="k8s-pod-network.0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Workload="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" Oct 13 05:56:42.285313 containerd[1563]: 2025-10-13 05:56:42.258 [INFO][4683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-pnn9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0", GenerateName:"calico-apiserver-7cdfd954d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3af9472-5fc6-435b-aad6-baa386269104", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdfd954d7", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cdfd954d7-pnn9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali781c64c7e00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:42.285313 containerd[1563]: 2025-10-13 05:56:42.258 [INFO][4683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-pnn9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" Oct 13 05:56:42.285313 containerd[1563]: 2025-10-13 05:56:42.258 [INFO][4683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali781c64c7e00 ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-pnn9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" Oct 13 05:56:42.285313 containerd[1563]: 2025-10-13 05:56:42.265 [INFO][4683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-pnn9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" Oct 13 05:56:42.285313 containerd[1563]: 2025-10-13 05:56:42.265 [INFO][4683] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-pnn9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0", GenerateName:"calico-apiserver-7cdfd954d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"f3af9472-5fc6-435b-aad6-baa386269104", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cdfd954d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107", Pod:"calico-apiserver-7cdfd954d7-pnn9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali781c64c7e00", MAC:"ba:17:b4:4c:19:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:42.285313 containerd[1563]: 2025-10-13 05:56:42.279 [INFO][4683] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" Namespace="calico-apiserver" Pod="calico-apiserver-7cdfd954d7-pnn9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cdfd954d7--pnn9d-eth0" Oct 13 05:56:42.309455 systemd[1]: Started cri-containerd-215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295.scope - libcontainer container 215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295. Oct 13 05:56:42.317069 containerd[1563]: time="2025-10-13T05:56:42.317005207Z" level=info msg="connecting to shim 0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107" address="unix:///run/containerd/s/d85014fcc5e069ce497f90910f9d013de2d5e153054a80cef1d5aaaaa94b76f1" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:42.326432 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:42.337457 systemd[1]: Started cri-containerd-0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107.scope - libcontainer container 0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107. 
Oct 13 05:56:42.356570 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:42.366681 containerd[1563]: time="2025-10-13T05:56:42.366638492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdfd954d7-h24nw,Uid:aa70c35a-019c-4f74-8ce2-7de70ff78eb2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295\"" Oct 13 05:56:42.392045 containerd[1563]: time="2025-10-13T05:56:42.392007142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cdfd954d7-pnn9d,Uid:f3af9472-5fc6-435b-aad6-baa386269104,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107\"" Oct 13 05:56:42.558477 systemd-networkd[1475]: cali7d0ed16bbdd: Gained IPv6LL Oct 13 05:56:42.651637 kubelet[2704]: E1013 05:56:42.651540 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:42.652092 kubelet[2704]: E1013 05:56:42.651978 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:42.815652 systemd-networkd[1475]: calif5240c50f2e: Gained IPv6LL Oct 13 05:56:43.092342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818457909.mount: Deactivated successfully. Oct 13 05:56:43.275949 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:43780.service - OpenSSH per-connection server daemon (10.0.0.1:43780). 
Oct 13 05:56:43.531182 containerd[1563]: time="2025-10-13T05:56:43.530837835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:43.533010 containerd[1563]: time="2025-10-13T05:56:43.531677712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:56:43.533747 containerd[1563]: time="2025-10-13T05:56:43.533710683Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:43.535809 containerd[1563]: time="2025-10-13T05:56:43.535767598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:43.536383 containerd[1563]: time="2025-10-13T05:56:43.536349001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.454088613s" Oct 13 05:56:43.536383 containerd[1563]: time="2025-10-13T05:56:43.536381702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:56:43.537365 containerd[1563]: time="2025-10-13T05:56:43.537345113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:56:43.538509 containerd[1563]: time="2025-10-13T05:56:43.538480907Z" level=info msg="CreateContainer within sandbox \"9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:56:43.548345 containerd[1563]: time="2025-10-13T05:56:43.546344004Z" level=info msg="Container 89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:43.555339 containerd[1563]: time="2025-10-13T05:56:43.555302519Z" level=info msg="CreateContainer within sandbox \"9b11d50b5d799d36efb09be062983b6cc248b8fabc217444aa2f46feb0a66fa8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671\"" Oct 13 05:56:43.556283 containerd[1563]: time="2025-10-13T05:56:43.556251763Z" level=info msg="StartContainer for \"89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671\"" Oct 13 05:56:43.557670 containerd[1563]: time="2025-10-13T05:56:43.557646873Z" level=info msg="connecting to shim 89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671" address="unix:///run/containerd/s/a8b7c3ebe58af2a10ec9e25535e6c1821dd3a5655f7a224cfb9f07835f99a382" protocol=ttrpc version=3 Oct 13 05:56:43.572831 sshd[4914]: Accepted publickey for core from 10.0.0.1 port 43780 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:56:43.575610 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:56:43.583453 systemd-networkd[1475]: calic909fa58f18: Gained IPv6LL Oct 13 05:56:43.586507 systemd[1]: Started cri-containerd-89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671.scope - libcontainer container 89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671. Oct 13 05:56:43.591194 systemd-logind[1545]: New session 9 of user core. Oct 13 05:56:43.601502 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 13 05:56:43.661602 kubelet[2704]: E1013 05:56:43.661573 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:43.663343 kubelet[2704]: E1013 05:56:43.662709 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:56:43.725316 containerd[1563]: time="2025-10-13T05:56:43.725174052Z" level=info msg="StartContainer for \"89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671\" returns successfully" Oct 13 05:56:43.798510 sshd[4940]: Connection closed by 10.0.0.1 port 43780 Oct 13 05:56:43.799940 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:43.804539 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:43780.service: Deactivated successfully. Oct 13 05:56:43.806937 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:56:43.807902 systemd-logind[1545]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:56:43.808942 systemd-logind[1545]: Removed session 9. 
Oct 13 05:56:43.838467 systemd-networkd[1475]: cali151d31c1d9b: Gained IPv6LL Oct 13 05:56:43.966533 systemd-networkd[1475]: cali781c64c7e00: Gained IPv6LL Oct 13 05:56:44.687184 kubelet[2704]: I1013 05:56:44.687124 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-7qjqx" podStartSLOduration=25.236304985 podStartE2EDuration="29.687110023s" podCreationTimestamp="2025-10-13 05:56:15 +0000 UTC" firstStartedPulling="2025-10-13 05:56:39.086377669 +0000 UTC m=+39.347881098" lastFinishedPulling="2025-10-13 05:56:43.537182707 +0000 UTC m=+43.798686136" observedRunningTime="2025-10-13 05:56:44.686955563 +0000 UTC m=+44.948459012" watchObservedRunningTime="2025-10-13 05:56:44.687110023 +0000 UTC m=+44.948613452" Oct 13 05:56:45.269983 containerd[1563]: time="2025-10-13T05:56:45.269934599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:45.270650 containerd[1563]: time="2025-10-13T05:56:45.270618264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:56:45.274084 containerd[1563]: time="2025-10-13T05:56:45.274050492Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:45.274763 containerd[1563]: time="2025-10-13T05:56:45.274730609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.737299224s" Oct 13 05:56:45.274763 containerd[1563]: time="2025-10-13T05:56:45.274768230Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:56:45.275074 containerd[1563]: time="2025-10-13T05:56:45.275050750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:45.275967 containerd[1563]: time="2025-10-13T05:56:45.275948448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:56:45.276885 containerd[1563]: time="2025-10-13T05:56:45.276838850Z" level=info msg="CreateContainer within sandbox \"f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:56:45.286719 containerd[1563]: time="2025-10-13T05:56:45.286687863Z" level=info msg="Container 3f66f816737676e340c1e9c6f68a312f9a3b21f8df9c14078a7ef930762a3ef7: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:45.295273 containerd[1563]: time="2025-10-13T05:56:45.295223379Z" level=info msg="CreateContainer within sandbox \"f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3f66f816737676e340c1e9c6f68a312f9a3b21f8df9c14078a7ef930762a3ef7\"" Oct 13 05:56:45.295730 containerd[1563]: time="2025-10-13T05:56:45.295623600Z" level=info msg="StartContainer for \"3f66f816737676e340c1e9c6f68a312f9a3b21f8df9c14078a7ef930762a3ef7\"" Oct 13 05:56:45.297009 containerd[1563]: time="2025-10-13T05:56:45.296984597Z" level=info msg="connecting to shim 3f66f816737676e340c1e9c6f68a312f9a3b21f8df9c14078a7ef930762a3ef7" address="unix:///run/containerd/s/4911261c4819b9dea65d5b1b5296489c9453ddf471d61e0547ab3d6727cdb604" protocol=ttrpc version=3 Oct 13 05:56:45.332460 systemd[1]: Started 
cri-containerd-3f66f816737676e340c1e9c6f68a312f9a3b21f8df9c14078a7ef930762a3ef7.scope - libcontainer container 3f66f816737676e340c1e9c6f68a312f9a3b21f8df9c14078a7ef930762a3ef7. Oct 13 05:56:45.418203 containerd[1563]: time="2025-10-13T05:56:45.418164177Z" level=info msg="StartContainer for \"3f66f816737676e340c1e9c6f68a312f9a3b21f8df9c14078a7ef930762a3ef7\" returns successfully" Oct 13 05:56:45.682063 kubelet[2704]: I1013 05:56:45.681968 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:47.836024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177466512.mount: Deactivated successfully. Oct 13 05:56:47.857046 containerd[1563]: time="2025-10-13T05:56:47.857004285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:47.857718 containerd[1563]: time="2025-10-13T05:56:47.857679092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:56:47.863658 containerd[1563]: time="2025-10-13T05:56:47.863600494Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:47.865732 containerd[1563]: time="2025-10-13T05:56:47.865699597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:47.866281 containerd[1563]: time="2025-10-13T05:56:47.866249991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.590279712s" Oct 13 05:56:47.866281 containerd[1563]: time="2025-10-13T05:56:47.866278825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:56:47.867472 containerd[1563]: time="2025-10-13T05:56:47.867434976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:56:47.868565 containerd[1563]: time="2025-10-13T05:56:47.868540973Z" level=info msg="CreateContainer within sandbox \"994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:56:47.876116 containerd[1563]: time="2025-10-13T05:56:47.876086406Z" level=info msg="Container 07a650f7da4fc94eb278b41474c7c86effa84c588bb5d023124c4ecab5901f75: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:47.885793 containerd[1563]: time="2025-10-13T05:56:47.885755947Z" level=info msg="CreateContainer within sandbox \"994b184296ae357eb4e8e45cdc0563d0b25179888ef7cbbb95df2fabd5d83091\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"07a650f7da4fc94eb278b41474c7c86effa84c588bb5d023124c4ecab5901f75\"" Oct 13 05:56:47.886253 containerd[1563]: time="2025-10-13T05:56:47.886209099Z" level=info msg="StartContainer for \"07a650f7da4fc94eb278b41474c7c86effa84c588bb5d023124c4ecab5901f75\"" Oct 13 05:56:47.888886 containerd[1563]: time="2025-10-13T05:56:47.887861612Z" level=info msg="connecting to shim 07a650f7da4fc94eb278b41474c7c86effa84c588bb5d023124c4ecab5901f75" address="unix:///run/containerd/s/1edc2613c2f24f7e0118cddf534dfa043bd606f25a15fea2119e06b993fce0eb" protocol=ttrpc version=3 Oct 13 05:56:47.936456 systemd[1]: Started 
cri-containerd-07a650f7da4fc94eb278b41474c7c86effa84c588bb5d023124c4ecab5901f75.scope - libcontainer container 07a650f7da4fc94eb278b41474c7c86effa84c588bb5d023124c4ecab5901f75. Oct 13 05:56:48.211845 containerd[1563]: time="2025-10-13T05:56:48.211743680Z" level=info msg="StartContainer for \"07a650f7da4fc94eb278b41474c7c86effa84c588bb5d023124c4ecab5901f75\" returns successfully" Oct 13 05:56:48.410761 kubelet[2704]: I1013 05:56:48.410719 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:48.494746 containerd[1563]: time="2025-10-13T05:56:48.494691928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671\" id:\"9360e064a629225197f9434801a735f65214b3977ec1169f81f93c194bff2058\" pid:5067 exited_at:{seconds:1760335008 nanos:494154450}" Oct 13 05:56:48.576216 containerd[1563]: time="2025-10-13T05:56:48.576174106Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671\" id:\"fc93fe9e71a0de95ff6396c278de92018b5002068705692c3c145e946f60b2d0\" pid:5092 exited_at:{seconds:1760335008 nanos:575368763}" Oct 13 05:56:48.699426 kubelet[2704]: I1013 05:56:48.699344 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-65bdc5dc55-n46kc" podStartSLOduration=3.025959436 podStartE2EDuration="12.69931538s" podCreationTimestamp="2025-10-13 05:56:36 +0000 UTC" firstStartedPulling="2025-10-13 05:56:38.193931475 +0000 UTC m=+38.455434904" lastFinishedPulling="2025-10-13 05:56:47.867287419 +0000 UTC m=+48.128790848" observedRunningTime="2025-10-13 05:56:48.698741663 +0000 UTC m=+48.960245092" watchObservedRunningTime="2025-10-13 05:56:48.69931538 +0000 UTC m=+48.960818809" Oct 13 05:56:48.812023 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:43792.service - OpenSSH per-connection server daemon (10.0.0.1:43792). 
Oct 13 05:56:48.866493 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 43792 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:56:48.867754 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:56:48.871858 systemd-logind[1545]: New session 10 of user core. Oct 13 05:56:48.880486 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:56:48.998106 sshd[5113]: Connection closed by 10.0.0.1 port 43792 Oct 13 05:56:48.998471 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:49.008062 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:43792.service: Deactivated successfully. Oct 13 05:56:49.010051 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:56:49.010939 systemd-logind[1545]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:56:49.013652 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:43804.service - OpenSSH per-connection server daemon (10.0.0.1:43804). Oct 13 05:56:49.014470 systemd-logind[1545]: Removed session 10. Oct 13 05:56:49.062338 sshd[5127]: Accepted publickey for core from 10.0.0.1 port 43804 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:56:49.064065 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:56:49.068513 systemd-logind[1545]: New session 11 of user core. Oct 13 05:56:49.079477 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:56:49.232814 sshd[5130]: Connection closed by 10.0.0.1 port 43804 Oct 13 05:56:49.234477 sshd-session[5127]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:49.251701 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:43804.service: Deactivated successfully. Oct 13 05:56:49.253943 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:56:49.259704 systemd-logind[1545]: Session 11 logged out. Waiting for processes to exit. 
Oct 13 05:56:49.267371 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:43812.service - OpenSSH per-connection server daemon (10.0.0.1:43812). Oct 13 05:56:49.268934 systemd-logind[1545]: Removed session 11. Oct 13 05:56:49.322354 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 43812 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:56:49.322829 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:56:49.328589 systemd-logind[1545]: New session 12 of user core. Oct 13 05:56:49.337598 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:56:49.794755 sshd[5151]: Connection closed by 10.0.0.1 port 43812 Oct 13 05:56:49.795222 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:49.800499 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:43812.service: Deactivated successfully. Oct 13 05:56:49.802578 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:56:49.803313 systemd-logind[1545]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:56:49.804397 systemd-logind[1545]: Removed session 12. 
Oct 13 05:56:50.588496 containerd[1563]: time="2025-10-13T05:56:50.588439069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:50.589129 containerd[1563]: time="2025-10-13T05:56:50.589065736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:56:50.590380 containerd[1563]: time="2025-10-13T05:56:50.590348705Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:50.592147 containerd[1563]: time="2025-10-13T05:56:50.592112036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:50.592640 containerd[1563]: time="2025-10-13T05:56:50.592601174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.725135901s" Oct 13 05:56:50.592640 containerd[1563]: time="2025-10-13T05:56:50.592637543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:56:50.593730 containerd[1563]: time="2025-10-13T05:56:50.593653881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:56:50.600809 containerd[1563]: time="2025-10-13T05:56:50.600776675Z" level=info msg="CreateContainer within sandbox 
\"ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:56:50.608326 containerd[1563]: time="2025-10-13T05:56:50.608295333Z" level=info msg="Container b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:50.615365 containerd[1563]: time="2025-10-13T05:56:50.615325944Z" level=info msg="CreateContainer within sandbox \"ddd0bcd0eb99e61f7ce99f564f7b25946af462812ba7de0a3dad81c94f4083b1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d\"" Oct 13 05:56:50.616952 containerd[1563]: time="2025-10-13T05:56:50.615764978Z" level=info msg="StartContainer for \"b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d\"" Oct 13 05:56:50.616952 containerd[1563]: time="2025-10-13T05:56:50.616938682Z" level=info msg="connecting to shim b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d" address="unix:///run/containerd/s/817b56bf739d2201981752542ffa0eb62200bd9b01fb501988f056a242a5d845" protocol=ttrpc version=3 Oct 13 05:56:50.637484 systemd[1]: Started cri-containerd-b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d.scope - libcontainer container b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d. 
Oct 13 05:56:50.681939 containerd[1563]: time="2025-10-13T05:56:50.681900359Z" level=info msg="StartContainer for \"b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d\" returns successfully" Oct 13 05:56:50.706835 kubelet[2704]: I1013 05:56:50.706752 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d4d94c744-wwchw" podStartSLOduration=27.370264021 podStartE2EDuration="35.706737996s" podCreationTimestamp="2025-10-13 05:56:15 +0000 UTC" firstStartedPulling="2025-10-13 05:56:42.256975722 +0000 UTC m=+42.518479151" lastFinishedPulling="2025-10-13 05:56:50.593449707 +0000 UTC m=+50.854953126" observedRunningTime="2025-10-13 05:56:50.705950137 +0000 UTC m=+50.967453566" watchObservedRunningTime="2025-10-13 05:56:50.706737996 +0000 UTC m=+50.968241425" Oct 13 05:56:50.755208 containerd[1563]: time="2025-10-13T05:56:50.755166029Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d\" id:\"ed9df774352f409ed1945bbf5f71d0968c73f6dd4accd5e1007cf06d2c697432\" pid:5227 exit_status:1 exited_at:{seconds:1760335010 nanos:754226053}" Oct 13 05:56:51.734774 containerd[1563]: time="2025-10-13T05:56:51.734733966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d\" id:\"22c247eace4928161b1a5bb9d92d280e17ea759a4701006c38961e0762cff710\" pid:5252 exited_at:{seconds:1760335011 nanos:734485268}" Oct 13 05:56:53.642475 containerd[1563]: time="2025-10-13T05:56:53.642422231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:53.643131 containerd[1563]: time="2025-10-13T05:56:53.643101146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:56:53.644339 containerd[1563]: 
time="2025-10-13T05:56:53.644287763Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:53.646579 containerd[1563]: time="2025-10-13T05:56:53.646537286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:53.647364 containerd[1563]: time="2025-10-13T05:56:53.647304717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.053613065s" Oct 13 05:56:53.647364 containerd[1563]: time="2025-10-13T05:56:53.647358979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:56:53.648460 containerd[1563]: time="2025-10-13T05:56:53.648189027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:56:53.649278 containerd[1563]: time="2025-10-13T05:56:53.649251561Z" level=info msg="CreateContainer within sandbox \"215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:56:53.656820 containerd[1563]: time="2025-10-13T05:56:53.656785965Z" level=info msg="Container 582c43161f33bfc872687524b1ab52d605259433fed8f9e77ab11ed91d771d50: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:53.666727 containerd[1563]: time="2025-10-13T05:56:53.666689927Z" level=info msg="CreateContainer within sandbox 
\"215e8354138d75c3f98fa1dfdfdffb79e3e1bd153722bd5cd06d7b37f657c295\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"582c43161f33bfc872687524b1ab52d605259433fed8f9e77ab11ed91d771d50\"" Oct 13 05:56:53.667156 containerd[1563]: time="2025-10-13T05:56:53.667133159Z" level=info msg="StartContainer for \"582c43161f33bfc872687524b1ab52d605259433fed8f9e77ab11ed91d771d50\"" Oct 13 05:56:53.668082 containerd[1563]: time="2025-10-13T05:56:53.668056713Z" level=info msg="connecting to shim 582c43161f33bfc872687524b1ab52d605259433fed8f9e77ab11ed91d771d50" address="unix:///run/containerd/s/b4c737a7ef59a3679062c1195267ff8e2d68444b9757966b1e69c5b46376fa2e" protocol=ttrpc version=3 Oct 13 05:56:53.696472 systemd[1]: Started cri-containerd-582c43161f33bfc872687524b1ab52d605259433fed8f9e77ab11ed91d771d50.scope - libcontainer container 582c43161f33bfc872687524b1ab52d605259433fed8f9e77ab11ed91d771d50. Oct 13 05:56:53.740569 containerd[1563]: time="2025-10-13T05:56:53.740536642Z" level=info msg="StartContainer for \"582c43161f33bfc872687524b1ab52d605259433fed8f9e77ab11ed91d771d50\" returns successfully" Oct 13 05:56:54.013067 containerd[1563]: time="2025-10-13T05:56:54.013020649Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:54.014157 containerd[1563]: time="2025-10-13T05:56:54.013716194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:56:54.015391 containerd[1563]: time="2025-10-13T05:56:54.015371071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 367.153049ms" 
Oct 13 05:56:54.015446 containerd[1563]: time="2025-10-13T05:56:54.015393974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:56:54.016635 containerd[1563]: time="2025-10-13T05:56:54.016463120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:56:54.017757 containerd[1563]: time="2025-10-13T05:56:54.017737923Z" level=info msg="CreateContainer within sandbox \"0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:56:54.030143 containerd[1563]: time="2025-10-13T05:56:54.029549093Z" level=info msg="Container 35c6150d04b9425f79a5173b22e5bc5f5d18005835ab80a87e8d2779f6e0c7d5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:54.035341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4130277573.mount: Deactivated successfully. 
Oct 13 05:56:54.039447 containerd[1563]: time="2025-10-13T05:56:54.039410534Z" level=info msg="CreateContainer within sandbox \"0179c4b2c914a825208cab36df0778a60335c3b93f68907ffb8a1f62cc039107\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"35c6150d04b9425f79a5173b22e5bc5f5d18005835ab80a87e8d2779f6e0c7d5\"" Oct 13 05:56:54.040107 containerd[1563]: time="2025-10-13T05:56:54.040085942Z" level=info msg="StartContainer for \"35c6150d04b9425f79a5173b22e5bc5f5d18005835ab80a87e8d2779f6e0c7d5\"" Oct 13 05:56:54.041232 containerd[1563]: time="2025-10-13T05:56:54.041212967Z" level=info msg="connecting to shim 35c6150d04b9425f79a5173b22e5bc5f5d18005835ab80a87e8d2779f6e0c7d5" address="unix:///run/containerd/s/d85014fcc5e069ce497f90910f9d013de2d5e153054a80cef1d5aaaaa94b76f1" protocol=ttrpc version=3 Oct 13 05:56:54.064469 systemd[1]: Started cri-containerd-35c6150d04b9425f79a5173b22e5bc5f5d18005835ab80a87e8d2779f6e0c7d5.scope - libcontainer container 35c6150d04b9425f79a5173b22e5bc5f5d18005835ab80a87e8d2779f6e0c7d5. 
Oct 13 05:56:54.108669 containerd[1563]: time="2025-10-13T05:56:54.108632528Z" level=info msg="StartContainer for \"35c6150d04b9425f79a5173b22e5bc5f5d18005835ab80a87e8d2779f6e0c7d5\" returns successfully" Oct 13 05:56:54.771211 kubelet[2704]: I1013 05:56:54.769174 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cdfd954d7-h24nw" podStartSLOduration=30.489204635 podStartE2EDuration="41.769153059s" podCreationTimestamp="2025-10-13 05:56:13 +0000 UTC" firstStartedPulling="2025-10-13 05:56:42.368082415 +0000 UTC m=+42.629585844" lastFinishedPulling="2025-10-13 05:56:53.648030839 +0000 UTC m=+53.909534268" observedRunningTime="2025-10-13 05:56:54.751591564 +0000 UTC m=+55.013094993" watchObservedRunningTime="2025-10-13 05:56:54.769153059 +0000 UTC m=+55.030656478" Oct 13 05:56:54.771211 kubelet[2704]: I1013 05:56:54.769997 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cdfd954d7-pnn9d" podStartSLOduration=30.147206553 podStartE2EDuration="41.769966495s" podCreationTimestamp="2025-10-13 05:56:13 +0000 UTC" firstStartedPulling="2025-10-13 05:56:42.393300532 +0000 UTC m=+42.654803951" lastFinishedPulling="2025-10-13 05:56:54.016060474 +0000 UTC m=+54.277563893" observedRunningTime="2025-10-13 05:56:54.759092254 +0000 UTC m=+55.020595683" watchObservedRunningTime="2025-10-13 05:56:54.769966495 +0000 UTC m=+55.031469914" Oct 13 05:56:54.808001 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:51074.service - OpenSSH per-connection server daemon (10.0.0.1:51074). Oct 13 05:56:54.878127 sshd[5352]: Accepted publickey for core from 10.0.0.1 port 51074 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:56:54.879784 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:56:54.885832 systemd-logind[1545]: New session 13 of user core. 
Oct 13 05:56:54.892563 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:56:55.034519 sshd[5357]: Connection closed by 10.0.0.1 port 51074 Oct 13 05:56:55.034781 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:55.038547 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:51074.service: Deactivated successfully. Oct 13 05:56:55.040625 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:56:55.042274 systemd-logind[1545]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:56:55.043649 systemd-logind[1545]: Removed session 13. Oct 13 05:56:55.734795 kubelet[2704]: I1013 05:56:55.734767 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:56.099056 containerd[1563]: time="2025-10-13T05:56:56.098916117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:56.099700 containerd[1563]: time="2025-10-13T05:56:56.099677787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:56:56.100811 containerd[1563]: time="2025-10-13T05:56:56.100781228Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:56.102740 containerd[1563]: time="2025-10-13T05:56:56.102709337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:56.103306 containerd[1563]: time="2025-10-13T05:56:56.103263447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", 
repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.086778225s" Oct 13 05:56:56.103306 containerd[1563]: time="2025-10-13T05:56:56.103302540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:56:56.121348 containerd[1563]: time="2025-10-13T05:56:56.121306470Z" level=info msg="CreateContainer within sandbox \"f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:56:56.133347 containerd[1563]: time="2025-10-13T05:56:56.132805311Z" level=info msg="Container ddc2830f1a1a6a5e29792161e7120bbfd3ddad95003e8c9b3c59cb90d026e5c7: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:56.141323 containerd[1563]: time="2025-10-13T05:56:56.141286869Z" level=info msg="CreateContainer within sandbox \"f2bc353c9ef3d68e86d3b09dca2a0515c91a8d3cb10715aea5ac5a7e72bba28a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ddc2830f1a1a6a5e29792161e7120bbfd3ddad95003e8c9b3c59cb90d026e5c7\"" Oct 13 05:56:56.142346 containerd[1563]: time="2025-10-13T05:56:56.141731744Z" level=info msg="StartContainer for \"ddc2830f1a1a6a5e29792161e7120bbfd3ddad95003e8c9b3c59cb90d026e5c7\"" Oct 13 05:56:56.143208 containerd[1563]: time="2025-10-13T05:56:56.143177258Z" level=info msg="connecting to shim ddc2830f1a1a6a5e29792161e7120bbfd3ddad95003e8c9b3c59cb90d026e5c7" address="unix:///run/containerd/s/4911261c4819b9dea65d5b1b5296489c9453ddf471d61e0547ab3d6727cdb604" protocol=ttrpc version=3 Oct 13 05:56:56.171451 systemd[1]: Started cri-containerd-ddc2830f1a1a6a5e29792161e7120bbfd3ddad95003e8c9b3c59cb90d026e5c7.scope - libcontainer container 
ddc2830f1a1a6a5e29792161e7120bbfd3ddad95003e8c9b3c59cb90d026e5c7. Oct 13 05:56:56.295803 containerd[1563]: time="2025-10-13T05:56:56.295765086Z" level=info msg="StartContainer for \"ddc2830f1a1a6a5e29792161e7120bbfd3ddad95003e8c9b3c59cb90d026e5c7\" returns successfully" Oct 13 05:56:56.765822 kubelet[2704]: I1013 05:56:56.765761 2704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6q48s" podStartSLOduration=25.69275805 podStartE2EDuration="41.765744031s" podCreationTimestamp="2025-10-13 05:56:15 +0000 UTC" firstStartedPulling="2025-10-13 05:56:40.036516436 +0000 UTC m=+40.298019865" lastFinishedPulling="2025-10-13 05:56:56.109502417 +0000 UTC m=+56.371005846" observedRunningTime="2025-10-13 05:56:56.765480045 +0000 UTC m=+57.026983474" watchObservedRunningTime="2025-10-13 05:56:56.765744031 +0000 UTC m=+57.027247460" Oct 13 05:56:56.888717 kubelet[2704]: I1013 05:56:56.888690 2704 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:56:56.888797 kubelet[2704]: I1013 05:56:56.888732 2704 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:56:57.147199 kubelet[2704]: I1013 05:56:57.147092 2704 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:57:00.051000 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:51090.service - OpenSSH per-connection server daemon (10.0.0.1:51090). Oct 13 05:57:00.126910 sshd[5422]: Accepted publickey for core from 10.0.0.1 port 51090 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:00.128211 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:00.132415 systemd-logind[1545]: New session 14 of user core. 
Oct 13 05:57:00.141453 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:57:00.261617 sshd[5425]: Connection closed by 10.0.0.1 port 51090 Oct 13 05:57:00.261943 sshd-session[5422]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:00.266444 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:51090.service: Deactivated successfully. Oct 13 05:57:00.268496 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:57:00.269239 systemd-logind[1545]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:57:00.270371 systemd-logind[1545]: Removed session 14. Oct 13 05:57:05.275113 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:50544.service - OpenSSH per-connection server daemon (10.0.0.1:50544). Oct 13 05:57:05.326168 sshd[5440]: Accepted publickey for core from 10.0.0.1 port 50544 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:05.327395 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:05.331663 systemd-logind[1545]: New session 15 of user core. Oct 13 05:57:05.341476 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:57:05.454429 sshd[5445]: Connection closed by 10.0.0.1 port 50544 Oct 13 05:57:05.454789 sshd-session[5440]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:05.459530 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:50544.service: Deactivated successfully. Oct 13 05:57:05.461752 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:57:05.462618 systemd-logind[1545]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:57:05.463796 systemd-logind[1545]: Removed session 15. 
Oct 13 05:57:08.693911 containerd[1563]: time="2025-10-13T05:57:08.693866662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f64df114644bf4feeef3f0fabb43d1b5fc2ad9aee4fbc4799e52c3ceab293a8b\" id:\"94655a70326b7ee66d955c9e3ffd28c2b8a75338a5a7d59251a502bdbf8815a9\" pid:5471 exited_at:{seconds:1760335028 nanos:693544697}" Oct 13 05:57:10.478061 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:50552.service - OpenSSH per-connection server daemon (10.0.0.1:50552). Oct 13 05:57:10.528369 sshd[5486]: Accepted publickey for core from 10.0.0.1 port 50552 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:10.529688 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:10.533442 systemd-logind[1545]: New session 16 of user core. Oct 13 05:57:10.545460 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:57:10.655860 sshd[5489]: Connection closed by 10.0.0.1 port 50552 Oct 13 05:57:10.656183 sshd-session[5486]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:10.672077 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:50552.service: Deactivated successfully. Oct 13 05:57:10.673987 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:57:10.674830 systemd-logind[1545]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:57:10.677563 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:50564.service - OpenSSH per-connection server daemon (10.0.0.1:50564). Oct 13 05:57:10.678239 systemd-logind[1545]: Removed session 16. Oct 13 05:57:10.723546 sshd[5502]: Accepted publickey for core from 10.0.0.1 port 50564 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:10.725104 sshd-session[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:10.729303 systemd-logind[1545]: New session 17 of user core. 
Oct 13 05:57:10.739456 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 05:57:10.932958 sshd[5505]: Connection closed by 10.0.0.1 port 50564 Oct 13 05:57:10.933455 sshd-session[5502]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:10.948136 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:50564.service: Deactivated successfully. Oct 13 05:57:10.949974 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:57:10.950824 systemd-logind[1545]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:57:10.953584 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:50578.service - OpenSSH per-connection server daemon (10.0.0.1:50578). Oct 13 05:57:10.954269 systemd-logind[1545]: Removed session 17. Oct 13 05:57:11.018315 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 50578 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:11.019547 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:11.023681 systemd-logind[1545]: New session 18 of user core. Oct 13 05:57:11.042455 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:57:11.611294 sshd[5520]: Connection closed by 10.0.0.1 port 50578 Oct 13 05:57:11.612235 sshd-session[5517]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:11.628916 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:33440.service - OpenSSH per-connection server daemon (10.0.0.1:33440). Oct 13 05:57:11.632942 systemd-logind[1545]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:57:11.633105 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:50578.service: Deactivated successfully. Oct 13 05:57:11.635179 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:57:11.637241 systemd-logind[1545]: Removed session 18. 
Oct 13 05:57:11.685653 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 33440 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:11.686885 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:11.691360 systemd-logind[1545]: New session 19 of user core. Oct 13 05:57:11.699484 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 05:57:11.949560 sshd[5543]: Connection closed by 10.0.0.1 port 33440 Oct 13 05:57:11.950280 sshd-session[5537]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:11.962467 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:33440.service: Deactivated successfully. Oct 13 05:57:11.964729 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:57:11.965566 systemd-logind[1545]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:57:11.969158 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:33448.service - OpenSSH per-connection server daemon (10.0.0.1:33448). Oct 13 05:57:11.969981 systemd-logind[1545]: Removed session 19. Oct 13 05:57:12.017416 sshd[5554]: Accepted publickey for core from 10.0.0.1 port 33448 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:12.018970 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:12.031143 systemd-logind[1545]: New session 20 of user core. Oct 13 05:57:12.037521 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 13 05:57:12.151476 sshd[5557]: Connection closed by 10.0.0.1 port 33448 Oct 13 05:57:12.151845 sshd-session[5554]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:12.156767 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:33448.service: Deactivated successfully. Oct 13 05:57:12.159159 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 05:57:12.160116 systemd-logind[1545]: Session 20 logged out. Waiting for processes to exit. 
Oct 13 05:57:12.161458 systemd-logind[1545]: Removed session 20. Oct 13 05:57:17.168023 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:33460.service - OpenSSH per-connection server daemon (10.0.0.1:33460). Oct 13 05:57:17.239155 sshd[5572]: Accepted publickey for core from 10.0.0.1 port 33460 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:17.241195 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:17.247855 systemd-logind[1545]: New session 21 of user core. Oct 13 05:57:17.255463 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 05:57:17.394538 sshd[5575]: Connection closed by 10.0.0.1 port 33460 Oct 13 05:57:17.394895 sshd-session[5572]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:17.399543 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:33460.service: Deactivated successfully. Oct 13 05:57:17.401647 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 05:57:17.402409 systemd-logind[1545]: Session 21 logged out. Waiting for processes to exit. Oct 13 05:57:17.403596 systemd-logind[1545]: Removed session 21. 
Oct 13 05:57:18.587426 containerd[1563]: time="2025-10-13T05:57:18.587379987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89244607ba2a85f178cabdafdbc7bfaeda48b011aee07f43c3b055112a59d671\" id:\"6ff258787747a858f200c5310b157ca350e07bcffb7338e6af89b75e1baedc95\" pid:5599 exited_at:{seconds:1760335038 nanos:587046987}" Oct 13 05:57:18.824170 kubelet[2704]: E1013 05:57:18.824138 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:57:18.824645 kubelet[2704]: E1013 05:57:18.824287 2704 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:57:21.739876 containerd[1563]: time="2025-10-13T05:57:21.739834160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b90c22fe3e8c96808242db73e2ead5cfe13d8db1723c294837fdcbde8313b58d\" id:\"c1c4811d2d8c856df123f227bbf96d775bd35f846e4431a04055074521017fea\" pid:5629 exited_at:{seconds:1760335041 nanos:739568610}" Oct 13 05:57:22.407089 systemd[1]: Started sshd@21-10.0.0.151:22-10.0.0.1:42242.service - OpenSSH per-connection server daemon (10.0.0.1:42242). Oct 13 05:57:22.461086 sshd[5643]: Accepted publickey for core from 10.0.0.1 port 42242 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:22.462294 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:22.466358 systemd-logind[1545]: New session 22 of user core. Oct 13 05:57:22.474453 systemd[1]: Started session-22.scope - Session 22 of User core. 
Oct 13 05:57:22.610522 sshd[5646]: Connection closed by 10.0.0.1 port 42242 Oct 13 05:57:22.610842 sshd-session[5643]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:22.614603 systemd[1]: sshd@21-10.0.0.151:22-10.0.0.1:42242.service: Deactivated successfully. Oct 13 05:57:22.617047 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 05:57:22.618592 systemd-logind[1545]: Session 22 logged out. Waiting for processes to exit. Oct 13 05:57:22.620408 systemd-logind[1545]: Removed session 22. Oct 13 05:57:27.624019 systemd[1]: Started sshd@22-10.0.0.151:22-10.0.0.1:42252.service - OpenSSH per-connection server daemon (10.0.0.1:42252). Oct 13 05:57:27.668721 sshd[5659]: Accepted publickey for core from 10.0.0.1 port 42252 ssh2: RSA SHA256:BqmTwFAF1TZ7rFNjtYQr76YVYWLR4pW5wdeW8GgMCQQ Oct 13 05:57:27.670541 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:27.675416 systemd-logind[1545]: New session 23 of user core. Oct 13 05:57:27.683472 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 13 05:57:27.844435 sshd[5662]: Connection closed by 10.0.0.1 port 42252 Oct 13 05:57:27.844744 sshd-session[5659]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:27.849189 systemd[1]: sshd@22-10.0.0.151:22-10.0.0.1:42252.service: Deactivated successfully. Oct 13 05:57:27.851353 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 05:57:27.852107 systemd-logind[1545]: Session 23 logged out. Waiting for processes to exit. Oct 13 05:57:27.853588 systemd-logind[1545]: Removed session 23.