Oct 13 05:25:29.671045 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025 Oct 13 05:25:29.671083 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:25:29.671099 kernel: BIOS-provided physical RAM map: Oct 13 05:25:29.671109 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Oct 13 05:25:29.671119 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Oct 13 05:25:29.671128 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Oct 13 05:25:29.671140 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Oct 13 05:25:29.671150 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Oct 13 05:25:29.671162 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Oct 13 05:25:29.671172 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Oct 13 05:25:29.671185 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Oct 13 05:25:29.671195 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Oct 13 05:25:29.671204 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Oct 13 05:25:29.671215 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Oct 13 05:25:29.671227 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Oct 13 05:25:29.671240 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Oct 13 05:25:29.671253 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 13 05:25:29.671264 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 13 05:25:29.671274 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 13 05:25:29.671285 kernel: NX (Execute Disable) protection: active Oct 13 05:25:29.671295 kernel: APIC: Static calls initialized Oct 13 05:25:29.671306 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Oct 13 05:25:29.671317 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Oct 13 05:25:29.671327 kernel: extended physical RAM map: Oct 13 05:25:29.671340 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Oct 13 05:25:29.671351 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Oct 13 05:25:29.671362 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Oct 13 05:25:29.671373 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Oct 13 05:25:29.671383 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Oct 13 05:25:29.671394 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Oct 13 05:25:29.671404 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Oct 13 05:25:29.671414 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Oct 13 05:25:29.671425 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Oct 13 05:25:29.671435 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved Oct 13 05:25:29.671446 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Oct 13 05:25:29.671459 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Oct 13 05:25:29.671470 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Oct 13 05:25:29.671480 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Oct 13 05:25:29.671491 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Oct 13 05:25:29.671502 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Oct 13 05:25:29.671517 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Oct 13 05:25:29.671530 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 13 05:25:29.671541 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 13 05:25:29.671552 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 13 05:25:29.671563 kernel: efi: EFI v2.7 by EDK II Oct 13 05:25:29.671575 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Oct 13 05:25:29.671586 kernel: random: crng init done Oct 13 05:25:29.671604 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Oct 13 05:25:29.671615 kernel: secureboot: Secure boot enabled Oct 13 05:25:29.671629 kernel: SMBIOS 2.8 present. Oct 13 05:25:29.671641 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Oct 13 05:25:29.671652 kernel: DMI: Memory slots populated: 1/1 Oct 13 05:25:29.671663 kernel: Hypervisor detected: KVM Oct 13 05:25:29.671674 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 13 05:25:29.671685 kernel: kvm-clock: using sched offset of 6227564519 cycles Oct 13 05:25:29.671698 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 13 05:25:29.671710 kernel: tsc: Detected 2794.746 MHz processor Oct 13 05:25:29.671721 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 13 05:25:29.671736 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 13 05:25:29.671747 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Oct 13 05:25:29.671759 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 13 05:25:29.671790 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 13 05:25:29.671802 kernel: Using GB pages for direct mapping Oct 13 05:25:29.671816 kernel: ACPI: Early table checksum verification disabled Oct 13 05:25:29.671828 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Oct 13 05:25:29.671843 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 13 05:25:29.671867 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.671879 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.671890 kernel: ACPI: FACS 0x000000009BBDD000 000040 Oct 13 05:25:29.671902 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.671917 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.671928 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.671943 kernel: ACPI: WAET 0x000000009BB75000 
000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.671955 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 13 05:25:29.671966 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Oct 13 05:25:29.671979 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Oct 13 05:25:29.671990 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Oct 13 05:25:29.672001 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Oct 13 05:25:29.672012 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Oct 13 05:25:29.672022 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Oct 13 05:25:29.672037 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Oct 13 05:25:29.672049 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Oct 13 05:25:29.672060 kernel: No NUMA configuration found Oct 13 05:25:29.672077 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Oct 13 05:25:29.672089 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Oct 13 05:25:29.672100 kernel: Zone ranges: Oct 13 05:25:29.672108 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 13 05:25:29.672121 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Oct 13 05:25:29.672129 kernel: Normal empty Oct 13 05:25:29.672137 kernel: Device empty Oct 13 05:25:29.672145 kernel: Movable zone start for each node Oct 13 05:25:29.672153 kernel: Early memory node ranges Oct 13 05:25:29.672161 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Oct 13 05:25:29.672169 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Oct 13 05:25:29.672177 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Oct 13 05:25:29.672187 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Oct 13 05:25:29.672199 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Oct 13 05:25:29.672207 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Oct 13 05:25:29.672215 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 13 05:25:29.672223 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Oct 13 05:25:29.672231 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 13 05:25:29.672239 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 13 05:25:29.672250 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Oct 13 05:25:29.672258 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Oct 13 05:25:29.672266 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 13 05:25:29.672274 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 13 05:25:29.672282 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 13 05:25:29.672290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 13 05:25:29.672298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 13 05:25:29.672312 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 13 05:25:29.672320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 13 05:25:29.672329 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 13 05:25:29.672339 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 13 05:25:29.672357 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 13 05:25:29.672368 kernel: TSC deadline timer available Oct 13 05:25:29.672379 kernel: CPU topo: Max. 
logical packages: 1 Oct 13 05:25:29.672390 kernel: CPU topo: Max. logical dies: 1 Oct 13 05:25:29.672406 kernel: CPU topo: Max. dies per package: 1 Oct 13 05:25:29.672426 kernel: CPU topo: Max. threads per core: 1 Oct 13 05:25:29.672440 kernel: CPU topo: Num. cores per package: 4 Oct 13 05:25:29.672451 kernel: CPU topo: Num. threads per package: 4 Oct 13 05:25:29.672462 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 13 05:25:29.672478 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 13 05:25:29.672489 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 13 05:25:29.672500 kernel: kvm-guest: setup PV sched yield Oct 13 05:25:29.672512 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Oct 13 05:25:29.672527 kernel: Booting paravirtualized kernel on KVM Oct 13 05:25:29.672538 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 13 05:25:29.672550 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 13 05:25:29.672561 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 13 05:25:29.672575 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 13 05:25:29.672587 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 13 05:25:29.672598 kernel: kvm-guest: PV spinlocks enabled Oct 13 05:25:29.672609 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 13 05:25:29.672623 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:25:29.672633 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 05:25:29.672641 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 13 05:25:29.672654 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 05:25:29.672662 kernel: Fallback order for Node 0: 0 Oct 13 05:25:29.672671 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Oct 13 05:25:29.672685 kernel: Policy zone: DMA32 Oct 13 05:25:29.672701 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 05:25:29.672713 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 13 05:25:29.672724 kernel: ftrace: allocating 40210 entries in 158 pages Oct 13 05:25:29.672740 kernel: ftrace: allocated 158 pages with 5 groups Oct 13 05:25:29.672751 kernel: Dynamic Preempt: voluntary Oct 13 05:25:29.672762 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 05:25:29.672805 kernel: rcu: RCU event tracing is enabled. Oct 13 05:25:29.672817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 13 05:25:29.672828 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 05:25:29.672839 kernel: Rude variant of Tasks RCU enabled. Oct 13 05:25:29.672867 kernel: Tracing variant of Tasks RCU enabled. Oct 13 05:25:29.672878 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 05:25:29.672888 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 13 05:25:29.672899 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Oct 13 05:25:29.672911 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 05:25:29.672926 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 05:25:29.672938 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 13 05:25:29.672952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 05:25:29.672964 kernel: Console: colour dummy device 80x25 Oct 13 05:25:29.672975 kernel: printk: legacy console [ttyS0] enabled Oct 13 05:25:29.672986 kernel: ACPI: Core revision 20240827 Oct 13 05:25:29.672997 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 13 05:25:29.673008 kernel: APIC: Switch to symmetric I/O mode setup Oct 13 05:25:29.673019 kernel: x2apic enabled Oct 13 05:25:29.673030 kernel: APIC: Switched APIC routing to: physical x2apic Oct 13 05:25:29.673044 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 13 05:25:29.673056 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 13 05:25:29.673067 kernel: kvm-guest: setup PV IPIs Oct 13 05:25:29.673078 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 13 05:25:29.673088 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns Oct 13 05:25:29.673099 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746) Oct 13 05:25:29.673110 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 13 05:25:29.673124 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 13 05:25:29.673135 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 13 05:25:29.673149 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 13 05:25:29.673161 kernel: Spectre V2 : Mitigation: Retpolines Oct 13 05:25:29.673172 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 13 05:25:29.673183 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 13 05:25:29.673194 kernel: active return thunk: retbleed_return_thunk Oct 13 05:25:29.673207 kernel: RETBleed: Mitigation: untrained return thunk Oct 13 05:25:29.673219 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 13 05:25:29.673230 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 13 05:25:29.673242 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 13 05:25:29.673254 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 13 05:25:29.673265 kernel: active return thunk: srso_return_thunk Oct 13 05:25:29.673280 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 13 05:25:29.673291 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 13 05:25:29.673302 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 13 05:25:29.673312 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 13 05:25:29.673324 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 13 05:25:29.673336 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Oct 13 05:25:29.673346 kernel: Freeing SMP alternatives memory: 32K Oct 13 05:25:29.673360 kernel: pid_max: default: 32768 minimum: 301 Oct 13 05:25:29.673372 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 05:25:29.673383 kernel: landlock: Up and running. Oct 13 05:25:29.673394 kernel: SELinux: Initializing. Oct 13 05:25:29.673406 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.673417 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.673428 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 13 05:25:29.673443 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 13 05:25:29.673454 kernel: ... version: 0 Oct 13 05:25:29.673468 kernel: ... bit width: 48 Oct 13 05:25:29.673480 kernel: ... generic registers: 6 Oct 13 05:25:29.673491 kernel: ... value mask: 0000ffffffffffff Oct 13 05:25:29.673503 kernel: ... max period: 00007fffffffffff Oct 13 05:25:29.673514 kernel: ... fixed-purpose events: 0 Oct 13 05:25:29.673525 kernel: ... event mask: 000000000000003f Oct 13 05:25:29.673540 kernel: signal: max sigframe size: 1776 Oct 13 05:25:29.673550 kernel: rcu: Hierarchical SRCU implementation. Oct 13 05:25:29.673562 kernel: rcu: Max phase no-delay instances is 400. Oct 13 05:25:29.673574 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 13 05:25:29.673586 kernel: smp: Bringing up secondary CPUs ... Oct 13 05:25:29.673597 kernel: smpboot: x86: Booting SMP configuration: Oct 13 05:25:29.673615 kernel: .... node #0, CPUs: #1 #2 #3 Oct 13 05:25:29.673630 kernel: smp: Brought up 1 node, 4 CPUs Oct 13 05:25:29.673641 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Oct 13 05:25:29.673653 kernel: Memory: 2439932K/2552216K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 106344K reserved, 0K cma-reserved) Oct 13 05:25:29.673664 kernel: devtmpfs: initialized Oct 13 05:25:29.673675 kernel: x86/mm: Memory block size: 128MB Oct 13 05:25:29.673687 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Oct 13 05:25:29.673697 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Oct 13 05:25:29.673712 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 05:25:29.673724 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.673735 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 05:25:29.673746 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 05:25:29.673757 kernel: audit: initializing netlink subsys (disabled) Oct 13 05:25:29.673784 kernel: audit: type=2000 audit(1760333127.247:1): state=initialized audit_enabled=0 res=1 Oct 13 05:25:29.673797 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 05:25:29.673813 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 13 05:25:29.673824 kernel: cpuidle: using governor menu Oct 13 05:25:29.673835 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 05:25:29.673847 kernel: dca service started, version 1.12.1 Oct 13 05:25:29.673881 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Oct 13 05:25:29.673895 kernel: PCI: Using configuration type 1 for base access Oct 13 05:25:29.673907 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Oct 13 05:25:29.673924 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 05:25:29.673935 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 05:25:29.673947 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 05:25:29.673958 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 05:25:29.673969 kernel: ACPI: Added _OSI(Module Device) Oct 13 05:25:29.673981 kernel: ACPI: Added _OSI(Processor Device) Oct 13 05:25:29.673992 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 05:25:29.674008 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 05:25:29.674018 kernel: ACPI: Interpreter enabled Oct 13 05:25:29.674029 kernel: ACPI: PM: (supports S0 S5) Oct 13 05:25:29.674040 kernel: ACPI: Using IOAPIC for interrupt routing Oct 13 05:25:29.674052 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 13 05:25:29.674062 kernel: PCI: Using E820 reservations for host bridge windows Oct 13 05:25:29.674073 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 13 05:25:29.674087 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 05:25:29.674479 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 13 05:25:29.674720 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 13 05:25:29.675014 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 13 05:25:29.675033 kernel: PCI host bridge to bus 0000:00 Oct 13 05:25:29.675294 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 13 05:25:29.675505 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 13 05:25:29.675702 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 13 05:25:29.675983 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Oct 13 05:25:29.676199 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Oct 13 05:25:29.676416 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Oct 13 05:25:29.676642 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 05:25:29.676943 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 13 05:25:29.677193 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 13 05:25:29.677420 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Oct 13 05:25:29.677656 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Oct 13 05:25:29.677889 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Oct 13 05:25:29.678102 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 13 05:25:29.678298 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 13 05:25:29.678491 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Oct 13 05:25:29.678678 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Oct 13 05:25:29.678926 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Oct 13 05:25:29.679205 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 13 05:25:29.679480 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Oct 13 05:25:29.679743 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Oct 13 05:25:29.680011 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Oct 13 05:25:29.680277 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 13 05:25:29.680518 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Oct 13 05:25:29.680745 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Oct 13 05:25:29.681058 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Oct 13 05:25:29.681394 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Oct 13 05:25:29.681681 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 13 05:25:29.681999 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 13 05:25:29.682265 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 13 05:25:29.682513 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Oct 13 05:25:29.682734 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Oct 13 05:25:29.683004 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 13 05:25:29.683216 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Oct 13 05:25:29.683232 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 13 05:25:29.683251 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 13 05:25:29.683268 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 13 05:25:29.683280 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 13 05:25:29.683292 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 13 05:25:29.683304 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 13 05:25:29.683317 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 13 05:25:29.683329 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 13 05:25:29.683345 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 13 05:25:29.683357 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 13 05:25:29.683369 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 13 05:25:29.683381 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 13 05:25:29.683394 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 13 05:25:29.683406 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 13 05:25:29.683418 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 13 05:25:29.683433 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 13 05:25:29.683445 kernel: iommu: Default domain type: Translated Oct 13 05:25:29.683456 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 05:25:29.683587 kernel: efivars: Registered efivars operations Oct 13 05:25:29.683600 kernel: PCI: Using ACPI for IRQ routing Oct 13 05:25:29.683612 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 13 05:25:29.683624 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Oct 13 05:25:29.683637 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Oct 13 05:25:29.683651 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Oct 13 05:25:29.683667 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Oct 13 05:25:29.683680 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Oct 13 05:25:29.683923 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 13 05:25:29.684135 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 13 05:25:29.684348 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Oct 13 05:25:29.684370 kernel: vgaarb: loaded Oct 13 05:25:29.684382 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 13 05:25:29.684395 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 13 05:25:29.684407 kernel: clocksource: Switched to clocksource kvm-clock Oct 13 05:25:29.684419 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 05:25:29.684432 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 05:25:29.684444 kernel: pnp: PnP ACPI init Oct 13 05:25:29.684675 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Oct 13 05:25:29.684699 kernel: pnp: PnP ACPI: found 6 devices Oct 13 05:25:29.684711 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 05:25:29.684724 kernel: NET: Registered PF_INET protocol family Oct 13 05:25:29.684736 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 13 05:25:29.684749 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 13 05:25:29.684761 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 05:25:29.684797 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 05:25:29.684809 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 13 05:25:29.684821 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 13 05:25:29.684831 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.684843 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.684866 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 05:25:29.684878 kernel: NET: Registered PF_XDP protocol family Oct 13 05:25:29.685091 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Oct 13 05:25:29.685305 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Oct 13 05:25:29.685506 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 13 05:25:29.685702 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 13 05:25:29.685962 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 13 05:25:29.686160 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Oct 13 05:25:29.686375 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Oct 13 05:25:29.686574 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Oct 13 05:25:29.686590 kernel: PCI: CLS 0 bytes, default 64 Oct 13 05:25:29.686603 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns Oct 13 05:25:29.686615 kernel: Initialise system trusted keyrings Oct 13 05:25:29.686627 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 13 05:25:29.686639 kernel: Key type asymmetric registered Oct 13 05:25:29.686656 kernel: Asymmetric key parser 'x509' registered Oct 13 05:25:29.686687 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 13 05:25:29.686702 kernel: io scheduler mq-deadline registered Oct 13 05:25:29.686714 kernel: io scheduler kyber registered Oct 13 05:25:29.686728 kernel: io scheduler bfq registered Oct 13 05:25:29.686740 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 13 05:25:29.686754 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 13 05:25:29.686785 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 13 05:25:29.686798 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 13 05:25:29.686810 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 05:25:29.686823 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 13 05:25:29.686836 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 13 05:25:29.686848 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 13 05:25:29.686870 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 13 05:25:29.687105 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 13 05:25:29.687124 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 13 05:25:29.687327 kernel: rtc_cmos 00:04: registered as rtc0 Oct 13 05:25:29.687525 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T05:25:27 UTC (1760333127) Oct 13 05:25:29.687740 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 13 05:25:29.687758 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 13 05:25:29.687790 kernel: efifb: probing for efifb Oct 13 05:25:29.687808 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 13 05:25:29.687819 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 13 05:25:29.687830 kernel: efifb: scrolling: redraw Oct 13 05:25:29.687841 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 13 05:25:29.687866 kernel: Console: switching to colour frame buffer device 160x50 Oct 13 05:25:29.687878 kernel: fb0: EFI VGA frame buffer device Oct 13 05:25:29.687893 kernel: pstore: Using crash dump compression: deflate Oct 13 05:25:29.687905 kernel: pstore: Registered efi_pstore as persistent store backend Oct 13 05:25:29.687915 kernel: NET: Registered PF_INET6 protocol family Oct 13 05:25:29.687928 kernel: Segment Routing with IPv6 Oct 13 05:25:29.687939 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 05:25:29.687956 kernel: NET: Registered PF_PACKET protocol family Oct 13 05:25:29.687967 kernel: Key type dns_resolver registered Oct 13 05:25:29.687977 kernel: IPI shorthand broadcast: enabled Oct 13 05:25:29.687988 kernel: sched_clock: Marking stable (1670002855, 351445315)->(2122163630, -100715460) Oct 13 05:25:29.687998 kernel: registered taskstats version 1 Oct 13 05:25:29.688009 kernel: Loading compiled-in X.509 certificates Oct 13 05:25:29.688021 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7' Oct 13 05:25:29.688035 kernel: Demotion targets for Node 0: null Oct 13 05:25:29.688045 kernel: Key type .fscrypt registered Oct 13 05:25:29.688056 kernel: Key type fscrypt-provisioning registered Oct 13 05:25:29.688066 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 13 05:25:29.688077 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:25:29.688087 kernel: ima: No architecture policies found Oct 13 05:25:29.688099 kernel: clk: Disabling unused clocks Oct 13 05:25:29.688113 kernel: Freeing unused kernel image (initmem) memory: 24532K Oct 13 05:25:29.688124 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:25:29.688136 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K Oct 13 05:25:29.688147 kernel: Run /init as init process Oct 13 05:25:29.688159 kernel: with arguments: Oct 13 05:25:29.688172 kernel: /init Oct 13 05:25:29.688184 kernel: with environment: Oct 13 05:25:29.688198 kernel: HOME=/ Oct 13 05:25:29.688210 kernel: TERM=linux Oct 13 05:25:29.688222 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:25:29.688237 kernel: SCSI subsystem initialized Oct 13 05:25:29.688254 kernel: libata version 3.00 loaded. Oct 13 05:25:29.688495 kernel: ahci 0000:00:1f.2: version 3.0 Oct 13 05:25:29.688535 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 13 05:25:29.688832 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 13 05:25:29.689116 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 13 05:25:29.689339 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 13 05:25:29.689617 kernel: scsi host0: ahci Oct 13 05:25:29.689907 kernel: scsi host1: ahci Oct 13 05:25:29.690166 kernel: scsi host2: ahci Oct 13 05:25:29.690414 kernel: scsi host3: ahci Oct 13 05:25:29.690666 kernel: scsi host4: ahci Oct 13 05:25:29.691019 kernel: scsi host5: ahci Oct 13 05:25:29.691042 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 13 05:25:29.691055 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 13 05:25:29.691067 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 13 05:25:29.691091 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 13 05:25:29.691104 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 13 05:25:29.691116 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 13 05:25:29.691128 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.691140 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 13 05:25:29.691153 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.691165 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.691180 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.691192 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.691204 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:25:29.691216 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 13 05:25:29.691227 kernel: ata3.00: applying bridge limits Oct 13 05:25:29.691239 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:25:29.691251 kernel: ata3.00: configured for UDMA/100 Oct 13 05:25:29.691552 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 13 05:25:29.691839 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 13 05:25:29.692140 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 13 05:25:29.692165 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Oct 13 05:25:29.692178 kernel: GPT:16515071 != 27000831 Oct 13 05:25:29.692189 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 05:25:29.692205 kernel: GPT:16515071 != 27000831 Oct 13 05:25:29.692216 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 05:25:29.692228 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:25:29.692240 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.692495 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 13 05:25:29.692513 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:25:29.692750 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 13 05:25:29.692810 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:25:29.692823 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:25:29.692842 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:25:29.692875 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 13 05:25:29.692886 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.692897 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.692908 kernel: raid6: avx2x4 gen() 21616 MB/s Oct 13 05:25:29.692925 kernel: raid6: avx2x2 gen() 26954 MB/s Oct 13 05:25:29.692936 kernel: raid6: avx2x1 gen() 18796 MB/s Oct 13 05:25:29.692949 kernel: raid6: using algorithm avx2x2 gen() 26954 MB/s Oct 13 05:25:29.692962 kernel: raid6: .... xor() 13278 MB/s, rmw enabled Oct 13 05:25:29.692974 kernel: raid6: using avx2x2 recovery algorithm Oct 13 05:25:29.692986 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.692997 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.693009 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.693024 kernel: xor: automatically using best checksumming function avx Oct 13 05:25:29.693037 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.693048 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:25:29.693061 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (194) Oct 13 05:25:29.693073 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 Oct 13 05:25:29.693085 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:25:29.693097 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:25:29.693112 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:25:29.693124 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.693135 kernel: loop: module loaded Oct 13 05:25:29.693147 kernel: loop0: detected capacity change from 0 to 100048 Oct 13 05:25:29.693159 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:25:29.693171 systemd[1]: Successfully made /usr/ read-only. 
Oct 13 05:25:29.693189 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:25:29.693205 systemd[1]: Detected virtualization kvm. Oct 13 05:25:29.693217 systemd[1]: Detected architecture x86-64. Oct 13 05:25:29.693229 systemd[1]: Running in initrd. Oct 13 05:25:29.693250 systemd[1]: No hostname configured, using default hostname. Oct 13 05:25:29.693264 systemd[1]: Hostname set to . Oct 13 05:25:29.693276 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:25:29.693299 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:25:29.693317 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:25:29.693329 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:25:29.693346 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:25:29.693359 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:25:29.693371 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:25:29.693391 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:25:29.693403 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:25:29.693415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:25:29.693426 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:25:29.693438 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:25:29.693449 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:25:29.693468 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:25:29.693479 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:25:29.693490 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:25:29.693501 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:25:29.693513 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:25:29.693524 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:25:29.693535 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:25:29.693549 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:25:29.693560 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:25:29.693571 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:25:29.693584 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:25:29.693596 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:25:29.693608 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:25:29.693622 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Oct 13 05:25:29.693633 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:25:29.693646 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:25:29.693658 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:25:29.693670 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:25:29.693682 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:25:29.693694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:29.693709 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:25:29.693721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:25:29.693734 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:25:29.693746 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:25:29.693895 systemd-journald[329]: Collecting audit messages is disabled. Oct 13 05:25:29.693925 systemd-journald[329]: Journal started Oct 13 05:25:29.693952 systemd-journald[329]: Runtime Journal (/run/log/journal/cc17e8ce482c420d88d5ed876add006e) is 6M, max 48.2M, 42.2M free. Oct 13 05:25:29.696836 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:25:29.700055 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:25:29.704848 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:25:29.712462 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:25:29.720929 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:25:29.725305 systemd-modules-load[331]: Inserted module 'br_netfilter' Oct 13 05:25:29.726338 kernel: Bridge firewalling registered Oct 13 05:25:29.726676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:25:29.731135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:25:29.736427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:29.740783 systemd-tmpfiles[348]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:25:29.750553 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:25:29.756230 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:25:29.763831 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:25:29.779242 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:25:29.782705 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:25:29.805418 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:25:29.812086 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 05:25:29.839559 systemd-resolved[362]: Positive Trust Anchors: Oct 13 05:25:29.839580 systemd-resolved[362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:25:29.839584 systemd-resolved[362]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:25:29.839615 systemd-resolved[362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:25:29.865288 dracut-cmdline[374]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:25:29.872811 systemd-resolved[362]: Defaulting to hostname 'linux'. Oct 13 05:25:29.875909 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:25:29.876532 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:25:30.035856 kernel: Loading iSCSI transport class v2.0-870. Oct 13 05:25:30.059267 kernel: iscsi: registered transport (tcp) Oct 13 05:25:30.113368 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:25:30.113464 kernel: QLogic iSCSI HBA Driver Oct 13 05:25:30.153134 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:25:30.193981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:25:30.196110 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:25:30.293098 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:25:30.298562 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:25:30.301346 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:25:30.357297 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:25:30.360486 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:25:30.393431 systemd-udevd[615]: Using default interface naming scheme 'v257'. Oct 13 05:25:30.409388 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:25:30.412799 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:25:30.445763 dracut-pre-trigger[676]: rd.md=0: removing MD RAID activation Oct 13 05:25:30.455276 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:25:30.462206 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:25:30.479374 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:25:30.485219 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:25:30.529569 systemd-networkd[731]: lo: Link UP Oct 13 05:25:30.529587 systemd-networkd[731]: lo: Gained carrier Oct 13 05:25:30.530571 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 13 05:25:30.533260 systemd[1]: Reached target network.target - Network. Oct 13 05:25:30.603171 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:25:30.609632 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:25:30.730676 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 05:25:30.745576 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 05:25:30.793520 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:25:30.807861 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 13 05:25:30.814814 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:25:30.817746 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 05:25:30.829286 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:25:30.834386 kernel: AES CTR mode by8 optimization enabled Oct 13 05:25:30.839481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:25:30.842101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:30.848131 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:30.850746 systemd-networkd[731]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:25:30.850759 systemd-networkd[731]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:25:30.851387 systemd-networkd[731]: eth0: Link UP Oct 13 05:25:30.857351 systemd-networkd[731]: eth0: Gained carrier Oct 13 05:25:30.873914 disk-uuid[830]: Primary Header is updated. Oct 13 05:25:30.873914 disk-uuid[830]: Secondary Entries is updated. Oct 13 05:25:30.873914 disk-uuid[830]: Secondary Header is updated. Oct 13 05:25:30.857376 systemd-networkd[731]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:25:30.865289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:30.883090 systemd-networkd[731]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:25:30.921714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:30.976690 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:25:30.979590 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:25:30.982612 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:25:30.984644 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:25:30.990943 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 05:25:31.019793 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:25:31.181247 systemd-resolved[362]: Detected conflict on linux IN A 10.0.0.16 Oct 13 05:25:31.181267 systemd-resolved[362]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Oct 13 05:25:31.935582 disk-uuid[838]: Warning: The kernel is still using the old partition table. 
Oct 13 05:25:31.935582 disk-uuid[838]: The new table will be used at the next reboot or after you Oct 13 05:25:31.935582 disk-uuid[838]: run partprobe(8) or kpartx(8) Oct 13 05:25:31.935582 disk-uuid[838]: The operation has completed successfully. Oct 13 05:25:31.951865 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:25:31.952166 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:25:31.956546 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:25:32.001845 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (875) Oct 13 05:25:32.001928 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:32.005269 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:25:32.009563 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:25:32.009607 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:25:32.018787 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:32.019633 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:25:32.024090 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 05:25:32.313157 ignition[894]: Ignition 2.22.0 Oct 13 05:25:32.313174 ignition[894]: Stage: fetch-offline Oct 13 05:25:32.313222 ignition[894]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:32.313233 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:32.313332 ignition[894]: parsed url from cmdline: "" Oct 13 05:25:32.313335 ignition[894]: no config URL provided Oct 13 05:25:32.313340 ignition[894]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:25:32.313351 ignition[894]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:25:32.313402 ignition[894]: op(1): [started] loading QEMU firmware config module Oct 13 05:25:32.313408 ignition[894]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 05:25:32.334994 ignition[894]: op(1): [finished] loading QEMU firmware config module Oct 13 05:25:32.419159 ignition[894]: parsing config with SHA512: df90e5f344dc5db2c84cda0d90449984b3a1f094a69b03485580aa49ecb359822216c78aa831a0eed0294337fe9a6301938fc660c1dd898f719ebe9ab75fe865 Oct 13 05:25:32.429616 unknown[894]: fetched base config from "system" Oct 13 05:25:32.430826 unknown[894]: fetched user config from "qemu" Oct 13 05:25:32.431266 ignition[894]: fetch-offline: fetch-offline passed Oct 13 05:25:32.431343 ignition[894]: Ignition finished successfully Oct 13 05:25:32.435330 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:25:32.439728 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 05:25:32.443712 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 13 05:25:32.519010 ignition[905]: Ignition 2.22.0 Oct 13 05:25:32.519024 ignition[905]: Stage: kargs Oct 13 05:25:32.519203 ignition[905]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:32.519219 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:32.519973 ignition[905]: kargs: kargs passed Oct 13 05:25:32.520026 ignition[905]: Ignition finished successfully Oct 13 05:25:32.530224 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Oct 13 05:25:32.534856 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:25:32.572348 ignition[913]: Ignition 2.22.0 Oct 13 05:25:32.572362 ignition[913]: Stage: disks Oct 13 05:25:32.572504 ignition[913]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:32.572514 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:32.577850 ignition[913]: disks: disks passed Oct 13 05:25:32.577903 ignition[913]: Ignition finished successfully Oct 13 05:25:32.582520 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:25:32.586886 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:25:32.587622 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:25:32.588443 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:25:32.589584 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:25:32.590403 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:25:32.602745 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:25:32.708112 systemd-fsck[923]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 13 05:25:32.716499 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:25:32.719003 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:25:32.763985 systemd-networkd[731]: eth0: Gained IPv6LL Oct 13 05:25:32.843804 kernel: EXT4-fs (vda9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none. Oct 13 05:25:32.844254 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:25:32.846356 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:25:32.849993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:25:32.852761 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:25:32.854435 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 05:25:32.854471 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:25:32.854495 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:25:32.875963 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (932) Oct 13 05:25:32.876002 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:32.876017 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:25:32.864889 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:25:32.880458 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:25:32.880479 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:25:32.868465 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 05:25:32.881843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
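By the end of this stretch the initrd has the real root assembled under /sysroot: the ROOT ext4 filesystem from vda9, /usr mounted below it, and the OEM btrfs partition from vda6 at /sysroot/oem with async discard and the free-space tree enabled. A small Python sketch that lists what ended up mounted below a given prefix by reading /proc/self/mounts (inside the initrd the interesting prefix is /sysroot; on the booted system it is simply /):

    def mounts_under(prefix="/sysroot"):
        """Yield (device, mountpoint, fstype, options) for mounts at or below prefix."""
        with open("/proc/self/mounts") as f:
            for line in f:
                dev, mnt, fstype, opts, *_ = line.split()
                if mnt == prefix or mnt.startswith(prefix.rstrip("/") + "/"):
                    yield dev, mnt, fstype, opts

    if __name__ == "__main__":
        for dev, mnt, fstype, opts in mounts_under("/"):
            print(f"{dev:20} {mnt:30} {fstype:8} {opts}")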
Oct 13 05:25:32.929074 initrd-setup-root[956]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:25:32.936331 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:25:32.940614 initrd-setup-root[970]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:25:32.946793 initrd-setup-root[977]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:25:33.051450 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:25:33.055186 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:25:33.058005 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:25:33.081905 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:25:33.084530 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:33.098954 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 05:25:33.143177 ignition[1046]: INFO : Ignition 2.22.0 Oct 13 05:25:33.143177 ignition[1046]: INFO : Stage: mount Oct 13 05:25:33.146070 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:33.146070 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:33.146070 ignition[1046]: INFO : mount: mount passed Oct 13 05:25:33.146070 ignition[1046]: INFO : Ignition finished successfully Oct 13 05:25:33.147422 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:25:33.150936 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:25:33.179404 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:25:33.205366 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1058) Oct 13 05:25:33.208602 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:33.208628 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:25:33.212470 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:25:33.212494 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:25:33.214814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
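The four "cut: ... No such file or directory" lines are expected on a first boot: the freshly created root filesystem has no /etc/passwd, group, shadow or gshadow yet, so the root-setup script finds nothing to carry over. The log does not show the exact cut(1) invocation; a plausible Python equivalent of pulling the first (name) field out of such colon-separated files, tolerating their absence, is:

    from pathlib import Path

    def first_fields(path):
        """First colon-separated field of each line (user/group names); [] if unreadable."""
        try:
            text = Path(path).read_text()
        except OSError:              # missing (as in the log) or unreadable without root
            return []
        return [line.split(":", 1)[0] for line in text.splitlines() if line]

    if __name__ == "__main__":
        for f in ("/etc/passwd", "/etc/group", "/etc/shadow", "/etc/gshadow"):
            print(f, "->", first_fields(f))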
Oct 13 05:25:33.264009 ignition[1075]: INFO : Ignition 2.22.0 Oct 13 05:25:33.264009 ignition[1075]: INFO : Stage: files Oct 13 05:25:33.267222 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:33.267222 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:33.267222 ignition[1075]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:25:33.273595 ignition[1075]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:25:33.273595 ignition[1075]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:25:33.273595 ignition[1075]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:25:33.273595 ignition[1075]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:25:33.273595 ignition[1075]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:25:33.273595 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:25:33.271787 unknown[1075]: wrote ssh authorized keys file for user: core Oct 13 05:25:33.291078 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 05:25:33.337855 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:25:33.408446 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:25:33.408446 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:25:33.414936 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 05:25:33.417988 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:25:33.420962 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:25:33.420962 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:25:33.420962 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:25:33.420962 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:25:33.420962 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:25:33.435980 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:25:33.435980 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:25:33.435980 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:25:33.435980 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:25:33.435980 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:25:33.435980 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 13 05:25:33.992156 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 05:25:35.897471 ignition[1075]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:25:35.897471 ignition[1075]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 05:25:35.905619 ignition[1075]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:25:36.034518 ignition[1075]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:25:36.034518 ignition[1075]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 05:25:36.034518 ignition[1075]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 13 05:25:36.034518 ignition[1075]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:25:36.062851 ignition[1075]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:25:36.062851 ignition[1075]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 13 05:25:36.062851 ignition[1075]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 05:25:36.099141 ignition[1075]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:25:36.109669 ignition[1075]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:25:36.112395 ignition[1075]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 05:25:36.112395 ignition[1075]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:25:36.112395 ignition[1075]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:25:36.112395 ignition[1075]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:25:36.112395 ignition[1075]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:25:36.112395 ignition[1075]: INFO : files: files passed Oct 13 05:25:36.112395 ignition[1075]: INFO : Ignition finished successfully Oct 13 05:25:36.119216 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:25:36.126562 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:25:36.130107 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 05:25:36.155845 systemd[1]: ignition-quench.service: Deactivated successfully. 
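"Setting preset to disabled" for coreos-metadata.service and "enabled" for prepare-helm.service comes down to removing or creating the unit's enablement symlink in a target's .wants/ directory under the target root's /etc/systemd/system. A sketch of that mechanic, assuming the units are wanted by multi-user.target (the real target comes from each unit's [Install] section, which the log does not show) and pointed at the live /etc rather than /sysroot:

    from pathlib import Path

    SYSTEMD_DIR = Path("/etc/systemd/system")   # Ignition operates on /sysroot/etc/... instead

    def set_enabled(unit, enabled, wanted_by="multi-user.target"):
        """Create or remove the .wants/ enablement symlink for a unit."""
        wants_dir = SYSTEMD_DIR / f"{wanted_by}.wants"
        link = wants_dir / unit
        if enabled:
            wants_dir.mkdir(parents=True, exist_ok=True)
            if not link.is_symlink():
                link.symlink_to(SYSTEMD_DIR / unit)   # roughly what `systemctl enable` creates
        elif link.is_symlink():
            link.unlink()                             # "removing enablement symlink(s)"

    if __name__ == "__main__":
        set_enabled("prepare-helm.service", True)
        set_enabled("coreos-metadata.service", False)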
Oct 13 05:25:36.156002 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 05:25:36.163964 initrd-setup-root-after-ignition[1104]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 05:25:36.169612 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:25:36.172657 initrd-setup-root-after-ignition[1106]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:25:36.175539 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:25:36.174060 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:25:36.176460 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:25:36.178258 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:25:36.274159 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:25:36.274325 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:25:36.279073 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:25:36.279498 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:25:36.280797 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:25:36.282006 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:25:36.314678 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:25:36.321658 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:25:36.350358 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:25:36.350640 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:25:36.351550 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:25:36.352149 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 05:25:36.352659 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:25:36.352829 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:25:36.365912 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:25:36.367289 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:25:36.371577 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:25:36.374386 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:25:36.377819 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:25:36.381420 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:25:36.384913 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 05:25:36.388432 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:25:36.391601 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:25:36.395459 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:25:36.398634 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:25:36.435144 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 13 05:25:36.435279 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:25:36.439822 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:25:36.443272 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:25:36.444340 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 05:25:36.444565 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:25:36.452331 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:25:36.452546 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:25:36.458717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:25:36.458895 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:25:36.459888 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:25:36.464273 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 05:25:36.467852 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:25:36.481801 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:25:36.482656 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:25:36.490261 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:25:36.490453 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:25:36.491392 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:25:36.491570 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:25:36.494363 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:25:36.494486 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:25:36.497708 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:25:36.497958 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:25:36.505061 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 05:25:36.506177 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:25:36.506381 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:25:36.512036 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 05:25:36.514543 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:25:36.514794 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:25:36.515614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:25:36.515855 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:25:36.522409 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:25:36.522608 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:25:36.534807 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:25:36.542071 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Oct 13 05:25:36.568048 ignition[1131]: INFO : Ignition 2.22.0 Oct 13 05:25:36.568048 ignition[1131]: INFO : Stage: umount Oct 13 05:25:36.581593 ignition[1131]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:36.581593 ignition[1131]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:36.581593 ignition[1131]: INFO : umount: umount passed Oct 13 05:25:36.581593 ignition[1131]: INFO : Ignition finished successfully Oct 13 05:25:36.572315 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 05:25:36.575345 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:25:36.575497 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:25:36.581171 systemd[1]: Stopped target network.target - Network. Oct 13 05:25:36.582579 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:25:36.582639 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:25:36.585515 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:25:36.585574 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:25:36.588837 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 05:25:36.588892 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:25:36.591552 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 05:25:36.591608 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:25:36.595235 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:25:36.598260 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:25:36.616162 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:25:36.616401 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:25:36.624522 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:25:36.624647 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:25:36.629612 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:25:36.630606 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:25:36.630654 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:25:36.635008 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:25:36.637558 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 05:25:36.637620 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:25:36.641492 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:25:36.641547 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:25:36.645322 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 05:25:36.645376 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 05:25:36.648873 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:25:36.672558 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:25:36.672696 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 05:25:36.673907 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 05:25:36.674018 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:25:36.683286 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Oct 13 05:25:36.683495 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:25:36.687560 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:25:36.687611 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:25:36.732538 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:25:36.732580 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:25:36.733145 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:25:36.733203 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:25:36.740700 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 05:25:36.740795 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:25:36.746302 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:25:36.746369 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:25:36.754574 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:25:36.756122 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:25:36.756198 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:25:36.759454 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:25:36.759518 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:25:36.767570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:25:36.767641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:36.795059 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:25:36.795973 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:25:36.804601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:25:36.804751 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:25:36.808925 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 05:25:36.812696 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:25:36.850606 systemd[1]: Switching root. Oct 13 05:25:36.893472 systemd-journald[329]: Journal stopped Oct 13 05:25:39.084937 systemd-journald[329]: Received SIGTERM from PID 1 (systemd). Oct 13 05:25:39.085028 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:25:39.085050 kernel: SELinux: policy capability open_perms=1 Oct 13 05:25:39.085066 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:25:39.085092 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:25:39.085109 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:25:39.085126 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:25:39.085142 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:25:39.085159 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:25:39.085176 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:25:39.085197 kernel: audit: type=1403 audit(1760333137.650:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:25:39.085224 systemd[1]: Successfully loaded SELinux policy in 76.130ms. Oct 13 05:25:39.085253 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.166ms. 
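Right after the switch to the real root the kernel loads the SELinux policy and prints which policy capabilities it enables. The same flags stay visible at runtime through selinuxfs; a sketch, assuming selinuxfs at its usual /sys/fs/selinux mount point (the reads fail if SELinux is not enabled):

    from pathlib import Path

    SELINUXFS = Path("/sys/fs/selinux")           # usual selinuxfs mount point

    def policy_capabilities():
        """Return {name: enabled} for the capabilities of the loaded SELinux policy."""
        caps = SELINUXFS / "policy_capabilities"
        if not caps.is_dir():
            return {}
        return {p.name: p.read_text().strip() == "1" for p in sorted(caps.iterdir())}

    if __name__ == "__main__":
        print("enforcing:", (SELINUXFS / "enforce").read_text().strip() == "1")
        for name, on in policy_capabilities().items():
            print(f"SELinux: policy capability {name}={int(on)}")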
Oct 13 05:25:39.085269 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:25:39.085285 systemd[1]: Detected virtualization kvm. Oct 13 05:25:39.085300 systemd[1]: Detected architecture x86-64. Oct 13 05:25:39.085314 systemd[1]: Detected first boot. Oct 13 05:25:39.085337 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:25:39.085352 zram_generator::config[1177]: No configuration found. Oct 13 05:25:39.085369 kernel: Guest personality initialized and is inactive Oct 13 05:25:39.085384 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 05:25:39.085409 kernel: Initialized host personality Oct 13 05:25:39.085423 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:25:39.085439 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:25:39.085465 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:25:39.085484 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 05:25:39.085502 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:25:39.085521 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:25:39.085540 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:25:39.085557 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:25:39.085575 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 05:25:39.085599 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:25:39.085630 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:25:39.085648 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:25:39.085665 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 05:25:39.085683 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:25:39.085701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:25:39.085719 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:25:39.085746 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 05:25:39.085765 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:25:39.085817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:25:39.085835 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 05:25:39.085852 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:25:39.085877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:25:39.085891 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:25:39.085904 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:25:39.085922 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
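"Initializing machine ID from SMBIOS/DMI UUID" means this first boot does not roll a random machine ID but derives it from the VM's DMI product UUID. A sketch of that derivation, assuming it amounts to stripping the dashes from /sys/class/dmi/id/product_uuid and lowercasing the result (reading that file requires root):

    from pathlib import Path

    DMI_UUID = Path("/sys/class/dmi/id/product_uuid")   # root-readable on most systems

    def machine_id_from_dmi():
        """DMI product UUID reformatted like the 32-hex-char /etc/machine-id value."""
        return DMI_UUID.read_text().strip().replace("-", "").lower()

    if __name__ == "__main__":
        print("derived from DMI :", machine_id_from_dmi())
        print("actual machine-id:", Path("/etc/machine-id").read_text().strip())

The runtime journal directory that appears a little later in this log, /run/log/journal/cc17e8ce482c420d88d5ed876add006e, is named after exactly this machine ID.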
Oct 13 05:25:39.085935 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:25:39.085948 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:25:39.085962 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:25:39.085977 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:25:39.086002 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:25:39.086021 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:25:39.086039 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:25:39.086059 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:25:39.086225 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:25:39.086243 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:25:39.086262 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:25:39.086286 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 05:25:39.086304 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:25:39.086322 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:25:39.086340 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:25:39.086357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:39.086374 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 05:25:39.086392 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 05:25:39.086412 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:25:39.086430 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:25:39.086448 systemd[1]: Reached target machines.target - Containers. Oct 13 05:25:39.086465 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 05:25:39.086483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:25:39.086500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:25:39.086516 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:25:39.086543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:25:39.086559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:25:39.086576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:25:39.086594 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:25:39.086634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:25:39.086653 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:25:39.086681 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:25:39.086698 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
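The modprobe@… instances above each load one module the early services rely on (configfs, dm_mod, drm, efi_pstore, fuse, loop). Whether a module actually ended up loaded can be checked against /proc/modules; note that drivers built into the kernel never appear there, so absence is not proof of failure. A small sketch:

    def loaded_modules():
        """Return {name: size_in_bytes} for modules listed in /proc/modules."""
        mods = {}
        with open("/proc/modules") as f:
            for line in f:
                name, size, *_ = line.split()
                mods[name] = int(size)
        return mods

    if __name__ == "__main__":
        wanted = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]
        mods = loaded_modules()
        for name in wanted:
            # Built-in drivers never show up here, so "not listed" can also mean "built in".
            print(name, "loaded" if name in mods else "not listed (absent or built in)")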
Oct 13 05:25:39.086715 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:25:39.086732 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 05:25:39.086750 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:25:39.086787 kernel: fuse: init (API version 7.41) Oct 13 05:25:39.086805 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:25:39.086831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:25:39.086848 kernel: ACPI: bus type drm_connector registered Oct 13 05:25:39.086865 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:25:39.086883 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:25:39.086900 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 05:25:39.086918 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:25:39.086943 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:39.086960 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 05:25:39.087004 systemd-journald[1255]: Collecting audit messages is disabled. Oct 13 05:25:39.087035 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 05:25:39.087056 systemd-journald[1255]: Journal started Oct 13 05:25:39.087086 systemd-journald[1255]: Runtime Journal (/run/log/journal/cc17e8ce482c420d88d5ed876add006e) is 6M, max 48.2M, 42.2M free. Oct 13 05:25:38.584852 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:25:38.608782 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 05:25:38.609508 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 05:25:39.091850 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:25:39.093592 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 05:25:39.095672 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 05:25:39.097894 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:25:39.100007 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:25:39.102310 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:25:39.104859 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:25:39.107505 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:25:39.107749 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:25:39.110121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:25:39.110353 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:25:39.112811 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:25:39.113165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:25:39.115578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:25:39.116003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Oct 13 05:25:39.118592 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:25:39.118887 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:25:39.121232 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:25:39.121522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:25:39.124031 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:25:39.126601 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:25:39.130327 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:25:39.133204 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 05:25:39.150486 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:25:39.153633 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 05:25:39.157350 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:25:39.160235 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:25:39.162028 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:25:39.162140 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:25:39.164978 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:25:39.167369 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:25:39.174713 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:25:39.177756 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:25:39.178596 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:25:39.180536 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:25:39.183889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:25:39.196970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:25:39.200683 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:25:39.204728 systemd-journald[1255]: Time spent on flushing to /var/log/journal/cc17e8ce482c420d88d5ed876add006e is 78.050ms for 1031 entries. Oct 13 05:25:39.204728 systemd-journald[1255]: System Journal (/var/log/journal/cc17e8ce482c420d88d5ed876add006e) is 8M, max 163.5M, 155.5M free. Oct 13 05:25:39.294937 systemd-journald[1255]: Received client request to flush runtime journal. Oct 13 05:25:39.294974 kernel: loop1: detected capacity change from 0 to 128048 Oct 13 05:25:39.206643 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:25:39.213519 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:25:39.217200 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:25:39.219796 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Oct 13 05:25:39.223546 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:25:39.230450 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:25:39.235218 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:25:39.292168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:25:39.297303 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:25:39.321497 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:25:39.328125 kernel: loop2: detected capacity change from 0 to 219144 Oct 13 05:25:39.408158 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:25:39.413933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:25:39.417018 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:25:39.424861 kernel: loop3: detected capacity change from 0 to 110984 Oct 13 05:25:39.434107 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:25:39.455928 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Oct 13 05:25:39.455951 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Oct 13 05:25:39.463482 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:25:39.467909 kernel: loop4: detected capacity change from 0 to 128048 Oct 13 05:25:39.479804 kernel: loop5: detected capacity change from 0 to 219144 Oct 13 05:25:39.491846 kernel: loop6: detected capacity change from 0 to 110984 Oct 13 05:25:39.501479 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:25:39.506101 (sd-merge)[1320]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 13 05:25:39.513475 (sd-merge)[1320]: Merged extensions into '/usr'. Oct 13 05:25:39.519443 systemd[1]: Reload requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:25:39.519460 systemd[1]: Reloading... Oct 13 05:25:39.605847 zram_generator::config[1357]: No configuration found. Oct 13 05:25:39.628338 systemd-resolved[1314]: Positive Trust Anchors: Oct 13 05:25:39.628358 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:25:39.628364 systemd-resolved[1314]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:25:39.628406 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:25:39.634845 systemd-resolved[1314]: Defaulting to hostname 'linux'. Oct 13 05:25:39.822205 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:25:39.822644 systemd[1]: Reloading finished in 302 ms. Oct 13 05:25:39.854882 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
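systemd-sysext (the sd-merge lines) found three extension images, containerd-flatcar.raw, docker-flatcar.raw and kubernetes.raw, the last one being the symlink Ignition wrote under /etc/extensions earlier in this log, and merged them over /usr. A sketch of the discovery half, assuming the standard search directories for system extensions (see systemd-sysext(8) for the authoritative list):

    from pathlib import Path

    # Assumed standard sysext search paths; earlier directories win on name clashes.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discovered_extensions():
        """Return {extension_name: resolved_path} for images sysext would consider."""
        found = {}
        for d in map(Path, SEARCH_DIRS):
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                name = entry.name.removesuffix(".raw")
                found.setdefault(name, entry.resolve())
        return found

    if __name__ == "__main__":
        for name, path in sorted(discovered_extensions().items()):
            print(f"{name}: {path}")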
Oct 13 05:25:39.857410 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:25:39.862950 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:25:39.893533 systemd[1]: Starting ensure-sysext.service... Oct 13 05:25:39.896687 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:25:39.919315 systemd[1]: Reload requested from client PID 1390 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:25:39.919467 systemd[1]: Reloading... Oct 13 05:25:39.930458 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:25:39.930496 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:25:39.930930 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:25:39.931228 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:25:39.932575 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:25:39.932882 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Oct 13 05:25:39.932958 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Oct 13 05:25:39.938926 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:25:39.939089 systemd-tmpfiles[1391]: Skipping /boot Oct 13 05:25:39.953101 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:25:39.953982 systemd-tmpfiles[1391]: Skipping /boot Oct 13 05:25:39.971853 zram_generator::config[1418]: No configuration found. Oct 13 05:25:40.325107 systemd[1]: Reloading finished in 405 ms. Oct 13 05:25:40.385399 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:40.385572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:25:40.387163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:25:40.390197 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:25:40.393537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:25:40.395438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:25:40.395631 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:25:40.395793 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:40.399221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:40.399391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:25:40.399556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
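The systemd-tmpfiles warnings are harmless: more than one tmpfiles.d fragment declares the same path (/var/lib/nfs/sm, /root, /var/log/journal and so on), and tmpfiles keeps the first declaration and ignores the rest, exactly as the messages say. A sketch that finds such repeats by scanning the fragments itself; it only compares the raw path column and ignores tmpfiles' real override rules between /etc, /run and /usr:

    from collections import defaultdict
    from pathlib import Path

    TMPFILES_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

    def duplicate_paths():
        """Map each path declared more than once to the fragment:line entries declaring it."""
        seen = defaultdict(list)
        for d in map(Path, TMPFILES_DIRS):
            if not d.is_dir():
                continue
            for frag in sorted(d.glob("*.conf")):
                for lineno, raw in enumerate(frag.read_text().splitlines(), 1):
                    line = raw.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) >= 2:
                        seen[fields[1]].append(f"{frag}:{lineno}")
        return {path: refs for path, refs in seen.items() if len(refs) > 1}

    if __name__ == "__main__":
        for path, refs in sorted(duplicate_paths().items()):
            print(f'Duplicate line for path "{path}": {", ".join(refs)}')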
Oct 13 05:25:40.399699 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:25:40.399821 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:40.402904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:40.403132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:25:40.432062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:25:40.469245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:25:40.469445 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:25:40.469626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:40.471363 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:25:40.471653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:25:40.474109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:25:40.474363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:25:40.476972 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:25:40.477191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:25:40.479470 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:25:40.479708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:25:40.485607 systemd[1]: Finished ensure-sysext.service. Oct 13 05:25:40.492211 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:25:40.492285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:25:40.494886 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:25:40.729151 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:25:40.731386 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:25:40.819044 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:25:40.824054 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:25:40.826562 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:25:40.846211 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 05:25:40.849549 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:25:40.852509 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Oct 13 05:25:40.937006 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:25:40.962356 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:25:41.101912 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:25:41.108262 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:25:41.216955 augenrules[1499]: No rules Oct 13 05:25:41.218695 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:25:41.219012 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:25:41.230345 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:25:41.235033 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:25:41.275052 systemd-udevd[1506]: Using default interface naming scheme 'v257'. Oct 13 05:25:41.300346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:25:41.306914 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:25:41.383470 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:25:41.393587 systemd-networkd[1513]: lo: Link UP Oct 13 05:25:41.393597 systemd-networkd[1513]: lo: Gained carrier Oct 13 05:25:41.397853 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:25:41.397863 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:25:41.400123 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:25:41.400288 systemd-networkd[1513]: eth0: Link UP Oct 13 05:25:41.400810 systemd-networkd[1513]: eth0: Gained carrier Oct 13 05:25:41.400888 systemd-networkd[1513]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:25:41.405963 systemd[1]: Reached target network.target - Network. Oct 13 05:25:41.410707 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:25:41.424602 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:25:41.453240 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:25:41.461694 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:25:41.515166 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:25:41.515879 systemd-networkd[1513]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:25:41.516837 systemd-timesyncd[1467]: Network configuration changed, trying to establish connection. Oct 13 05:25:43.101114 systemd-resolved[1314]: Clock change detected. Flushing caches. Oct 13 05:25:43.101412 systemd-timesyncd[1467]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 05:25:43.101532 systemd-timesyncd[1467]: Initial clock synchronization to Mon 2025-10-13 05:25:43.101042 UTC. 
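The jump in the timestamps here, from roughly :41.5 to :43.1, is systemd-timesyncd stepping the clock after contacting the DHCP-provided NTP server at 10.0.0.1:123; that step is also why systemd-resolved reports a clock change and flushes its caches. A minimal SNTP-style query in the spirit of what timesyncd does (one client-mode NTPv4 packet, then read the server's transmit timestamp); the server address is taken from the log and is only reachable on that network:

    import socket
    import struct
    import time

    NTP_SERVER = ("10.0.0.1", 123)    # server from the log above; adjust for other networks
    NTP_UNIX_DELTA = 2208988800       # seconds between the NTP epoch (1900) and Unix epoch (1970)

    def sntp_time(server=NTP_SERVER, timeout=3.0):
        """Send one client-mode NTP request and return the server's transmit time (Unix secs)."""
        packet = bytearray(48)
        packet[0] = (0 << 6) | (4 << 3) | 3             # LI=0, version 4, mode 3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(bytes(packet), server)
            data, _ = s.recvfrom(48)
        secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp field
        return secs - NTP_UNIX_DELTA + frac / 2**32

    if __name__ == "__main__":
        remote = sntp_time()
        print(f"server time {remote:.6f}, local offset {remote - time.time():+.3f}s")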
Oct 13 05:25:43.104081 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:25:43.108957 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 13 05:25:43.122804 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 13 05:25:43.129503 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 13 05:25:43.129807 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 13 05:25:43.137386 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:25:43.168752 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:25:43.217026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:43.236748 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:25:43.237270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:43.244213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:43.374842 kernel: kvm_amd: TSC scaling supported Oct 13 05:25:43.374966 kernel: kvm_amd: Nested Virtualization enabled Oct 13 05:25:43.374989 kernel: kvm_amd: Nested Paging enabled Oct 13 05:25:43.376997 kernel: kvm_amd: LBR virtualization supported Oct 13 05:25:43.377048 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 13 05:25:43.378745 kernel: kvm_amd: Virtual GIF supported Oct 13 05:25:43.380128 ldconfig[1474]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:25:43.393462 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:25:43.398085 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:25:43.425962 kernel: EDAC MC: Ver: 3.0.0 Oct 13 05:25:43.429978 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:25:43.448281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:43.452604 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:25:43.454774 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:25:43.457043 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:25:43.459379 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:25:43.461892 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:25:43.464085 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:25:43.466241 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:25:43.468524 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:25:43.468572 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:25:43.470323 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:25:43.474233 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:25:43.478958 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:25:43.484074 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
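The ldconfig complaint about /usr/lib/ld.so.conf is cosmetic: while rebuilding the dynamic-linker cache it skips anything whose first bytes are not the ELF magic and says so, and the service still finishes successfully right after. That check is just four bytes; a sketch (the second path below is only an illustrative shared library and may differ per system):

    ELF_MAGIC = b"\x7fELF"    # first four bytes of every ELF object

    def is_elf(path):
        """True if the file starts with the ELF magic bytes."""
        try:
            with open(path, "rb") as f:
                return f.read(4) == ELF_MAGIC
        except OSError:
            return False

    if __name__ == "__main__":
        for candidate in ("/usr/lib/ld.so.conf", "/usr/lib64/libc.so.6"):
            verdict = "ELF" if is_elf(candidate) else "not an ELF file"
            print(f"{candidate}: {verdict}")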
Oct 13 05:25:43.488822 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:25:43.495188 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:25:43.501314 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:25:43.503834 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:25:43.507256 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:25:43.510415 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:25:43.512633 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:25:43.514809 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:25:43.514978 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:25:43.516843 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:25:43.520469 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:25:43.529993 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:25:43.539029 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:25:43.543156 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:25:43.545032 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:25:43.546760 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:25:43.551402 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:25:43.552624 jq[1577]: false Oct 13 05:25:43.558033 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:25:43.574265 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:25:43.585947 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing passwd entry cache Oct 13 05:25:43.585671 oslogin_cache_refresh[1579]: Refreshing passwd entry cache Oct 13 05:25:43.587469 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:25:43.597998 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:25:43.603313 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:25:43.604055 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:25:43.607090 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:25:43.616949 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting users, quitting Oct 13 05:25:43.616949 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:25:43.616949 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing group entry cache Oct 13 05:25:43.615184 oslogin_cache_refresh[1579]: Failure getting users, quitting Oct 13 05:25:43.615215 oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Oct 13 05:25:43.615289 oslogin_cache_refresh[1579]: Refreshing group entry cache Oct 13 05:25:43.630345 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting groups, quitting Oct 13 05:25:43.630345 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:25:43.630307 oslogin_cache_refresh[1579]: Failure getting groups, quitting Oct 13 05:25:43.630327 oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:25:43.633183 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:25:43.645842 extend-filesystems[1578]: Found /dev/vda6 Oct 13 05:25:43.656358 extend-filesystems[1578]: Found /dev/vda9 Oct 13 05:25:43.659769 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:25:43.665970 extend-filesystems[1578]: Checking size of /dev/vda9 Oct 13 05:25:43.671725 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:25:43.672654 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:25:43.673145 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:25:43.673482 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:25:43.679849 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:25:43.680267 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:25:43.683292 update_engine[1588]: I20251013 05:25:43.683182 1588 main.cc:92] Flatcar Update Engine starting Oct 13 05:25:43.693428 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:25:43.693801 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:25:43.700559 extend-filesystems[1578]: Resized partition /dev/vda9 Oct 13 05:25:43.719551 extend-filesystems[1619]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:25:43.724021 jq[1595]: true Oct 13 05:25:43.736180 (ntainerd)[1614]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:25:43.756353 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 13 05:25:43.756432 tar[1609]: linux-amd64/LICENSE Oct 13 05:25:43.756432 tar[1609]: linux-amd64/helm Oct 13 05:25:43.783234 jq[1624]: true Oct 13 05:25:43.804025 systemd-logind[1585]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:25:43.804070 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:25:43.807886 systemd-logind[1585]: New seat seat0. Oct 13 05:25:43.820455 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:25:43.835107 dbus-daemon[1575]: [system] SELinux support is enabled Oct 13 05:25:43.836080 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:25:43.844161 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:25:43.844201 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Oct 13 05:25:43.853028 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 05:25:43.881938 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 13 05:25:43.854242 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:25:43.952116 update_engine[1588]: I20251013 05:25:43.868007 1588 update_check_scheduler.cc:74] Next update check in 4m45s Oct 13 05:25:43.854270 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:25:43.880444 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:25:43.890487 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:25:43.964332 extend-filesystems[1619]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 05:25:43.964332 extend-filesystems[1619]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 05:25:43.964332 extend-filesystems[1619]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Oct 13 05:25:43.991826 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Oct 13 05:25:43.971305 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:25:43.972780 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:25:44.051027 bash[1645]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:25:44.068828 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:25:44.077547 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 05:25:44.124111 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:25:44.126830 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:25:44.164174 systemd-networkd[1513]: eth0: Gained IPv6LL Oct 13 05:25:44.179754 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:25:44.185764 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:25:44.200252 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 05:25:44.221242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:25:44.242082 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:25:44.299262 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:25:44.363806 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 05:25:44.364337 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 05:25:44.391870 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
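[Annotation] extend-filesystems grows the root filesystem on /dev/vda9 online; the resize2fs output above reports 456704 blocks before and 1784827 after, and the kernel EXT4 messages confirm 4k blocks. A small sketch converting those figures into sizes (block counts and block size copied from the log):

    # Convert the resize2fs block counts reported above into sizes.
    BLOCK_SIZE = 4096                              # "(4k) blocks" per the log
    old_blocks, new_blocks = 456_704, 1_784_827    # before/after values from the log

    to_gib = lambda blocks: blocks * BLOCK_SIZE / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")   # ~1.74 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")   # ~6.81 GiB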
Oct 13 05:25:44.715441 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:25:44.769155 containerd[1614]: time="2025-10-13T05:25:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:25:44.770365 containerd[1614]: time="2025-10-13T05:25:44.770330362Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:25:44.817964 containerd[1614]: time="2025-10-13T05:25:44.817831498Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.166µs" Oct 13 05:25:44.818251 containerd[1614]: time="2025-10-13T05:25:44.818220989Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:25:44.818356 containerd[1614]: time="2025-10-13T05:25:44.818329292Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:25:44.818911 containerd[1614]: time="2025-10-13T05:25:44.818877921Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:25:44.819089 containerd[1614]: time="2025-10-13T05:25:44.819062387Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:25:44.819227 containerd[1614]: time="2025-10-13T05:25:44.819200616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:25:44.819500 containerd[1614]: time="2025-10-13T05:25:44.819469491Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:25:44.819593 containerd[1614]: time="2025-10-13T05:25:44.819566152Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:25:44.820389 containerd[1614]: time="2025-10-13T05:25:44.820344502Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:25:44.820483 containerd[1614]: time="2025-10-13T05:25:44.820458516Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:25:44.820581 containerd[1614]: time="2025-10-13T05:25:44.820552483Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:25:44.820708 containerd[1614]: time="2025-10-13T05:25:44.820682446Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:25:44.821263 containerd[1614]: time="2025-10-13T05:25:44.821207011Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:25:44.822094 containerd[1614]: time="2025-10-13T05:25:44.822067054Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:25:44.822404 containerd[1614]: time="2025-10-13T05:25:44.822329777Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:25:44.822521 containerd[1614]: time="2025-10-13T05:25:44.822494416Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:25:44.823123 containerd[1614]: time="2025-10-13T05:25:44.823058063Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:25:44.824095 containerd[1614]: time="2025-10-13T05:25:44.824064011Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:25:44.824648 containerd[1614]: time="2025-10-13T05:25:44.824621697Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:25:44.942877 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:25:44.961446 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:25:44.983692 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:58964.service - OpenSSH per-connection server daemon (10.0.0.1:58964). Oct 13 05:25:45.001809 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:25:45.003903 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:25:45.082330 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:25:45.087412 containerd[1614]: time="2025-10-13T05:25:45.087360383Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:25:45.087769 containerd[1614]: time="2025-10-13T05:25:45.087734756Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:25:45.087861 containerd[1614]: time="2025-10-13T05:25:45.087845333Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:25:45.087965 containerd[1614]: time="2025-10-13T05:25:45.087948467Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:25:45.088038 containerd[1614]: time="2025-10-13T05:25:45.088024229Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:25:45.088094 containerd[1614]: time="2025-10-13T05:25:45.088081366Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:25:45.088150 containerd[1614]: time="2025-10-13T05:25:45.088137381Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:25:45.088241 containerd[1614]: time="2025-10-13T05:25:45.088224204Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:25:45.088306 containerd[1614]: time="2025-10-13T05:25:45.088292582Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:25:45.088361 containerd[1614]: time="2025-10-13T05:25:45.088348557Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:25:45.088453 containerd[1614]: time="2025-10-13T05:25:45.088435510Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:25:45.088521 containerd[1614]: time="2025-10-13T05:25:45.088507966Z" level=info msg="loading plugin" 
id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:25:45.092340 containerd[1614]: time="2025-10-13T05:25:45.092318515Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:25:45.093860 containerd[1614]: time="2025-10-13T05:25:45.093836363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:25:45.093954 containerd[1614]: time="2025-10-13T05:25:45.093938805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 05:25:45.094012 containerd[1614]: time="2025-10-13T05:25:45.093999449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:25:45.094089 containerd[1614]: time="2025-10-13T05:25:45.094075221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:25:45.094146 containerd[1614]: time="2025-10-13T05:25:45.094133771Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:25:45.094209 containerd[1614]: time="2025-10-13T05:25:45.094196017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:25:45.094277 containerd[1614]: time="2025-10-13T05:25:45.094263424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 05:25:45.094341 containerd[1614]: time="2025-10-13T05:25:45.094327865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:25:45.094397 containerd[1614]: time="2025-10-13T05:25:45.094384711Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:25:45.094451 containerd[1614]: time="2025-10-13T05:25:45.094439053Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:25:45.094629 containerd[1614]: time="2025-10-13T05:25:45.094600336Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:25:45.094771 containerd[1614]: time="2025-10-13T05:25:45.094696426Z" level=info msg="Start snapshots syncer" Oct 13 05:25:45.097878 containerd[1614]: time="2025-10-13T05:25:45.097517970Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:25:45.108957 containerd[1614]: time="2025-10-13T05:25:45.108482270Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:25:45.108957 containerd[1614]: time="2025-10-13T05:25:45.108672316Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.108838728Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.113274330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.113336256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.113372674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.113401979Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.113438728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.113466661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.113498691Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.113575405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:25:45.117426 containerd[1614]: 
time="2025-10-13T05:25:45.114946457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.114974399Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.115421889Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.115462966Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:25:45.117426 containerd[1614]: time="2025-10-13T05:25:45.115479627Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:25:45.119054 containerd[1614]: time="2025-10-13T05:25:45.115499154Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:25:45.119054 containerd[1614]: time="2025-10-13T05:25:45.115513070Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:25:45.119054 containerd[1614]: time="2025-10-13T05:25:45.115529090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:25:45.119054 containerd[1614]: time="2025-10-13T05:25:45.115545020Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:25:45.119054 containerd[1614]: time="2025-10-13T05:25:45.115572091Z" level=info msg="runtime interface created" Oct 13 05:25:45.119054 containerd[1614]: time="2025-10-13T05:25:45.115579274Z" level=info msg="created NRI interface" Oct 13 05:25:45.119054 containerd[1614]: time="2025-10-13T05:25:45.115593531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:25:45.119054 containerd[1614]: time="2025-10-13T05:25:45.115613168Z" level=info msg="Connect containerd service" Oct 13 05:25:45.119054 containerd[1614]: time="2025-10-13T05:25:45.115669453Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:25:45.124837 containerd[1614]: time="2025-10-13T05:25:45.122257934Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:25:45.177912 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:25:45.197393 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:25:45.209363 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:25:45.214254 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:25:45.417302 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 58964 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:45.429180 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:45.447971 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:25:45.473329 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Oct 13 05:25:45.503542 tar[1609]: linux-amd64/README.md Oct 13 05:25:45.518697 systemd-logind[1585]: New session 1 of user core. Oct 13 05:25:45.578813 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:25:45.587678 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:25:45.621071 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:25:45.693197 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:25:45.712724 systemd-logind[1585]: New session c1 of user core. Oct 13 05:25:45.825445 containerd[1614]: time="2025-10-13T05:25:45.825242603Z" level=info msg="Start subscribing containerd event" Oct 13 05:25:45.825445 containerd[1614]: time="2025-10-13T05:25:45.825376895Z" level=info msg="Start recovering state" Oct 13 05:25:45.825995 containerd[1614]: time="2025-10-13T05:25:45.825296795Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:25:45.825995 containerd[1614]: time="2025-10-13T05:25:45.825657421Z" level=info msg="Start event monitor" Oct 13 05:25:45.825995 containerd[1614]: time="2025-10-13T05:25:45.825692707Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:25:45.825995 containerd[1614]: time="2025-10-13T05:25:45.825710481Z" level=info msg="Start streaming server" Oct 13 05:25:45.825995 containerd[1614]: time="2025-10-13T05:25:45.825740357Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:25:45.825995 containerd[1614]: time="2025-10-13T05:25:45.825797103Z" level=info msg="runtime interface starting up..." Oct 13 05:25:45.825995 containerd[1614]: time="2025-10-13T05:25:45.825814215Z" level=info msg="starting plugins..." Oct 13 05:25:45.826170 containerd[1614]: time="2025-10-13T05:25:45.825889988Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:25:45.826287 containerd[1614]: time="2025-10-13T05:25:45.826260272Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:25:45.826500 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:25:45.829031 containerd[1614]: time="2025-10-13T05:25:45.829000373Z" level=info msg="containerd successfully booted in 1.060553s" Oct 13 05:25:45.990491 systemd[1715]: Queued start job for default target default.target. Oct 13 05:25:45.997888 systemd[1715]: Created slice app.slice - User Application Slice. Oct 13 05:25:45.998055 systemd[1715]: Reached target paths.target - Paths. Oct 13 05:25:45.998110 systemd[1715]: Reached target timers.target - Timers. Oct 13 05:25:46.006422 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:25:46.046647 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:25:46.046859 systemd[1715]: Reached target sockets.target - Sockets. Oct 13 05:25:46.046964 systemd[1715]: Reached target basic.target - Basic System. Oct 13 05:25:46.047019 systemd[1715]: Reached target default.target - Main User Target. Oct 13 05:25:46.047067 systemd[1715]: Startup finished in 309ms. Oct 13 05:25:46.047143 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:25:46.082346 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:25:46.185992 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:58974.service - OpenSSH per-connection server daemon (10.0.0.1:58974). 
Oct 13 05:25:46.314721 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 58974 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:46.318833 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:46.334177 systemd-logind[1585]: New session 2 of user core. Oct 13 05:25:46.357256 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:25:46.432687 sshd[1734]: Connection closed by 10.0.0.1 port 58974 Oct 13 05:25:46.432869 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:46.450630 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:58974.service: Deactivated successfully. Oct 13 05:25:46.455237 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:25:46.457472 systemd-logind[1585]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:25:46.467480 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:58990.service - OpenSSH per-connection server daemon (10.0.0.1:58990). Oct 13 05:25:46.471256 systemd-logind[1585]: Removed session 2. Oct 13 05:25:46.560957 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 58990 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:46.564386 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:46.577052 systemd-logind[1585]: New session 3 of user core. Oct 13 05:25:46.593077 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:25:46.741724 sshd[1743]: Connection closed by 10.0.0.1 port 58990 Oct 13 05:25:46.742443 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:46.751029 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:58990.service: Deactivated successfully. Oct 13 05:25:46.753560 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:25:46.761432 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:25:46.765462 systemd-logind[1585]: Removed session 3. Oct 13 05:25:47.510849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:25:47.567192 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:25:47.571335 systemd[1]: Startup finished in 3.232s (kernel) + 8.451s (initrd) + 8.410s (userspace) = 20.095s. Oct 13 05:25:47.584438 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:25:48.698076 kubelet[1753]: E1013 05:25:48.697996 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:25:48.702797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:25:48.703086 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:25:48.703595 systemd[1]: kubelet.service: Consumed 2.042s CPU time, 258.3M memory peak. Oct 13 05:25:56.753978 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:45034.service - OpenSSH per-connection server daemon (10.0.0.1:45034). 
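[Annotation] The kubelet above exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a node at this stage that file is normally written later by a provisioning step (for example kubeadm or an equivalent), so the repeated failures that follow are expected rather than a packaging problem. A minimal sketch of the same precondition, using the path taken from the error message:

    # Reproduce the precondition the kubelet is failing on: its --config file must exist.
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the error above

    if not KUBELET_CONFIG.exists():
        # Mirrors the "failed to load Kubelet config file ... no such file or directory" error.
        print(f"kubelet would fail to start: {KUBELET_CONFIG} is missing (expected before provisioning)")
    else:
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")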
Oct 13 05:25:56.850676 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 45034 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:56.852266 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:56.857123 systemd-logind[1585]: New session 4 of user core. Oct 13 05:25:56.866104 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:25:56.923246 sshd[1769]: Connection closed by 10.0.0.1 port 45034 Oct 13 05:25:56.923651 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:56.932390 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:45034.service: Deactivated successfully. Oct 13 05:25:56.934914 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:25:56.935883 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:25:56.940194 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:47686.service - OpenSSH per-connection server daemon (10.0.0.1:47686). Oct 13 05:25:56.941014 systemd-logind[1585]: Removed session 4. Oct 13 05:25:56.999954 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 47686 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:57.001560 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:57.006261 systemd-logind[1585]: New session 5 of user core. Oct 13 05:25:57.016046 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 13 05:25:57.065226 sshd[1778]: Connection closed by 10.0.0.1 port 47686 Oct 13 05:25:57.065550 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:57.081866 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:47686.service: Deactivated successfully. Oct 13 05:25:57.083712 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:25:57.084416 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:25:57.086901 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:47696.service - OpenSSH per-connection server daemon (10.0.0.1:47696). Oct 13 05:25:57.087742 systemd-logind[1585]: Removed session 5. Oct 13 05:25:57.142977 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 47696 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:57.144641 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:57.149017 systemd-logind[1585]: New session 6 of user core. Oct 13 05:25:57.159050 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 05:25:57.212745 sshd[1787]: Connection closed by 10.0.0.1 port 47696 Oct 13 05:25:57.213155 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:57.225581 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:47696.service: Deactivated successfully. Oct 13 05:25:57.227360 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:25:57.228257 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:25:57.231037 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:47700.service - OpenSSH per-connection server daemon (10.0.0.1:47700). Oct 13 05:25:57.231810 systemd-logind[1585]: Removed session 6. 
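[Annotation] The run of sshd entries above (sessions 4 through 6, each opened and closed within about a second) follows a fixed format, which makes it easy to pull out the user, source address, port and key fingerprint per connection. A small parsing sketch over a line copied (and shortened) from this journal:

    # Extract user, source address, port and key fingerprint from sshd "Accepted publickey" lines.
    import re

    PATTERN = re.compile(
        r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) port (?P<port>\d+) ssh2: RSA (?P<fpr>SHA256:\S+)"
    )

    sample = ("sshd[1766]: Accepted publickey for core from 10.0.0.1 port 45034 "
              "ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w")

    m = PATTERN.search(sample)
    if m:
        print(m.group("user"), m.group("ip"), m.group("port"), m.group("fpr"))
    # -> core 10.0.0.1 45034 SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w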
Oct 13 05:25:57.297767 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 47700 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:57.299452 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:57.303961 systemd-logind[1585]: New session 7 of user core. Oct 13 05:25:57.314078 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 05:25:57.380273 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:25:57.380603 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:25:57.401577 sudo[1797]: pam_unix(sudo:session): session closed for user root Oct 13 05:25:57.403667 sshd[1796]: Connection closed by 10.0.0.1 port 47700 Oct 13 05:25:57.404081 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:57.419770 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:47700.service: Deactivated successfully. Oct 13 05:25:57.421799 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:25:57.422728 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:25:57.425869 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:47708.service - OpenSSH per-connection server daemon (10.0.0.1:47708). Oct 13 05:25:57.426775 systemd-logind[1585]: Removed session 7. Oct 13 05:25:57.488739 sshd[1803]: Accepted publickey for core from 10.0.0.1 port 47708 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:57.490691 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:57.496039 systemd-logind[1585]: New session 8 of user core. Oct 13 05:25:57.504138 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 05:25:57.562320 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:25:57.562714 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:25:57.571965 sudo[1808]: pam_unix(sudo:session): session closed for user root Oct 13 05:25:57.581689 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:25:57.582024 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:25:57.595223 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:25:57.649278 augenrules[1830]: No rules Oct 13 05:25:57.650977 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:25:57.651352 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:25:57.652591 sudo[1807]: pam_unix(sudo:session): session closed for user root Oct 13 05:25:57.654342 sshd[1806]: Connection closed by 10.0.0.1 port 47708 Oct 13 05:25:57.654767 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:57.668155 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:47708.service: Deactivated successfully. Oct 13 05:25:57.670365 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:25:57.671295 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:25:57.674495 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:47724.service - OpenSSH per-connection server daemon (10.0.0.1:47724). Oct 13 05:25:57.675492 systemd-logind[1585]: Removed session 8. 
Oct 13 05:25:57.741520 sshd[1839]: Accepted publickey for core from 10.0.0.1 port 47724 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:57.743022 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:57.747494 systemd-logind[1585]: New session 9 of user core. Oct 13 05:25:57.762144 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:25:57.821131 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:25:57.821536 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:25:58.501337 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:25:58.531292 (dockerd)[1863]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:25:58.834704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:25:58.836938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:25:59.021743 dockerd[1863]: time="2025-10-13T05:25:59.021656167Z" level=info msg="Starting up" Oct 13 05:25:59.022626 dockerd[1863]: time="2025-10-13T05:25:59.022564561Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:25:59.050324 dockerd[1863]: time="2025-10-13T05:25:59.050255351Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:25:59.120787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:25:59.138207 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:25:59.600001 kubelet[1897]: E1013 05:25:59.599903 1897 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:25:59.606098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:25:59.606304 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:25:59.606693 systemd[1]: kubelet.service: Consumed 277ms CPU time, 110.6M memory peak. Oct 13 05:25:59.683386 dockerd[1863]: time="2025-10-13T05:25:59.683321098Z" level=info msg="Loading containers: start." Oct 13 05:25:59.694954 kernel: Initializing XFRM netlink socket Oct 13 05:26:00.458483 systemd-networkd[1513]: docker0: Link UP Oct 13 05:26:00.463356 dockerd[1863]: time="2025-10-13T05:26:00.463311004Z" level=info msg="Loading containers: done." 
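[Annotation] dockerd (like containerd) emits logfmt-style key=value records such as the "Starting up" line above, which can be turned into dictionaries without a dedicated parser. A small sketch, using a line copied from the dockerd output above; shlex handles the quoted values:

    # Parse a dockerd/containerd logfmt-style record into a dict.
    import shlex

    line = 'time="2025-10-13T05:25:59.021656167Z" level=info msg="Starting up"'
    record = dict(field.split("=", 1) for field in shlex.split(line))
    print(record["level"], record["msg"])   # -> info Starting up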
Oct 13 05:26:00.483483 dockerd[1863]: time="2025-10-13T05:26:00.483416133Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:26:00.483655 dockerd[1863]: time="2025-10-13T05:26:00.483539444Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:26:00.483682 dockerd[1863]: time="2025-10-13T05:26:00.483670701Z" level=info msg="Initializing buildkit" Oct 13 05:26:00.484357 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1527496444-merged.mount: Deactivated successfully. Oct 13 05:26:00.516165 dockerd[1863]: time="2025-10-13T05:26:00.516091504Z" level=info msg="Completed buildkit initialization" Oct 13 05:26:00.523894 dockerd[1863]: time="2025-10-13T05:26:00.523857444Z" level=info msg="Daemon has completed initialization" Oct 13 05:26:00.524030 dockerd[1863]: time="2025-10-13T05:26:00.523961068Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:26:00.524200 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:26:01.693799 containerd[1614]: time="2025-10-13T05:26:01.693703448Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 13 05:26:03.059427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount420413225.mount: Deactivated successfully. Oct 13 05:26:05.108963 containerd[1614]: time="2025-10-13T05:26:05.108882829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:05.110219 containerd[1614]: time="2025-10-13T05:26:05.110153673Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 13 05:26:05.112133 containerd[1614]: time="2025-10-13T05:26:05.112086789Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:05.115705 containerd[1614]: time="2025-10-13T05:26:05.115640186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:05.116703 containerd[1614]: time="2025-10-13T05:26:05.116638950Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 3.422872533s" Oct 13 05:26:05.116703 containerd[1614]: time="2025-10-13T05:26:05.116682992Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 13 05:26:05.117401 containerd[1614]: time="2025-10-13T05:26:05.117363439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 13 05:26:06.917996 containerd[1614]: time="2025-10-13T05:26:06.917600382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:06.918441 
containerd[1614]: time="2025-10-13T05:26:06.918289064Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 13 05:26:06.919710 containerd[1614]: time="2025-10-13T05:26:06.919673201Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:06.923163 containerd[1614]: time="2025-10-13T05:26:06.923096443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:06.924350 containerd[1614]: time="2025-10-13T05:26:06.924296705Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.806899152s" Oct 13 05:26:06.924350 containerd[1614]: time="2025-10-13T05:26:06.924346017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 13 05:26:06.924956 containerd[1614]: time="2025-10-13T05:26:06.924932768Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 13 05:26:08.748909 containerd[1614]: time="2025-10-13T05:26:08.748836532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:08.749869 containerd[1614]: time="2025-10-13T05:26:08.749799288Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 13 05:26:08.751213 containerd[1614]: time="2025-10-13T05:26:08.751159930Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:08.755794 containerd[1614]: time="2025-10-13T05:26:08.755719735Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.830708159s" Oct 13 05:26:08.755876 containerd[1614]: time="2025-10-13T05:26:08.755800657Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 13 05:26:08.756854 containerd[1614]: time="2025-10-13T05:26:08.756791796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:08.757770 containerd[1614]: time="2025-10-13T05:26:08.757732912Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 05:26:09.637555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 13 05:26:09.641569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:10.126246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:10.137446 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:26:10.143712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47724075.mount: Deactivated successfully. Oct 13 05:26:10.704321 kubelet[2178]: E1013 05:26:10.704208 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:26:10.709175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:26:10.709445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:26:10.709855 systemd[1]: kubelet.service: Consumed 510ms CPU time, 112.3M memory peak. Oct 13 05:26:11.652009 containerd[1614]: time="2025-10-13T05:26:11.651894926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:11.652603 containerd[1614]: time="2025-10-13T05:26:11.652567198Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 13 05:26:11.653673 containerd[1614]: time="2025-10-13T05:26:11.653638588Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:11.655572 containerd[1614]: time="2025-10-13T05:26:11.655505540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:11.655922 containerd[1614]: time="2025-10-13T05:26:11.655885653Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.898120872s" Oct 13 05:26:11.655960 containerd[1614]: time="2025-10-13T05:26:11.655932251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 13 05:26:11.656774 containerd[1614]: time="2025-10-13T05:26:11.656603550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 05:26:12.264426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791683473.mount: Deactivated successfully. 
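[Annotation] kubelet.service keeps failing on the missing config file and systemd keeps rescheduling it; the "Scheduled restart job, restart counter is at N" entries land roughly ten seconds after each failure, consistent with a restart delay of about 10 s configured in the unit (the unit file itself is not shown in this log, so that figure is inferred, not confirmed). A sketch computing the gaps from timestamps copied out of the journal above:

    # Gap between each kubelet failure and the next scheduled restart (timestamps from the journal).
    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f"
    pairs = [
        ("2025-10-13 05:25:48.703086", "2025-10-13 05:25:58.834704"),  # failure #1 -> restart counter 1
        ("2025-10-13 05:25:59.606304", "2025-10-13 05:26:09.637555"),  # failure #2 -> restart counter 2
    ]
    for failed, rescheduled in pairs:
        delta = datetime.strptime(rescheduled, FMT) - datetime.strptime(failed, FMT)
        print(f"restart scheduled {delta.total_seconds():.1f} s after failure")   # ~10.1 s, ~10.0 s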
Oct 13 05:26:13.393246 containerd[1614]: time="2025-10-13T05:26:13.393164893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:13.393829 containerd[1614]: time="2025-10-13T05:26:13.393758607Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 13 05:26:13.395371 containerd[1614]: time="2025-10-13T05:26:13.395307874Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:13.400375 containerd[1614]: time="2025-10-13T05:26:13.400295952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:13.401509 containerd[1614]: time="2025-10-13T05:26:13.401453363Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.744822191s" Oct 13 05:26:13.401509 containerd[1614]: time="2025-10-13T05:26:13.401500060Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 13 05:26:13.402576 containerd[1614]: time="2025-10-13T05:26:13.402543298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 13 05:26:15.062189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount431403147.mount: Deactivated successfully. 
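[Annotation] The image pulls above report both a size and an elapsed time, so effective throughput is easy to derive. A sketch over two of the pulls, with the figures copied from the "Pulled image ... size ... in ..." entries (the sizes are whatever containerd reports for the pulled content):

    # Effective throughput of two of the image pulls reported above (bytes and seconds from the log).
    pulls = {
        "kube-apiserver:v1.34.1": (27_061_991, 3.422872533),
        "coredns:v1.12.1":        (22_384_805, 1.744822191),
    }
    for image, (size_bytes, seconds) in pulls.items():
        mib_per_s = size_bytes / seconds / 2**20
        print(f"{image}: {mib_per_s:.1f} MiB/s")   # roughly 7.5 and 12.2 MiB/s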
Oct 13 05:26:15.069860 containerd[1614]: time="2025-10-13T05:26:15.069774863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:15.070980 containerd[1614]: time="2025-10-13T05:26:15.070929489Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 13 05:26:15.072322 containerd[1614]: time="2025-10-13T05:26:15.072244947Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:15.074203 containerd[1614]: time="2025-10-13T05:26:15.074141766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:15.074740 containerd[1614]: time="2025-10-13T05:26:15.074696036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.67212145s" Oct 13 05:26:15.074740 containerd[1614]: time="2025-10-13T05:26:15.074727254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 13 05:26:15.075575 containerd[1614]: time="2025-10-13T05:26:15.075350223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 13 05:26:18.563070 containerd[1614]: time="2025-10-13T05:26:18.562999997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:18.576753 containerd[1614]: time="2025-10-13T05:26:18.576651341Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 13 05:26:18.589758 containerd[1614]: time="2025-10-13T05:26:18.589680831Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:18.608439 containerd[1614]: time="2025-10-13T05:26:18.608349989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:18.610217 containerd[1614]: time="2025-10-13T05:26:18.610112602Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.534719618s" Oct 13 05:26:18.610217 containerd[1614]: time="2025-10-13T05:26:18.610213965Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 13 05:26:20.834574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 13 05:26:20.836719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 13 05:26:21.076444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:21.096230 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:26:21.140697 kubelet[2320]: E1013 05:26:21.140619 2320 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:26:21.145404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:26:21.145607 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:26:21.146116 systemd[1]: kubelet.service: Consumed 242ms CPU time, 110.2M memory peak. Oct 13 05:26:23.327968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:23.328192 systemd[1]: kubelet.service: Consumed 242ms CPU time, 110.2M memory peak. Oct 13 05:26:23.330852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:23.360164 systemd[1]: Reload requested from client PID 2336 ('systemctl') (unit session-9.scope)... Oct 13 05:26:23.360181 systemd[1]: Reloading... Oct 13 05:26:23.461000 zram_generator::config[2382]: No configuration found. Oct 13 05:26:24.945594 systemd[1]: Reloading finished in 1585 ms. Oct 13 05:26:25.001843 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:26:25.001980 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:26:25.002303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:25.002351 systemd[1]: kubelet.service: Consumed 161ms CPU time, 98.1M memory peak. Oct 13 05:26:25.004109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:25.187775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:25.193184 (kubelet)[2427]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:26:25.236754 kubelet[2427]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:26:25.236754 kubelet[2427]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:26:25.236754 kubelet[2427]: I1013 05:26:25.236715 2427 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:26:26.170878 kubelet[2427]: I1013 05:26:26.170782 2427 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:26:26.170878 kubelet[2427]: I1013 05:26:26.170833 2427 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:26:26.173093 kubelet[2427]: I1013 05:26:26.173053 2427 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:26:26.173093 kubelet[2427]: I1013 05:26:26.173083 2427 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
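The crash above is the normal pre-init state: kubelet.service is enabled before `kubeadm init` has finished, and the kubelet exits immediately because /var/lib/kubelet/config.yaml has not been written yet, so systemd keeps restarting it (restart counter 3 in the earlier line). An illustrative Go sketch, standard library only, that waits for that file to appear; this is not how systemd or kubeadm actually gate the unit:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// Polls for the kubelet config that "kubeadm init" writes. The path comes from
// the error logged above; the polling loop itself is purely illustrative.
func main() {
	const path = "/var/lib/kubelet/config.yaml"
	for i := 0; i < 60; i++ {
		if _, err := os.Stat(path); err == nil {
			fmt.Println("config present:", path)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for", path)
}
```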
Oct 13 05:26:26.173456 kubelet[2427]: I1013 05:26:26.173422 2427 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:26:26.615934 kubelet[2427]: E1013 05:26:26.615828 2427 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:26:26.616459 kubelet[2427]: I1013 05:26:26.616104 2427 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:26:26.619737 kubelet[2427]: I1013 05:26:26.619698 2427 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:26:26.625772 kubelet[2427]: I1013 05:26:26.625724 2427 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 13 05:26:26.626086 kubelet[2427]: I1013 05:26:26.626048 2427 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:26:26.626235 kubelet[2427]: I1013 05:26:26.626081 2427 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:26:26.626366 kubelet[2427]: I1013 05:26:26.626243 2427 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:26:26.626366 kubelet[2427]: I1013 05:26:26.626252 2427 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:26:26.626421 kubelet[2427]: I1013 05:26:26.626374 2427 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:26:26.632025 kubelet[2427]: I1013 05:26:26.631980 2427 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:26:26.632264 kubelet[2427]: I1013 05:26:26.632232 2427 kubelet.go:475] "Attempting to sync node with API server" Oct 13 
05:26:26.632264 kubelet[2427]: I1013 05:26:26.632256 2427 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:26:26.632313 kubelet[2427]: I1013 05:26:26.632303 2427 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:26:26.632372 kubelet[2427]: I1013 05:26:26.632345 2427 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:26:26.633165 kubelet[2427]: E1013 05:26:26.633117 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:26:26.633320 kubelet[2427]: E1013 05:26:26.633249 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:26:26.635529 kubelet[2427]: I1013 05:26:26.635501 2427 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:26:26.636067 kubelet[2427]: I1013 05:26:26.636043 2427 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:26:26.636108 kubelet[2427]: I1013 05:26:26.636074 2427 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:26:26.636142 kubelet[2427]: W1013 05:26:26.636123 2427 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
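Every "Failed to watch ... connection refused" entry above has the same root cause: the client is dialing https://10.0.0.16:6443 before the kube-apiserver static pod exists. A throwaway Go sketch (standard library only) that retries a plain TCP dial against that endpoint until something is listening; the address is the one in the log, the rest is illustrative:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Retries a TCP dial against the API server endpoint that the reflectors above
// are failing to reach, until the kube-apiserver static pod starts listening.
func main() {
	const addr = "10.0.0.16:6443"
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections on", addr)
			return
		}
		fmt.Println("still waiting:", err)
		time.Sleep(time.Second)
	}
}
```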
Oct 13 05:26:26.640493 kubelet[2427]: I1013 05:26:26.640447 2427 server.go:1262] "Started kubelet" Oct 13 05:26:26.640598 kubelet[2427]: I1013 05:26:26.640525 2427 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:26:26.640742 kubelet[2427]: I1013 05:26:26.640705 2427 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:26:26.640821 kubelet[2427]: I1013 05:26:26.640755 2427 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:26:26.641293 kubelet[2427]: I1013 05:26:26.641265 2427 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:26:26.641556 kubelet[2427]: I1013 05:26:26.641525 2427 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:26:26.642205 kubelet[2427]: I1013 05:26:26.642187 2427 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:26:26.643219 kubelet[2427]: I1013 05:26:26.643199 2427 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:26:26.645623 kubelet[2427]: E1013 05:26:26.645590 2427 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:26:26.645711 kubelet[2427]: I1013 05:26:26.645700 2427 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:26:26.646009 kubelet[2427]: I1013 05:26:26.645978 2427 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:26:26.646360 kubelet[2427]: I1013 05:26:26.646342 2427 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:26:26.646626 kubelet[2427]: E1013 05:26:26.645408 2427 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df5b4e4086ecf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:26:26.640408271 +0000 UTC m=+1.442800119,LastTimestamp:2025-10-13 05:26:26.640408271 +0000 UTC m=+1.442800119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:26:26.646858 kubelet[2427]: I1013 05:26:26.646833 2427 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:26:26.646904 kubelet[2427]: E1013 05:26:26.646861 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" Oct 13 05:26:26.647136 kubelet[2427]: E1013 05:26:26.647112 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:26:26.647761 kubelet[2427]: I1013 05:26:26.647682 2427 factory.go:221] Registration of the 
crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:26:26.648991 kubelet[2427]: I1013 05:26:26.648815 2427 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:26:26.651088 kubelet[2427]: E1013 05:26:26.651053 2427 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:26:26.663367 kubelet[2427]: I1013 05:26:26.663336 2427 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:26:26.663367 kubelet[2427]: I1013 05:26:26.663354 2427 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:26:26.663367 kubelet[2427]: I1013 05:26:26.663371 2427 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:26:26.667872 kubelet[2427]: I1013 05:26:26.667846 2427 policy_none.go:49] "None policy: Start" Oct 13 05:26:26.667872 kubelet[2427]: I1013 05:26:26.667871 2427 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:26:26.668002 kubelet[2427]: I1013 05:26:26.667886 2427 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:26:26.672195 kubelet[2427]: I1013 05:26:26.672053 2427 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:26:26.672658 kubelet[2427]: I1013 05:26:26.672631 2427 policy_none.go:47] "Start" Oct 13 05:26:26.673670 kubelet[2427]: I1013 05:26:26.673630 2427 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 13 05:26:26.673670 kubelet[2427]: I1013 05:26:26.673668 2427 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:26:26.673754 kubelet[2427]: I1013 05:26:26.673702 2427 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:26:26.673780 kubelet[2427]: E1013 05:26:26.673752 2427 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:26:26.675573 kubelet[2427]: E1013 05:26:26.674572 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:26:26.680335 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:26:26.699630 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:26:26.704731 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
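The paired factory messages above come from the kubelet probing both runtime sockets: the CRI-O socket does not exist on this node, so only the containerd factory registers. A quick Go sketch that checks the same two paths; /var/run/crio/crio.sock is taken from the error above, while /run/containerd/containerd.sock is containerd's conventional location and is an assumption of this sketch rather than something in this log:

```go
package main

import (
	"fmt"
	"os"
)

// Reports which container runtime sockets are present on the node.
func main() {
	for _, sock := range []string{
		"/var/run/crio/crio.sock",        // absent here, hence the failed registration
		"/run/containerd/containerd.sock", // assumed conventional containerd socket
	} {
		if fi, err := os.Stat(sock); err == nil && fi.Mode()&os.ModeSocket != 0 {
			fmt.Println("present:", sock)
		} else {
			fmt.Println("absent: ", sock)
		}
	}
}
```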
Oct 13 05:26:26.719193 kubelet[2427]: E1013 05:26:26.719153 2427 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:26:26.719453 kubelet[2427]: I1013 05:26:26.719425 2427 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:26:26.719496 kubelet[2427]: I1013 05:26:26.719441 2427 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:26:26.719850 kubelet[2427]: I1013 05:26:26.719818 2427 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:26:26.721092 kubelet[2427]: E1013 05:26:26.721053 2427 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:26:26.721176 kubelet[2427]: E1013 05:26:26.721120 2427 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:26:26.788531 systemd[1]: Created slice kubepods-burstable-pode07c9a29824292844959c9351e9aa855.slice - libcontainer container kubepods-burstable-pode07c9a29824292844959c9351e9aa855.slice. Oct 13 05:26:26.818941 kubelet[2427]: E1013 05:26:26.818821 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:26.821519 kubelet[2427]: I1013 05:26:26.820961 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:26.821519 kubelet[2427]: E1013 05:26:26.821446 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Oct 13 05:26:26.823560 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 13 05:26:26.838473 kubelet[2427]: E1013 05:26:26.838427 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:26.841208 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
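The per-pod slices above correspond to the three control-plane static pods the kubelet found under the path it logged earlier ("Adding static pod path" path="/etc/kubernetes/manifests"); the hex string in each slice name is the pod UID the kubelet derives for a static pod. A small sketch that just lists that directory (requires root on a real node); the path is from the log, the listing is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Lists the static pod manifests that back the kubepods-burstable-pod*.slice
// units created above.
func main() {
	const dir = "/etc/kubernetes/manifests"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, ":", err)
		return
	}
	for _, e := range entries {
		fmt.Println(filepath.Join(dir, e.Name()))
	}
}
```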
Oct 13 05:26:26.843528 kubelet[2427]: E1013 05:26:26.843487 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:26.846860 kubelet[2427]: I1013 05:26:26.846817 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e07c9a29824292844959c9351e9aa855-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07c9a29824292844959c9351e9aa855\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:26.846860 kubelet[2427]: I1013 05:26:26.846846 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e07c9a29824292844959c9351e9aa855-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07c9a29824292844959c9351e9aa855\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:26.847024 kubelet[2427]: I1013 05:26:26.846867 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:26.847024 kubelet[2427]: I1013 05:26:26.846901 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e07c9a29824292844959c9351e9aa855-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e07c9a29824292844959c9351e9aa855\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:26.847024 kubelet[2427]: I1013 05:26:26.846994 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:26.847136 kubelet[2427]: I1013 05:26:26.847044 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:26.847136 kubelet[2427]: I1013 05:26:26.847066 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:26.847136 kubelet[2427]: I1013 05:26:26.847092 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:26.847249 kubelet[2427]: I1013 05:26:26.847133 2427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:26.847432 kubelet[2427]: E1013 05:26:26.847391 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="400ms" Oct 13 05:26:27.024296 kubelet[2427]: I1013 05:26:27.023820 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:27.024701 kubelet[2427]: E1013 05:26:27.024657 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Oct 13 05:26:27.127077 kubelet[2427]: E1013 05:26:27.126967 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:27.128315 containerd[1614]: time="2025-10-13T05:26:27.128246796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e07c9a29824292844959c9351e9aa855,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:27.143353 kubelet[2427]: E1013 05:26:27.143285 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:27.144066 containerd[1614]: time="2025-10-13T05:26:27.143998087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:27.148281 kubelet[2427]: E1013 05:26:27.148244 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:27.148848 containerd[1614]: time="2025-10-13T05:26:27.148799822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:27.248487 kubelet[2427]: E1013 05:26:27.248402 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Oct 13 05:26:27.427208 kubelet[2427]: I1013 05:26:27.427165 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:27.427776 kubelet[2427]: E1013 05:26:27.427695 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Oct 13 05:26:27.573948 kubelet[2427]: E1013 05:26:27.573837 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:26:27.790812 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1432690438.mount: Deactivated successfully. Oct 13 05:26:27.797063 kubelet[2427]: E1013 05:26:27.797001 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:26:27.800823 containerd[1614]: time="2025-10-13T05:26:27.800752770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:26:27.804722 containerd[1614]: time="2025-10-13T05:26:27.804632218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 13 05:26:27.806250 containerd[1614]: time="2025-10-13T05:26:27.806149942Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:26:27.808473 containerd[1614]: time="2025-10-13T05:26:27.808394854Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:26:27.809514 containerd[1614]: time="2025-10-13T05:26:27.809432689Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:26:27.810826 containerd[1614]: time="2025-10-13T05:26:27.810779640Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:26:27.812193 containerd[1614]: time="2025-10-13T05:26:27.812145978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:26:27.813348 containerd[1614]: time="2025-10-13T05:26:27.813298420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:26:27.817166 containerd[1614]: time="2025-10-13T05:26:27.817093678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 664.085367ms" Oct 13 05:26:27.818067 containerd[1614]: time="2025-10-13T05:26:27.818016656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 686.533952ms" Oct 13 05:26:27.826817 containerd[1614]: time="2025-10-13T05:26:27.826730499Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 677.345991ms" Oct 13 05:26:27.863798 kubelet[2427]: E1013 05:26:27.862886 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:26:28.006288 containerd[1614]: time="2025-10-13T05:26:28.006135446Z" level=info msg="connecting to shim ba605503c5e37dfa71f874e0bf2cdfe442d8222b5d4a02cda522c4b93b28a4aa" address="unix:///run/containerd/s/e8a4f7feff7de17c19bd2e86c76a1ee37a3652e40a200e4717eab66c27279ece" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:28.019822 containerd[1614]: time="2025-10-13T05:26:28.019246822Z" level=info msg="connecting to shim 96c7cdf8908fb1cd68e7cc04d1e80210c07c0cef49537f9ebc5348dc2b6477ce" address="unix:///run/containerd/s/121d2ecf3a2445be299f8de2fc7c0a4e3009e8a73b983f9d9d4757260af4300a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:28.019822 containerd[1614]: time="2025-10-13T05:26:28.019248255Z" level=info msg="connecting to shim 0ec1f6c7de5327b9021380b84d003a47a78d6be798d59e112870e0ca9d230134" address="unix:///run/containerd/s/50ea442c043ff4634d84b7d94b7ee071516044dfacc6c1344547a43ccd7a50d0" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:28.049214 kubelet[2427]: E1013 05:26:28.049079 2427 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s" Oct 13 05:26:28.051249 systemd[1]: Started cri-containerd-ba605503c5e37dfa71f874e0bf2cdfe442d8222b5d4a02cda522c4b93b28a4aa.scope - libcontainer container ba605503c5e37dfa71f874e0bf2cdfe442d8222b5d4a02cda522c4b93b28a4aa. Oct 13 05:26:28.069370 systemd[1]: Started cri-containerd-0ec1f6c7de5327b9021380b84d003a47a78d6be798d59e112870e0ca9d230134.scope - libcontainer container 0ec1f6c7de5327b9021380b84d003a47a78d6be798d59e112870e0ca9d230134. Oct 13 05:26:28.090146 systemd[1]: Started cri-containerd-96c7cdf8908fb1cd68e7cc04d1e80210c07c0cef49537f9ebc5348dc2b6477ce.scope - libcontainer container 96c7cdf8908fb1cd68e7cc04d1e80210c07c0cef49537f9ebc5348dc2b6477ce. 
Oct 13 05:26:28.178740 containerd[1614]: time="2025-10-13T05:26:28.178662939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e07c9a29824292844959c9351e9aa855,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba605503c5e37dfa71f874e0bf2cdfe442d8222b5d4a02cda522c4b93b28a4aa\"" Oct 13 05:26:28.180232 kubelet[2427]: E1013 05:26:28.180202 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:28.180932 containerd[1614]: time="2025-10-13T05:26:28.180890827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ec1f6c7de5327b9021380b84d003a47a78d6be798d59e112870e0ca9d230134\"" Oct 13 05:26:28.181788 kubelet[2427]: E1013 05:26:28.181765 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:28.181938 containerd[1614]: time="2025-10-13T05:26:28.181891250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"96c7cdf8908fb1cd68e7cc04d1e80210c07c0cef49537f9ebc5348dc2b6477ce\"" Oct 13 05:26:28.183189 kubelet[2427]: E1013 05:26:28.183161 2427 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:26:28.183347 kubelet[2427]: E1013 05:26:28.183285 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:28.186688 containerd[1614]: time="2025-10-13T05:26:28.186642082Z" level=info msg="CreateContainer within sandbox \"ba605503c5e37dfa71f874e0bf2cdfe442d8222b5d4a02cda522c4b93b28a4aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:26:28.188621 containerd[1614]: time="2025-10-13T05:26:28.188582745Z" level=info msg="CreateContainer within sandbox \"0ec1f6c7de5327b9021380b84d003a47a78d6be798d59e112870e0ca9d230134\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:26:28.197154 containerd[1614]: time="2025-10-13T05:26:28.197115857Z" level=info msg="Container 47974150ad7eae27bcf6ef8df0a488e4b677b5ac747b633fbc12e3c9ec22ad0c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:28.202682 containerd[1614]: time="2025-10-13T05:26:28.202625325Z" level=info msg="CreateContainer within sandbox \"96c7cdf8908fb1cd68e7cc04d1e80210c07c0cef49537f9ebc5348dc2b6477ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:26:28.229161 kubelet[2427]: I1013 05:26:28.229129 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:28.229611 kubelet[2427]: E1013 05:26:28.229564 2427 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost" Oct 13 05:26:28.239727 containerd[1614]: 
time="2025-10-13T05:26:28.239677616Z" level=info msg="Container dbabf9099998f44accbc3be24b5614bc0890d40dbd61085b7847ba8b5b52608e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:28.252860 containerd[1614]: time="2025-10-13T05:26:28.252787730Z" level=info msg="CreateContainer within sandbox \"0ec1f6c7de5327b9021380b84d003a47a78d6be798d59e112870e0ca9d230134\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dbabf9099998f44accbc3be24b5614bc0890d40dbd61085b7847ba8b5b52608e\"" Oct 13 05:26:28.253681 containerd[1614]: time="2025-10-13T05:26:28.253639652Z" level=info msg="StartContainer for \"dbabf9099998f44accbc3be24b5614bc0890d40dbd61085b7847ba8b5b52608e\"" Oct 13 05:26:28.254226 containerd[1614]: time="2025-10-13T05:26:28.254173062Z" level=info msg="CreateContainer within sandbox \"ba605503c5e37dfa71f874e0bf2cdfe442d8222b5d4a02cda522c4b93b28a4aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"47974150ad7eae27bcf6ef8df0a488e4b677b5ac747b633fbc12e3c9ec22ad0c\"" Oct 13 05:26:28.254737 containerd[1614]: time="2025-10-13T05:26:28.254694109Z" level=info msg="StartContainer for \"47974150ad7eae27bcf6ef8df0a488e4b677b5ac747b633fbc12e3c9ec22ad0c\"" Oct 13 05:26:28.255857 containerd[1614]: time="2025-10-13T05:26:28.255829568Z" level=info msg="connecting to shim dbabf9099998f44accbc3be24b5614bc0890d40dbd61085b7847ba8b5b52608e" address="unix:///run/containerd/s/50ea442c043ff4634d84b7d94b7ee071516044dfacc6c1344547a43ccd7a50d0" protocol=ttrpc version=3 Oct 13 05:26:28.256070 containerd[1614]: time="2025-10-13T05:26:28.256032512Z" level=info msg="connecting to shim 47974150ad7eae27bcf6ef8df0a488e4b677b5ac747b633fbc12e3c9ec22ad0c" address="unix:///run/containerd/s/e8a4f7feff7de17c19bd2e86c76a1ee37a3652e40a200e4717eab66c27279ece" protocol=ttrpc version=3 Oct 13 05:26:28.259240 containerd[1614]: time="2025-10-13T05:26:28.259200227Z" level=info msg="Container 0db2f75e6b16df41ffe316f031039c3f7a7db97e8291262c6b16cf2a25db61f0: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:28.269530 containerd[1614]: time="2025-10-13T05:26:28.269478342Z" level=info msg="CreateContainer within sandbox \"96c7cdf8908fb1cd68e7cc04d1e80210c07c0cef49537f9ebc5348dc2b6477ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0db2f75e6b16df41ffe316f031039c3f7a7db97e8291262c6b16cf2a25db61f0\"" Oct 13 05:26:28.273946 containerd[1614]: time="2025-10-13T05:26:28.272351299Z" level=info msg="StartContainer for \"0db2f75e6b16df41ffe316f031039c3f7a7db97e8291262c6b16cf2a25db61f0\"" Oct 13 05:26:28.273946 containerd[1614]: time="2025-10-13T05:26:28.273543937Z" level=info msg="connecting to shim 0db2f75e6b16df41ffe316f031039c3f7a7db97e8291262c6b16cf2a25db61f0" address="unix:///run/containerd/s/121d2ecf3a2445be299f8de2fc7c0a4e3009e8a73b983f9d9d4757260af4300a" protocol=ttrpc version=3 Oct 13 05:26:28.277212 systemd[1]: Started cri-containerd-47974150ad7eae27bcf6ef8df0a488e4b677b5ac747b633fbc12e3c9ec22ad0c.scope - libcontainer container 47974150ad7eae27bcf6ef8df0a488e4b677b5ac747b633fbc12e3c9ec22ad0c. Oct 13 05:26:28.282108 systemd[1]: Started cri-containerd-dbabf9099998f44accbc3be24b5614bc0890d40dbd61085b7847ba8b5b52608e.scope - libcontainer container dbabf9099998f44accbc3be24b5614bc0890d40dbd61085b7847ba8b5b52608e. Oct 13 05:26:28.309267 systemd[1]: Started cri-containerd-0db2f75e6b16df41ffe316f031039c3f7a7db97e8291262c6b16cf2a25db61f0.scope - libcontainer container 0db2f75e6b16df41ffe316f031039c3f7a7db97e8291262c6b16cf2a25db61f0. 
Oct 13 05:26:28.373965 containerd[1614]: time="2025-10-13T05:26:28.373888105Z" level=info msg="StartContainer for \"dbabf9099998f44accbc3be24b5614bc0890d40dbd61085b7847ba8b5b52608e\" returns successfully" Oct 13 05:26:28.377115 containerd[1614]: time="2025-10-13T05:26:28.377053837Z" level=info msg="StartContainer for \"47974150ad7eae27bcf6ef8df0a488e4b677b5ac747b633fbc12e3c9ec22ad0c\" returns successfully" Oct 13 05:26:28.416957 containerd[1614]: time="2025-10-13T05:26:28.416808985Z" level=info msg="StartContainer for \"0db2f75e6b16df41ffe316f031039c3f7a7db97e8291262c6b16cf2a25db61f0\" returns successfully" Oct 13 05:26:28.685729 kubelet[2427]: E1013 05:26:28.685689 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:28.685864 kubelet[2427]: E1013 05:26:28.685851 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:28.689206 kubelet[2427]: E1013 05:26:28.689164 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:28.689464 kubelet[2427]: E1013 05:26:28.689443 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:28.691883 kubelet[2427]: E1013 05:26:28.691849 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:28.692053 kubelet[2427]: E1013 05:26:28.692035 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:29.478889 update_engine[1588]: I20251013 05:26:29.477183 1588 update_attempter.cc:509] Updating boot flags... 
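The repeated dns.go warnings above mean the node's /etc/resolv.conf lists more nameservers than the resolver limit of three, so only the first three are applied (1.1.1.1, 1.0.0.1, 8.8.8.8 in the logged line) for pods that inherit the node's DNS settings. A standard-library sketch that performs the same count; the three-server limit matches the warning, the parsing here is illustrative:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Counts "nameserver" entries in resolv.conf and shows which ones would be
// kept under the three-server limit implied by the warning above.
func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > 3 {
		fmt.Printf("%d nameservers configured, only the first 3 are applied: %v\n",
			len(servers), servers[:3])
	} else {
		fmt.Println("nameservers:", servers)
	}
}
```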
Oct 13 05:26:29.697621 kubelet[2427]: E1013 05:26:29.697577 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:29.698044 kubelet[2427]: E1013 05:26:29.697768 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:29.699170 kubelet[2427]: E1013 05:26:29.699144 2427 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:29.699313 kubelet[2427]: E1013 05:26:29.699294 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:29.855169 kubelet[2427]: I1013 05:26:29.855089 2427 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:30.133364 kubelet[2427]: E1013 05:26:30.133199 2427 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:26:30.312269 kubelet[2427]: I1013 05:26:30.312227 2427 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:26:30.312269 kubelet[2427]: E1013 05:26:30.312261 2427 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 05:26:30.312476 kubelet[2427]: E1013 05:26:30.312054 2427 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186df5b4e4086ecf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:26:26.640408271 +0000 UTC m=+1.442800119,LastTimestamp:2025-10-13 05:26:26.640408271 +0000 UTC m=+1.442800119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:26:30.346673 kubelet[2427]: I1013 05:26:30.346612 2427 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:30.351681 kubelet[2427]: E1013 05:26:30.351621 2427 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:30.351681 kubelet[2427]: I1013 05:26:30.351673 2427 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:30.353597 kubelet[2427]: E1013 05:26:30.353571 2427 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:30.353671 kubelet[2427]: I1013 05:26:30.353602 2427 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:30.354760 kubelet[2427]: E1013 05:26:30.354734 2427 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:30.635186 kubelet[2427]: I1013 05:26:30.635115 2427 apiserver.go:52] "Watching apiserver" Oct 13 05:26:30.646564 kubelet[2427]: I1013 05:26:30.646526 2427 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:26:30.697766 kubelet[2427]: I1013 05:26:30.697700 2427 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:30.700531 kubelet[2427]: E1013 05:26:30.700480 2427 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:30.700732 kubelet[2427]: E1013 05:26:30.700705 2427 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:32.716758 systemd[1]: Reload requested from client PID 2730 ('systemctl') (unit session-9.scope)... Oct 13 05:26:32.716774 systemd[1]: Reloading... Oct 13 05:26:32.816971 zram_generator::config[2777]: No configuration found. Oct 13 05:26:33.080760 systemd[1]: Reloading finished in 363 ms. Oct 13 05:26:33.115733 kubelet[2427]: I1013 05:26:33.115659 2427 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:26:33.115904 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:33.140102 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:26:33.140415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:33.140474 systemd[1]: kubelet.service: Consumed 1.605s CPU time, 128.9M memory peak. Oct 13 05:26:33.142601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:33.430877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:33.447231 (kubelet)[2819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:26:33.486530 kubelet[2819]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:26:33.486530 kubelet[2819]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:26:33.486953 kubelet[2819]: I1013 05:26:33.486570 2819 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:26:33.494942 kubelet[2819]: I1013 05:26:33.494035 2819 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:26:33.494942 kubelet[2819]: I1013 05:26:33.494069 2819 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:26:33.494942 kubelet[2819]: I1013 05:26:33.494109 2819 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:26:33.494942 kubelet[2819]: I1013 05:26:33.494124 2819 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 13 05:26:33.494942 kubelet[2819]: I1013 05:26:33.494747 2819 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:26:33.496790 kubelet[2819]: I1013 05:26:33.496750 2819 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:26:33.499028 kubelet[2819]: I1013 05:26:33.498883 2819 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:26:33.502870 kubelet[2819]: I1013 05:26:33.502836 2819 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:26:33.507618 kubelet[2819]: I1013 05:26:33.507591 2819 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 13 05:26:33.507898 kubelet[2819]: I1013 05:26:33.507861 2819 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:26:33.508072 kubelet[2819]: I1013 05:26:33.507897 2819 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:26:33.508184 kubelet[2819]: I1013 05:26:33.508075 2819 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:26:33.508184 kubelet[2819]: I1013 05:26:33.508083 2819 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:26:33.508184 kubelet[2819]: I1013 05:26:33.508113 2819 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:26:33.508894 kubelet[2819]: I1013 05:26:33.508865 2819 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:26:33.509139 kubelet[2819]: I1013 05:26:33.509117 2819 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:26:33.509190 kubelet[2819]: I1013 05:26:33.509152 2819 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:26:33.509190 kubelet[2819]: I1013 05:26:33.509188 2819 kubelet.go:387] "Adding apiserver pod source" Oct 
13 05:26:33.509263 kubelet[2819]: I1013 05:26:33.509221 2819 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:26:33.510582 kubelet[2819]: I1013 05:26:33.510553 2819 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:26:33.511296 kubelet[2819]: I1013 05:26:33.511270 2819 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:26:33.511383 kubelet[2819]: I1013 05:26:33.511307 2819 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:26:33.515944 kubelet[2819]: I1013 05:26:33.515167 2819 server.go:1262] "Started kubelet" Oct 13 05:26:33.516044 kubelet[2819]: I1013 05:26:33.515989 2819 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:26:33.516101 kubelet[2819]: I1013 05:26:33.516076 2819 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:26:33.516436 kubelet[2819]: I1013 05:26:33.516407 2819 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:26:33.516484 kubelet[2819]: I1013 05:26:33.516471 2819 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:26:33.516731 kubelet[2819]: I1013 05:26:33.516704 2819 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:26:33.522169 kubelet[2819]: I1013 05:26:33.522137 2819 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:26:33.522667 kubelet[2819]: I1013 05:26:33.522647 2819 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:26:33.522847 kubelet[2819]: E1013 05:26:33.522817 2819 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:26:33.522892 kubelet[2819]: I1013 05:26:33.522865 2819 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:26:33.523021 kubelet[2819]: I1013 05:26:33.523000 2819 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:26:33.530663 kubelet[2819]: I1013 05:26:33.530506 2819 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:26:34.354603 kubelet[2819]: I1013 05:26:34.354508 2819 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:26:34.359823 kubelet[2819]: I1013 05:26:34.358046 2819 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:26:34.362590 kubelet[2819]: E1013 05:26:34.362413 2819 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:26:34.364110 kubelet[2819]: I1013 05:26:34.363686 2819 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:26:34.367278 kubelet[2819]: I1013 05:26:34.364599 2819 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:26:34.369985 kubelet[2819]: I1013 05:26:34.369597 2819 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:26:34.369985 kubelet[2819]: I1013 05:26:34.369630 2819 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:26:34.369985 kubelet[2819]: I1013 05:26:34.369665 2819 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:26:34.369985 kubelet[2819]: E1013 05:26:34.369713 2819 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:26:34.440972 kubelet[2819]: I1013 05:26:34.440659 2819 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:26:34.440972 kubelet[2819]: I1013 05:26:34.440745 2819 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:26:34.440972 kubelet[2819]: I1013 05:26:34.440769 2819 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:26:34.441208 kubelet[2819]: I1013 05:26:34.441001 2819 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:26:34.441208 kubelet[2819]: I1013 05:26:34.441014 2819 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:26:34.441208 kubelet[2819]: I1013 05:26:34.441034 2819 policy_none.go:49] "None policy: Start" Oct 13 05:26:34.441208 kubelet[2819]: I1013 05:26:34.441044 2819 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:26:34.441208 kubelet[2819]: I1013 05:26:34.441079 2819 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:26:34.441609 kubelet[2819]: I1013 05:26:34.441588 2819 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 13 05:26:34.441609 kubelet[2819]: I1013 05:26:34.441609 2819 policy_none.go:47] "Start" Oct 13 05:26:34.448203 kubelet[2819]: E1013 05:26:34.448168 2819 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:26:34.448415 kubelet[2819]: I1013 05:26:34.448394 2819 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:26:34.448415 kubelet[2819]: I1013 05:26:34.448411 2819 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:26:34.449750 kubelet[2819]: I1013 05:26:34.449431 2819 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:26:34.450413 kubelet[2819]: E1013 05:26:34.450377 2819 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 05:26:34.471732 kubelet[2819]: I1013 05:26:34.471694 2819 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:34.471887 kubelet[2819]: I1013 05:26:34.471878 2819 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:34.472190 kubelet[2819]: I1013 05:26:34.472175 2819 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:34.510723 kubelet[2819]: I1013 05:26:34.510672 2819 apiserver.go:52] "Watching apiserver" Oct 13 05:26:34.523449 kubelet[2819]: I1013 05:26:34.523407 2819 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:26:34.526801 kubelet[2819]: I1013 05:26:34.526556 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:34.526801 kubelet[2819]: I1013 05:26:34.526606 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:34.526801 kubelet[2819]: I1013 05:26:34.526632 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:34.526801 kubelet[2819]: I1013 05:26:34.526672 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:34.526801 kubelet[2819]: I1013 05:26:34.526689 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e07c9a29824292844959c9351e9aa855-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07c9a29824292844959c9351e9aa855\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:34.527047 kubelet[2819]: I1013 05:26:34.526708 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e07c9a29824292844959c9351e9aa855-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07c9a29824292844959c9351e9aa855\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:34.527047 kubelet[2819]: I1013 05:26:34.526731 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e07c9a29824292844959c9351e9aa855-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"e07c9a29824292844959c9351e9aa855\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:34.527047 kubelet[2819]: I1013 05:26:34.526751 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:34.527047 kubelet[2819]: I1013 05:26:34.526764 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:34.561058 kubelet[2819]: I1013 05:26:34.561024 2819 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:34.569172 kubelet[2819]: I1013 05:26:34.569141 2819 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:26:34.569316 kubelet[2819]: I1013 05:26:34.569228 2819 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:26:34.783057 kubelet[2819]: E1013 05:26:34.782492 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:34.784150 kubelet[2819]: E1013 05:26:34.783660 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:34.786088 kubelet[2819]: E1013 05:26:34.786058 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:34.816834 kubelet[2819]: I1013 05:26:34.816551 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.816520855 podStartE2EDuration="816.520855ms" podCreationTimestamp="2025-10-13 05:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:26:34.808158192 +0000 UTC m=+1.356750115" watchObservedRunningTime="2025-10-13 05:26:34.816520855 +0000 UTC m=+1.365112769" Oct 13 05:26:34.826100 kubelet[2819]: I1013 05:26:34.825884 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.825862016 podStartE2EDuration="825.862016ms" podCreationTimestamp="2025-10-13 05:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:26:34.817402069 +0000 UTC m=+1.365993982" watchObservedRunningTime="2025-10-13 05:26:34.825862016 +0000 UTC m=+1.374453929" Oct 13 05:26:34.838507 kubelet[2819]: I1013 05:26:34.838408 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.838388188 podStartE2EDuration="838.388188ms" podCreationTimestamp="2025-10-13 05:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-13 05:26:34.826096949 +0000 UTC m=+1.374688872" watchObservedRunningTime="2025-10-13 05:26:34.838388188 +0000 UTC m=+1.386980101" Oct 13 05:26:35.398992 kubelet[2819]: E1013 05:26:35.397827 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:35.398992 kubelet[2819]: E1013 05:26:35.397906 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:35.398992 kubelet[2819]: E1013 05:26:35.397994 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:36.399569 kubelet[2819]: E1013 05:26:36.399524 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:36.400070 kubelet[2819]: E1013 05:26:36.399756 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:39.192749 kubelet[2819]: I1013 05:26:39.192692 2819 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:26:39.193312 containerd[1614]: time="2025-10-13T05:26:39.193234361Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:26:39.193631 kubelet[2819]: I1013 05:26:39.193469 2819 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:26:40.853786 kubelet[2819]: E1013 05:26:40.853587 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:40.860064 systemd[1]: Created slice kubepods-besteffort-pod155e92f1_2816_4511_ae03_e520c64de7c6.slice - libcontainer container kubepods-besteffort-pod155e92f1_2816_4511_ae03_e520c64de7c6.slice. 
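The recurring "Nameserver limits exceeded" entries above come from the kubelet's DNS configurer trimming the nameserver list to the resolver limit of three: the host resolv.conf carries more than three servers, and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. A minimal Go sketch of that kind of check follows, assuming a glibc-style cap of three; the constant, file path and output wording are illustrative, not kubelet's actual code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the assumed glibc-style resolver cap the warning refers to.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf") // illustrative path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	// Collect every "nameserver" line, then report how many would be dropped.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded; applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}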
Oct 13 05:26:40.863480 kubelet[2819]: I1013 05:26:40.863453 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/155e92f1-2816-4511-ae03-e520c64de7c6-xtables-lock\") pod \"kube-proxy-kbghf\" (UID: \"155e92f1-2816-4511-ae03-e520c64de7c6\") " pod="kube-system/kube-proxy-kbghf" Oct 13 05:26:40.863568 kubelet[2819]: I1013 05:26:40.863488 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/155e92f1-2816-4511-ae03-e520c64de7c6-kube-proxy\") pod \"kube-proxy-kbghf\" (UID: \"155e92f1-2816-4511-ae03-e520c64de7c6\") " pod="kube-system/kube-proxy-kbghf" Oct 13 05:26:40.863568 kubelet[2819]: I1013 05:26:40.863510 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/155e92f1-2816-4511-ae03-e520c64de7c6-lib-modules\") pod \"kube-proxy-kbghf\" (UID: \"155e92f1-2816-4511-ae03-e520c64de7c6\") " pod="kube-system/kube-proxy-kbghf" Oct 13 05:26:40.863568 kubelet[2819]: I1013 05:26:40.863523 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62j64\" (UniqueName: \"kubernetes.io/projected/155e92f1-2816-4511-ae03-e520c64de7c6-kube-api-access-62j64\") pod \"kube-proxy-kbghf\" (UID: \"155e92f1-2816-4511-ae03-e520c64de7c6\") " pod="kube-system/kube-proxy-kbghf" Oct 13 05:26:41.209039 systemd[1]: Created slice kubepods-besteffort-pod5ddb62cb_99da_4d56_9bcf_7ab1a3b9d653.slice - libcontainer container kubepods-besteffort-pod5ddb62cb_99da_4d56_9bcf_7ab1a3b9d653.slice. Oct 13 05:26:41.265324 kubelet[2819]: I1013 05:26:41.265252 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ddb62cb-99da-4d56-9bcf-7ab1a3b9d653-var-lib-calico\") pod \"tigera-operator-db78d5bd4-rvffp\" (UID: \"5ddb62cb-99da-4d56-9bcf-7ab1a3b9d653\") " pod="tigera-operator/tigera-operator-db78d5bd4-rvffp" Oct 13 05:26:41.265324 kubelet[2819]: I1013 05:26:41.265324 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tgkf\" (UniqueName: \"kubernetes.io/projected/5ddb62cb-99da-4d56-9bcf-7ab1a3b9d653-kube-api-access-8tgkf\") pod \"tigera-operator-db78d5bd4-rvffp\" (UID: \"5ddb62cb-99da-4d56-9bcf-7ab1a3b9d653\") " pod="tigera-operator/tigera-operator-db78d5bd4-rvffp" Oct 13 05:26:41.408515 kubelet[2819]: E1013 05:26:41.408453 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:41.574017 kubelet[2819]: E1013 05:26:41.573878 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:41.718509 kubelet[2819]: E1013 05:26:41.718462 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:41.719114 containerd[1614]: time="2025-10-13T05:26:41.719076697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbghf,Uid:155e92f1-2816-4511-ae03-e520c64de7c6,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:42.020668 
containerd[1614]: time="2025-10-13T05:26:42.020601996Z" level=info msg="connecting to shim 0333fccabf3dc75170fb5b84e12b111e708d617af3c687b8b1a3dc590cac71e3" address="unix:///run/containerd/s/45ed99e676e4a3aa32e9b8c88dfad86f73f4ce451670060d1a3c0184b2e1529f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:42.054097 systemd[1]: Started cri-containerd-0333fccabf3dc75170fb5b84e12b111e708d617af3c687b8b1a3dc590cac71e3.scope - libcontainer container 0333fccabf3dc75170fb5b84e12b111e708d617af3c687b8b1a3dc590cac71e3. Oct 13 05:26:42.325606 containerd[1614]: time="2025-10-13T05:26:42.324888858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbghf,Uid:155e92f1-2816-4511-ae03-e520c64de7c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0333fccabf3dc75170fb5b84e12b111e708d617af3c687b8b1a3dc590cac71e3\"" Oct 13 05:26:42.326132 kubelet[2819]: E1013 05:26:42.326092 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:42.328819 containerd[1614]: time="2025-10-13T05:26:42.328735230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-rvffp,Uid:5ddb62cb-99da-4d56-9bcf-7ab1a3b9d653,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:26:42.338731 containerd[1614]: time="2025-10-13T05:26:42.338653905Z" level=info msg="CreateContainer within sandbox \"0333fccabf3dc75170fb5b84e12b111e708d617af3c687b8b1a3dc590cac71e3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:26:42.371959 containerd[1614]: time="2025-10-13T05:26:42.371848254Z" level=info msg="Container fa4233a7087b1bb6fd344d60be1c4be366bfa5465e2f65b2a935f1c9a7c783fa: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:42.373515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2700113754.mount: Deactivated successfully. Oct 13 05:26:42.373863 containerd[1614]: time="2025-10-13T05:26:42.373645138Z" level=info msg="connecting to shim ab722b0a2b85aed8d790041e3964baa4408f8ebdadfd635ae1eb8617dfc58a83" address="unix:///run/containerd/s/7bee44147cae5bb27a117c2e9296f646cde9701816a839abf409970e2447a416" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:42.394562 containerd[1614]: time="2025-10-13T05:26:42.394510113Z" level=info msg="CreateContainer within sandbox \"0333fccabf3dc75170fb5b84e12b111e708d617af3c687b8b1a3dc590cac71e3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fa4233a7087b1bb6fd344d60be1c4be366bfa5465e2f65b2a935f1c9a7c783fa\"" Oct 13 05:26:42.396965 containerd[1614]: time="2025-10-13T05:26:42.395875794Z" level=info msg="StartContainer for \"fa4233a7087b1bb6fd344d60be1c4be366bfa5465e2f65b2a935f1c9a7c783fa\"" Oct 13 05:26:42.397837 containerd[1614]: time="2025-10-13T05:26:42.397781462Z" level=info msg="connecting to shim fa4233a7087b1bb6fd344d60be1c4be366bfa5465e2f65b2a935f1c9a7c783fa" address="unix:///run/containerd/s/45ed99e676e4a3aa32e9b8c88dfad86f73f4ce451670060d1a3c0184b2e1529f" protocol=ttrpc version=3 Oct 13 05:26:42.412315 systemd[1]: Started cri-containerd-ab722b0a2b85aed8d790041e3964baa4408f8ebdadfd635ae1eb8617dfc58a83.scope - libcontainer container ab722b0a2b85aed8d790041e3964baa4408f8ebdadfd635ae1eb8617dfc58a83. 
Oct 13 05:26:42.414845 kubelet[2819]: E1013 05:26:42.414810 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:42.420222 systemd[1]: Started cri-containerd-fa4233a7087b1bb6fd344d60be1c4be366bfa5465e2f65b2a935f1c9a7c783fa.scope - libcontainer container fa4233a7087b1bb6fd344d60be1c4be366bfa5465e2f65b2a935f1c9a7c783fa. Oct 13 05:26:42.516491 containerd[1614]: time="2025-10-13T05:26:42.516433965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-rvffp,Uid:5ddb62cb-99da-4d56-9bcf-7ab1a3b9d653,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ab722b0a2b85aed8d790041e3964baa4408f8ebdadfd635ae1eb8617dfc58a83\"" Oct 13 05:26:42.517836 containerd[1614]: time="2025-10-13T05:26:42.517814454Z" level=info msg="StartContainer for \"fa4233a7087b1bb6fd344d60be1c4be366bfa5465e2f65b2a935f1c9a7c783fa\" returns successfully" Oct 13 05:26:42.519906 containerd[1614]: time="2025-10-13T05:26:42.519877519Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:26:43.416725 kubelet[2819]: E1013 05:26:43.416649 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:43.479352 kubelet[2819]: I1013 05:26:43.479237 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kbghf" podStartSLOduration=3.479199111 podStartE2EDuration="3.479199111s" podCreationTimestamp="2025-10-13 05:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:26:43.479132836 +0000 UTC m=+10.027724749" watchObservedRunningTime="2025-10-13 05:26:43.479199111 +0000 UTC m=+10.027791024" Oct 13 05:26:44.249902 kubelet[2819]: E1013 05:26:44.249846 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:44.419866 kubelet[2819]: E1013 05:26:44.419822 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:44.420309 kubelet[2819]: E1013 05:26:44.419983 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:45.660665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4145774305.mount: Deactivated successfully. 
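The "Observed pod startup duration" entries report two figures: podStartE2EDuration is the observed running time minus the pod creation timestamp, and podStartSLOduration additionally excludes the image-pull window (firstStartedPulling to lastFinishedPulling). For kube-proxy-kbghf above the pull timestamps are the zero value, so both figures are 3.479s; for the tigera-operator pod later in this log the ~5.87s pull is excluded, leaving ~3.57s of the ~9.45s end-to-end time. The arithmetic below is inferred from the logged values rather than taken from kubelet source.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kube-proxy-kbghf startup-duration entry above.
	created, _ := time.Parse(time.RFC3339, "2025-10-13T05:26:40Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-10-13T05:26:43.479199111Z")

	// firstStartedPulling/lastFinishedPulling are the zero time for this pod,
	// so no pull window is subtracted from the SLO figure.
	var pullWindow time.Duration

	e2e := running.Sub(created) // podStartE2EDuration: 3.479199111s
	slo := e2e - pullWindow     // podStartSLOduration: identical when nothing was pulled
	fmt.Println(e2e, slo)
}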
Oct 13 05:26:48.244032 containerd[1614]: time="2025-10-13T05:26:48.243962781Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:48.305753 containerd[1614]: time="2025-10-13T05:26:48.305687233Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:26:48.340815 containerd[1614]: time="2025-10-13T05:26:48.340775814Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:48.378864 containerd[1614]: time="2025-10-13T05:26:48.378803100Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:48.379392 containerd[1614]: time="2025-10-13T05:26:48.379346984Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 5.859437234s" Oct 13 05:26:48.379392 containerd[1614]: time="2025-10-13T05:26:48.379381008Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:26:48.437783 containerd[1614]: time="2025-10-13T05:26:48.437728199Z" level=info msg="CreateContainer within sandbox \"ab722b0a2b85aed8d790041e3964baa4408f8ebdadfd635ae1eb8617dfc58a83\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:26:49.041467 containerd[1614]: time="2025-10-13T05:26:49.041380593Z" level=info msg="Container 2fd77f10a8bcf275afa9721df9124c56a23966cd103329b336222d769ee64cb7: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:49.107357 containerd[1614]: time="2025-10-13T05:26:49.107284769Z" level=info msg="CreateContainer within sandbox \"ab722b0a2b85aed8d790041e3964baa4408f8ebdadfd635ae1eb8617dfc58a83\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2fd77f10a8bcf275afa9721df9124c56a23966cd103329b336222d769ee64cb7\"" Oct 13 05:26:49.108220 containerd[1614]: time="2025-10-13T05:26:49.108107946Z" level=info msg="StartContainer for \"2fd77f10a8bcf275afa9721df9124c56a23966cd103329b336222d769ee64cb7\"" Oct 13 05:26:49.109541 containerd[1614]: time="2025-10-13T05:26:49.109463605Z" level=info msg="connecting to shim 2fd77f10a8bcf275afa9721df9124c56a23966cd103329b336222d769ee64cb7" address="unix:///run/containerd/s/7bee44147cae5bb27a117c2e9296f646cde9701816a839abf409970e2447a416" protocol=ttrpc version=3 Oct 13 05:26:49.179136 systemd[1]: Started cri-containerd-2fd77f10a8bcf275afa9721df9124c56a23966cd103329b336222d769ee64cb7.scope - libcontainer container 2fd77f10a8bcf275afa9721df9124c56a23966cd103329b336222d769ee64cb7. 
Oct 13 05:26:49.226194 containerd[1614]: time="2025-10-13T05:26:49.226096168Z" level=info msg="StartContainer for \"2fd77f10a8bcf275afa9721df9124c56a23966cd103329b336222d769ee64cb7\" returns successfully" Oct 13 05:26:49.445984 kubelet[2819]: I1013 05:26:49.445864 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-rvffp" podStartSLOduration=3.5734276339999997 podStartE2EDuration="9.445816566s" podCreationTimestamp="2025-10-13 05:26:40 +0000 UTC" firstStartedPulling="2025-10-13 05:26:42.518887535 +0000 UTC m=+9.067479439" lastFinishedPulling="2025-10-13 05:26:48.391276458 +0000 UTC m=+14.939868371" observedRunningTime="2025-10-13 05:26:49.445491674 +0000 UTC m=+15.994083617" watchObservedRunningTime="2025-10-13 05:26:49.445816566 +0000 UTC m=+15.994408509" Oct 13 05:26:57.835712 sudo[1843]: pam_unix(sudo:session): session closed for user root Oct 13 05:26:57.844960 sshd[1842]: Connection closed by 10.0.0.1 port 47724 Oct 13 05:26:57.845020 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Oct 13 05:26:57.852011 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:47724.service: Deactivated successfully. Oct 13 05:26:57.854595 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:26:57.854910 systemd[1]: session-9.scope: Consumed 7.866s CPU time, 227M memory peak. Oct 13 05:26:57.856459 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:26:57.858326 systemd-logind[1585]: Removed session 9. Oct 13 05:27:01.478694 systemd[1]: Created slice kubepods-besteffort-pod8c938a2b_5a3b_4a41_8422_8fb79b62ae04.slice - libcontainer container kubepods-besteffort-pod8c938a2b_5a3b_4a41_8422_8fb79b62ae04.slice. Oct 13 05:27:01.483908 kubelet[2819]: I1013 05:27:01.483846 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7th\" (UniqueName: \"kubernetes.io/projected/8c938a2b-5a3b-4a41-8422-8fb79b62ae04-kube-api-access-mg7th\") pod \"calico-typha-9fc4f485d-2vzqj\" (UID: \"8c938a2b-5a3b-4a41-8422-8fb79b62ae04\") " pod="calico-system/calico-typha-9fc4f485d-2vzqj" Oct 13 05:27:01.484299 kubelet[2819]: I1013 05:27:01.483947 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c938a2b-5a3b-4a41-8422-8fb79b62ae04-typha-certs\") pod \"calico-typha-9fc4f485d-2vzqj\" (UID: \"8c938a2b-5a3b-4a41-8422-8fb79b62ae04\") " pod="calico-system/calico-typha-9fc4f485d-2vzqj" Oct 13 05:27:01.484299 kubelet[2819]: I1013 05:27:01.483990 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c938a2b-5a3b-4a41-8422-8fb79b62ae04-tigera-ca-bundle\") pod \"calico-typha-9fc4f485d-2vzqj\" (UID: \"8c938a2b-5a3b-4a41-8422-8fb79b62ae04\") " pod="calico-system/calico-typha-9fc4f485d-2vzqj" Oct 13 05:27:01.786105 kubelet[2819]: E1013 05:27:01.785764 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:01.787069 containerd[1614]: time="2025-10-13T05:27:01.786963150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9fc4f485d-2vzqj,Uid:8c938a2b-5a3b-4a41-8422-8fb79b62ae04,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:01.852950 systemd[1]: Created slice 
kubepods-besteffort-pod1fcb5910_06f1_4b7b_8c5c_ef4f2987ee09.slice - libcontainer container kubepods-besteffort-pod1fcb5910_06f1_4b7b_8c5c_ef4f2987ee09.slice. Oct 13 05:27:01.854691 containerd[1614]: time="2025-10-13T05:27:01.854214922Z" level=info msg="connecting to shim 16e4c63a9df80c203620dd18706467ccb9db9277130addb6749710dd71bc023f" address="unix:///run/containerd/s/0452fa65471fa7a73a7ce5db5118cee49b10b0497172dbf9dcbde21f04d21f5a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:01.884079 systemd[1]: Started cri-containerd-16e4c63a9df80c203620dd18706467ccb9db9277130addb6749710dd71bc023f.scope - libcontainer container 16e4c63a9df80c203620dd18706467ccb9db9277130addb6749710dd71bc023f. Oct 13 05:27:01.886270 kubelet[2819]: I1013 05:27:01.886217 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-cni-bin-dir\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886355 kubelet[2819]: I1013 05:27:01.886277 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-xtables-lock\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886355 kubelet[2819]: I1013 05:27:01.886304 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-flexvol-driver-host\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886355 kubelet[2819]: I1013 05:27:01.886326 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-policysync\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886355 kubelet[2819]: I1013 05:27:01.886345 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-cni-net-dir\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886462 kubelet[2819]: I1013 05:27:01.886385 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-node-certs\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886462 kubelet[2819]: I1013 05:27:01.886413 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-tigera-ca-bundle\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886462 kubelet[2819]: I1013 05:27:01.886440 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-var-lib-calico\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886751 kubelet[2819]: I1013 05:27:01.886469 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmf9k\" (UniqueName: \"kubernetes.io/projected/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-kube-api-access-wmf9k\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886751 kubelet[2819]: I1013 05:27:01.886495 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-cni-log-dir\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886751 kubelet[2819]: I1013 05:27:01.886520 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-var-run-calico\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.886751 kubelet[2819]: I1013 05:27:01.886557 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09-lib-modules\") pod \"calico-node-22mlb\" (UID: \"1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09\") " pod="calico-system/calico-node-22mlb" Oct 13 05:27:01.936992 containerd[1614]: time="2025-10-13T05:27:01.936910509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9fc4f485d-2vzqj,Uid:8c938a2b-5a3b-4a41-8422-8fb79b62ae04,Namespace:calico-system,Attempt:0,} returns sandbox id \"16e4c63a9df80c203620dd18706467ccb9db9277130addb6749710dd71bc023f\"" Oct 13 05:27:01.937779 kubelet[2819]: E1013 05:27:01.937744 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:01.938614 containerd[1614]: time="2025-10-13T05:27:01.938565176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:27:01.992890 kubelet[2819]: E1013 05:27:01.992851 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:01.992890 kubelet[2819]: W1013 05:27:01.992881 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:01.993075 kubelet[2819]: E1013 05:27:01.992954 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:01.999619 kubelet[2819]: E1013 05:27:01.999577 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:01.999619 kubelet[2819]: W1013 05:27:01.999602 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:01.999619 kubelet[2819]: E1013 05:27:01.999627 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.145977 kubelet[2819]: E1013 05:27:02.145250 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:02.163007 containerd[1614]: time="2025-10-13T05:27:02.162944213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-22mlb,Uid:1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:02.186633 kubelet[2819]: E1013 05:27:02.186580 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.186633 kubelet[2819]: W1013 05:27:02.186605 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.186633 kubelet[2819]: E1013 05:27:02.186627 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.186951 kubelet[2819]: E1013 05:27:02.186900 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.186951 kubelet[2819]: W1013 05:27:02.186914 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.186951 kubelet[2819]: E1013 05:27:02.186945 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.187288 kubelet[2819]: E1013 05:27:02.187265 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.187288 kubelet[2819]: W1013 05:27:02.187276 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.187288 kubelet[2819]: E1013 05:27:02.187285 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.187591 kubelet[2819]: E1013 05:27:02.187570 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.187591 kubelet[2819]: W1013 05:27:02.187581 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.187591 kubelet[2819]: E1013 05:27:02.187590 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.187791 kubelet[2819]: E1013 05:27:02.187773 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.187791 kubelet[2819]: W1013 05:27:02.187785 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.187845 kubelet[2819]: E1013 05:27:02.187792 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.187979 kubelet[2819]: E1013 05:27:02.187966 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.187979 kubelet[2819]: W1013 05:27:02.187976 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.188031 kubelet[2819]: E1013 05:27:02.187984 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.188176 kubelet[2819]: E1013 05:27:02.188164 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.188176 kubelet[2819]: W1013 05:27:02.188173 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.188229 kubelet[2819]: E1013 05:27:02.188182 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.188416 kubelet[2819]: E1013 05:27:02.188403 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.188416 kubelet[2819]: W1013 05:27:02.188413 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.188464 kubelet[2819]: E1013 05:27:02.188420 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.188635 kubelet[2819]: E1013 05:27:02.188623 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.188635 kubelet[2819]: W1013 05:27:02.188632 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.188690 kubelet[2819]: E1013 05:27:02.188640 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.188799 kubelet[2819]: E1013 05:27:02.188786 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.188799 kubelet[2819]: W1013 05:27:02.188795 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.188850 kubelet[2819]: E1013 05:27:02.188802 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.188993 kubelet[2819]: E1013 05:27:02.188979 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.188993 kubelet[2819]: W1013 05:27:02.188990 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.189044 kubelet[2819]: E1013 05:27:02.188998 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.189163 kubelet[2819]: E1013 05:27:02.189151 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.189163 kubelet[2819]: W1013 05:27:02.189160 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.189207 kubelet[2819]: E1013 05:27:02.189168 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.189347 kubelet[2819]: E1013 05:27:02.189334 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.189347 kubelet[2819]: W1013 05:27:02.189343 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.189409 kubelet[2819]: E1013 05:27:02.189351 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.189516 kubelet[2819]: E1013 05:27:02.189503 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.189516 kubelet[2819]: W1013 05:27:02.189514 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.189559 kubelet[2819]: E1013 05:27:02.189522 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.189684 kubelet[2819]: E1013 05:27:02.189668 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.189684 kubelet[2819]: W1013 05:27:02.189676 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.189684 kubelet[2819]: E1013 05:27:02.189683 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.189845 kubelet[2819]: E1013 05:27:02.189832 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.189845 kubelet[2819]: W1013 05:27:02.189841 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.189890 kubelet[2819]: E1013 05:27:02.189848 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.190044 kubelet[2819]: E1013 05:27:02.190029 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.190044 kubelet[2819]: W1013 05:27:02.190038 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.190106 kubelet[2819]: E1013 05:27:02.190045 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.190201 kubelet[2819]: E1013 05:27:02.190189 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.190201 kubelet[2819]: W1013 05:27:02.190198 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.190243 kubelet[2819]: E1013 05:27:02.190205 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.190373 kubelet[2819]: E1013 05:27:02.190362 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.190373 kubelet[2819]: W1013 05:27:02.190371 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.190419 kubelet[2819]: E1013 05:27:02.190379 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.190550 kubelet[2819]: E1013 05:27:02.190528 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.190550 kubelet[2819]: W1013 05:27:02.190536 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.190550 kubelet[2819]: E1013 05:27:02.190543 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.190827 kubelet[2819]: E1013 05:27:02.190795 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.190827 kubelet[2819]: W1013 05:27:02.190806 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.190827 kubelet[2819]: E1013 05:27:02.190814 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.191026 kubelet[2819]: I1013 05:27:02.190837 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/762f9e88-d9dd-4f94-bef9-b3a498513c70-registration-dir\") pod \"csi-node-driver-gggtg\" (UID: \"762f9e88-d9dd-4f94-bef9-b3a498513c70\") " pod="calico-system/csi-node-driver-gggtg" Oct 13 05:27:02.191073 kubelet[2819]: E1013 05:27:02.191059 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.191073 kubelet[2819]: W1013 05:27:02.191068 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.191115 kubelet[2819]: E1013 05:27:02.191076 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.191115 kubelet[2819]: I1013 05:27:02.191093 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/762f9e88-d9dd-4f94-bef9-b3a498513c70-varrun\") pod \"csi-node-driver-gggtg\" (UID: \"762f9e88-d9dd-4f94-bef9-b3a498513c70\") " pod="calico-system/csi-node-driver-gggtg" Oct 13 05:27:02.191460 kubelet[2819]: E1013 05:27:02.191425 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.191519 kubelet[2819]: W1013 05:27:02.191451 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.191519 kubelet[2819]: E1013 05:27:02.191494 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.191753 kubelet[2819]: E1013 05:27:02.191737 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.191753 kubelet[2819]: W1013 05:27:02.191748 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.191822 kubelet[2819]: E1013 05:27:02.191760 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.192044 kubelet[2819]: E1013 05:27:02.192027 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.192044 kubelet[2819]: W1013 05:27:02.192039 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.192095 kubelet[2819]: E1013 05:27:02.192049 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.192131 kubelet[2819]: I1013 05:27:02.192111 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2s9c\" (UniqueName: \"kubernetes.io/projected/762f9e88-d9dd-4f94-bef9-b3a498513c70-kube-api-access-q2s9c\") pod \"csi-node-driver-gggtg\" (UID: \"762f9e88-d9dd-4f94-bef9-b3a498513c70\") " pod="calico-system/csi-node-driver-gggtg" Oct 13 05:27:02.192273 kubelet[2819]: E1013 05:27:02.192249 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.192273 kubelet[2819]: W1013 05:27:02.192270 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.192323 kubelet[2819]: E1013 05:27:02.192279 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.192461 kubelet[2819]: E1013 05:27:02.192447 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.192461 kubelet[2819]: W1013 05:27:02.192458 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.192522 kubelet[2819]: E1013 05:27:02.192469 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.192690 kubelet[2819]: E1013 05:27:02.192675 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.192690 kubelet[2819]: W1013 05:27:02.192687 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.192752 kubelet[2819]: E1013 05:27:02.192697 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.192752 kubelet[2819]: I1013 05:27:02.192718 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/762f9e88-d9dd-4f94-bef9-b3a498513c70-kubelet-dir\") pod \"csi-node-driver-gggtg\" (UID: \"762f9e88-d9dd-4f94-bef9-b3a498513c70\") " pod="calico-system/csi-node-driver-gggtg" Oct 13 05:27:02.192982 kubelet[2819]: E1013 05:27:02.192961 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.192982 kubelet[2819]: W1013 05:27:02.192977 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.193058 kubelet[2819]: E1013 05:27:02.192987 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.193328 kubelet[2819]: E1013 05:27:02.193311 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.193328 kubelet[2819]: W1013 05:27:02.193324 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.193401 kubelet[2819]: E1013 05:27:02.193333 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.193655 kubelet[2819]: E1013 05:27:02.193634 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.193655 kubelet[2819]: W1013 05:27:02.193647 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.193655 kubelet[2819]: E1013 05:27:02.193658 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.193763 kubelet[2819]: I1013 05:27:02.193693 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/762f9e88-d9dd-4f94-bef9-b3a498513c70-socket-dir\") pod \"csi-node-driver-gggtg\" (UID: \"762f9e88-d9dd-4f94-bef9-b3a498513c70\") " pod="calico-system/csi-node-driver-gggtg" Oct 13 05:27:02.194089 kubelet[2819]: E1013 05:27:02.194067 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.194089 kubelet[2819]: W1013 05:27:02.194086 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.194169 kubelet[2819]: E1013 05:27:02.194099 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.194485 kubelet[2819]: E1013 05:27:02.194437 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.194485 kubelet[2819]: W1013 05:27:02.194450 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.194485 kubelet[2819]: E1013 05:27:02.194482 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.194751 kubelet[2819]: E1013 05:27:02.194731 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.194751 kubelet[2819]: W1013 05:27:02.194743 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.194751 kubelet[2819]: E1013 05:27:02.194751 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.195098 kubelet[2819]: E1013 05:27:02.195067 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.195098 kubelet[2819]: W1013 05:27:02.195092 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.195161 kubelet[2819]: E1013 05:27:02.195105 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.199406 containerd[1614]: time="2025-10-13T05:27:02.199349870Z" level=info msg="connecting to shim 1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85" address="unix:///run/containerd/s/b801a82be18108bd86f8f3ca93ae1e31d3b6ff01cc088da493c70d0fdf5cf901" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:02.252176 systemd[1]: Started cri-containerd-1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85.scope - libcontainer container 1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85. Oct 13 05:27:02.296198 kubelet[2819]: E1013 05:27:02.296139 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.296198 kubelet[2819]: W1013 05:27:02.296172 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.296505 kubelet[2819]: E1013 05:27:02.296205 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.296812 kubelet[2819]: E1013 05:27:02.296768 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.296812 kubelet[2819]: W1013 05:27:02.296795 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.296898 kubelet[2819]: E1013 05:27:02.296816 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.297331 kubelet[2819]: E1013 05:27:02.297306 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.297377 kubelet[2819]: W1013 05:27:02.297331 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.297377 kubelet[2819]: E1013 05:27:02.297356 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.297843 kubelet[2819]: E1013 05:27:02.297813 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.297843 kubelet[2819]: W1013 05:27:02.297835 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.297950 kubelet[2819]: E1013 05:27:02.297857 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.298536 kubelet[2819]: E1013 05:27:02.298512 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.298603 kubelet[2819]: W1013 05:27:02.298579 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.298628 kubelet[2819]: E1013 05:27:02.298609 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.299272 kubelet[2819]: E1013 05:27:02.299234 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.299333 kubelet[2819]: W1013 05:27:02.299270 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.299333 kubelet[2819]: E1013 05:27:02.299294 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.300062 kubelet[2819]: E1013 05:27:02.299992 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.300114 kubelet[2819]: W1013 05:27:02.300018 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.300114 kubelet[2819]: E1013 05:27:02.300089 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.300980 kubelet[2819]: E1013 05:27:02.300954 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.301048 kubelet[2819]: W1013 05:27:02.300978 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.301048 kubelet[2819]: E1013 05:27:02.301001 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.301574 kubelet[2819]: E1013 05:27:02.301536 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.301657 kubelet[2819]: W1013 05:27:02.301566 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.301657 kubelet[2819]: E1013 05:27:02.301631 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.302249 kubelet[2819]: E1013 05:27:02.302221 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.302339 kubelet[2819]: W1013 05:27:02.302295 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.302339 kubelet[2819]: E1013 05:27:02.302321 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.302929 kubelet[2819]: E1013 05:27:02.302883 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.302990 kubelet[2819]: W1013 05:27:02.302907 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.302990 kubelet[2819]: E1013 05:27:02.302957 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.303443 kubelet[2819]: E1013 05:27:02.303380 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.303443 kubelet[2819]: W1013 05:27:02.303403 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.303443 kubelet[2819]: E1013 05:27:02.303426 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.304111 kubelet[2819]: E1013 05:27:02.304086 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.304177 kubelet[2819]: W1013 05:27:02.304109 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.304177 kubelet[2819]: E1013 05:27:02.304131 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.304960 kubelet[2819]: E1013 05:27:02.304573 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.304960 kubelet[2819]: W1013 05:27:02.304599 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.304960 kubelet[2819]: E1013 05:27:02.304668 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.305339 kubelet[2819]: E1013 05:27:02.305306 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.305387 kubelet[2819]: W1013 05:27:02.305336 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.305387 kubelet[2819]: E1013 05:27:02.305358 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.306067 kubelet[2819]: E1013 05:27:02.305891 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.306067 kubelet[2819]: W1013 05:27:02.305962 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.306067 kubelet[2819]: E1013 05:27:02.305983 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.306393 kubelet[2819]: E1013 05:27:02.306366 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.306457 kubelet[2819]: W1013 05:27:02.306432 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.306483 kubelet[2819]: E1013 05:27:02.306463 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.307329 kubelet[2819]: E1013 05:27:02.307286 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.307421 kubelet[2819]: W1013 05:27:02.307328 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.307421 kubelet[2819]: E1013 05:27:02.307353 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.308518 kubelet[2819]: E1013 05:27:02.308486 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.308780 kubelet[2819]: W1013 05:27:02.308513 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.308845 kubelet[2819]: E1013 05:27:02.308783 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.309317 kubelet[2819]: E1013 05:27:02.309260 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.309371 kubelet[2819]: W1013 05:27:02.309346 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.309418 kubelet[2819]: E1013 05:27:02.309371 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.309505 containerd[1614]: time="2025-10-13T05:27:02.309471431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-22mlb,Uid:1fcb5910-06f1-4b7b-8c5c-ef4f2987ee09,Namespace:calico-system,Attempt:0,} returns sandbox id \"1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85\"" Oct 13 05:27:02.310635 kubelet[2819]: E1013 05:27:02.310602 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.310635 kubelet[2819]: W1013 05:27:02.310632 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.310710 kubelet[2819]: E1013 05:27:02.310655 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.311030 kubelet[2819]: E1013 05:27:02.310994 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.311030 kubelet[2819]: W1013 05:27:02.311024 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.311120 kubelet[2819]: E1013 05:27:02.311045 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:02.311437 kubelet[2819]: E1013 05:27:02.311400 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.311437 kubelet[2819]: W1013 05:27:02.311420 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.311518 kubelet[2819]: E1013 05:27:02.311491 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.312232 kubelet[2819]: E1013 05:27:02.312196 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.312462 kubelet[2819]: W1013 05:27:02.312392 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.312462 kubelet[2819]: E1013 05:27:02.312406 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.313436 kubelet[2819]: E1013 05:27:02.313389 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.313436 kubelet[2819]: W1013 05:27:02.313406 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.313436 kubelet[2819]: E1013 05:27:02.313417 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:02.328582 kubelet[2819]: E1013 05:27:02.328515 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:02.328582 kubelet[2819]: W1013 05:27:02.328556 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:02.328775 kubelet[2819]: E1013 05:27:02.328592 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:03.372114 kubelet[2819]: E1013 05:27:03.370883 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:05.370710 kubelet[2819]: E1013 05:27:05.370621 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:05.904289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146771330.mount: Deactivated successfully. Oct 13 05:27:07.370969 kubelet[2819]: E1013 05:27:07.370441 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:07.608760 containerd[1614]: time="2025-10-13T05:27:07.608683255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:07.610810 containerd[1614]: time="2025-10-13T05:27:07.610735087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:27:07.612339 containerd[1614]: time="2025-10-13T05:27:07.612261573Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:07.614552 containerd[1614]: time="2025-10-13T05:27:07.614499454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:07.615228 containerd[1614]: time="2025-10-13T05:27:07.615177287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.676555244s" Oct 13 05:27:07.615228 containerd[1614]: time="2025-10-13T05:27:07.615215278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:27:07.617048 containerd[1614]: time="2025-10-13T05:27:07.616982015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:27:07.631488 containerd[1614]: time="2025-10-13T05:27:07.631357033Z" level=info msg="CreateContainer within sandbox \"16e4c63a9df80c203620dd18706467ccb9db9277130addb6749710dd71bc023f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:27:07.645489 containerd[1614]: time="2025-10-13T05:27:07.645272759Z" level=info msg="Container 9dd369bdc63a72328065cac9aa8a6e32517c7012dbe4c8d95c04b5a99fd59379: CDI devices 
from CRI Config.CDIDevices: []" Oct 13 05:27:07.667659 containerd[1614]: time="2025-10-13T05:27:07.667590809Z" level=info msg="CreateContainer within sandbox \"16e4c63a9df80c203620dd18706467ccb9db9277130addb6749710dd71bc023f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9dd369bdc63a72328065cac9aa8a6e32517c7012dbe4c8d95c04b5a99fd59379\"" Oct 13 05:27:07.670941 containerd[1614]: time="2025-10-13T05:27:07.670495202Z" level=info msg="StartContainer for \"9dd369bdc63a72328065cac9aa8a6e32517c7012dbe4c8d95c04b5a99fd59379\"" Oct 13 05:27:07.672216 containerd[1614]: time="2025-10-13T05:27:07.672151752Z" level=info msg="connecting to shim 9dd369bdc63a72328065cac9aa8a6e32517c7012dbe4c8d95c04b5a99fd59379" address="unix:///run/containerd/s/0452fa65471fa7a73a7ce5db5118cee49b10b0497172dbf9dcbde21f04d21f5a" protocol=ttrpc version=3 Oct 13 05:27:07.702154 systemd[1]: Started cri-containerd-9dd369bdc63a72328065cac9aa8a6e32517c7012dbe4c8d95c04b5a99fd59379.scope - libcontainer container 9dd369bdc63a72328065cac9aa8a6e32517c7012dbe4c8d95c04b5a99fd59379. Oct 13 05:27:07.818290 containerd[1614]: time="2025-10-13T05:27:07.818238517Z" level=info msg="StartContainer for \"9dd369bdc63a72328065cac9aa8a6e32517c7012dbe4c8d95c04b5a99fd59379\" returns successfully" Oct 13 05:27:08.484675 kubelet[2819]: E1013 05:27:08.484635 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:08.532441 kubelet[2819]: E1013 05:27:08.532380 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.532441 kubelet[2819]: W1013 05:27:08.532414 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.532441 kubelet[2819]: E1013 05:27:08.532441 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.532786 kubelet[2819]: E1013 05:27:08.532727 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.532786 kubelet[2819]: W1013 05:27:08.532751 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.532786 kubelet[2819]: E1013 05:27:08.532765 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.532986 kubelet[2819]: E1013 05:27:08.532967 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.532986 kubelet[2819]: W1013 05:27:08.532983 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.533050 kubelet[2819]: E1013 05:27:08.532996 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:08.533252 kubelet[2819]: E1013 05:27:08.533225 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.533252 kubelet[2819]: W1013 05:27:08.533241 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.533252 kubelet[2819]: E1013 05:27:08.533252 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.533505 kubelet[2819]: E1013 05:27:08.533486 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.533505 kubelet[2819]: W1013 05:27:08.533500 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.533570 kubelet[2819]: E1013 05:27:08.533512 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.533716 kubelet[2819]: E1013 05:27:08.533691 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.533716 kubelet[2819]: W1013 05:27:08.533706 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.533780 kubelet[2819]: E1013 05:27:08.533719 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.533968 kubelet[2819]: E1013 05:27:08.533948 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.533968 kubelet[2819]: W1013 05:27:08.533964 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.534035 kubelet[2819]: E1013 05:27:08.533975 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.534177 kubelet[2819]: E1013 05:27:08.534158 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.534177 kubelet[2819]: W1013 05:27:08.534172 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.534233 kubelet[2819]: E1013 05:27:08.534184 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:08.534370 kubelet[2819]: E1013 05:27:08.534353 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.534370 kubelet[2819]: W1013 05:27:08.534366 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.534436 kubelet[2819]: E1013 05:27:08.534376 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.534580 kubelet[2819]: E1013 05:27:08.534563 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.534580 kubelet[2819]: W1013 05:27:08.534577 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.534639 kubelet[2819]: E1013 05:27:08.534589 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.534786 kubelet[2819]: E1013 05:27:08.534764 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.534786 kubelet[2819]: W1013 05:27:08.534779 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.534857 kubelet[2819]: E1013 05:27:08.534789 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.535009 kubelet[2819]: E1013 05:27:08.534990 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.535009 kubelet[2819]: W1013 05:27:08.535004 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.535082 kubelet[2819]: E1013 05:27:08.535015 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.535224 kubelet[2819]: E1013 05:27:08.535202 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.535224 kubelet[2819]: W1013 05:27:08.535216 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.535224 kubelet[2819]: E1013 05:27:08.535225 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:08.535437 kubelet[2819]: E1013 05:27:08.535415 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.535437 kubelet[2819]: W1013 05:27:08.535430 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.535437 kubelet[2819]: E1013 05:27:08.535440 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.535647 kubelet[2819]: E1013 05:27:08.535628 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.535647 kubelet[2819]: W1013 05:27:08.535642 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.535706 kubelet[2819]: E1013 05:27:08.535652 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.538971 kubelet[2819]: E1013 05:27:08.538936 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.538971 kubelet[2819]: W1013 05:27:08.538955 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.538971 kubelet[2819]: E1013 05:27:08.538967 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.539308 kubelet[2819]: E1013 05:27:08.539263 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.539308 kubelet[2819]: W1013 05:27:08.539298 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.539370 kubelet[2819]: E1013 05:27:08.539343 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.539624 kubelet[2819]: E1013 05:27:08.539597 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.539624 kubelet[2819]: W1013 05:27:08.539615 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.539624 kubelet[2819]: E1013 05:27:08.539629 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:08.539857 kubelet[2819]: E1013 05:27:08.539836 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.539857 kubelet[2819]: W1013 05:27:08.539850 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.539971 kubelet[2819]: E1013 05:27:08.539860 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.540069 kubelet[2819]: E1013 05:27:08.540047 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.540069 kubelet[2819]: W1013 05:27:08.540062 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.540157 kubelet[2819]: E1013 05:27:08.540071 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.540303 kubelet[2819]: E1013 05:27:08.540282 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.540303 kubelet[2819]: W1013 05:27:08.540295 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.540303 kubelet[2819]: E1013 05:27:08.540305 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.540598 kubelet[2819]: E1013 05:27:08.540575 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.540598 kubelet[2819]: W1013 05:27:08.540592 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.540678 kubelet[2819]: E1013 05:27:08.540604 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.540819 kubelet[2819]: E1013 05:27:08.540799 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.540819 kubelet[2819]: W1013 05:27:08.540814 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.540882 kubelet[2819]: E1013 05:27:08.540825 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:08.541044 kubelet[2819]: E1013 05:27:08.541022 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.541044 kubelet[2819]: W1013 05:27:08.541037 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.541148 kubelet[2819]: E1013 05:27:08.541047 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.541251 kubelet[2819]: E1013 05:27:08.541229 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.541251 kubelet[2819]: W1013 05:27:08.541243 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.541251 kubelet[2819]: E1013 05:27:08.541253 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.541496 kubelet[2819]: E1013 05:27:08.541474 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.541496 kubelet[2819]: W1013 05:27:08.541487 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.541565 kubelet[2819]: E1013 05:27:08.541499 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.541777 kubelet[2819]: E1013 05:27:08.541756 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.541777 kubelet[2819]: W1013 05:27:08.541772 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.541850 kubelet[2819]: E1013 05:27:08.541783 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.542021 kubelet[2819]: E1013 05:27:08.541999 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.542021 kubelet[2819]: W1013 05:27:08.542015 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.542082 kubelet[2819]: E1013 05:27:08.542028 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:08.542239 kubelet[2819]: E1013 05:27:08.542221 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.542239 kubelet[2819]: W1013 05:27:08.542235 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.542295 kubelet[2819]: E1013 05:27:08.542245 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.542454 kubelet[2819]: E1013 05:27:08.542433 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.542454 kubelet[2819]: W1013 05:27:08.542446 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.542454 kubelet[2819]: E1013 05:27:08.542457 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.542654 kubelet[2819]: E1013 05:27:08.542632 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.542654 kubelet[2819]: W1013 05:27:08.542646 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.542654 kubelet[2819]: E1013 05:27:08.542655 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.543011 kubelet[2819]: E1013 05:27:08.542989 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.543011 kubelet[2819]: W1013 05:27:08.543004 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.543084 kubelet[2819]: E1013 05:27:08.543016 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:08.543244 kubelet[2819]: E1013 05:27:08.543223 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:08.543244 kubelet[2819]: W1013 05:27:08.543237 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:08.543244 kubelet[2819]: E1013 05:27:08.543247 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:08.885581 kubelet[2819]: I1013 05:27:08.885481 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9fc4f485d-2vzqj" podStartSLOduration=2.207591636 podStartE2EDuration="7.885461368s" podCreationTimestamp="2025-10-13 05:27:01 +0000 UTC" firstStartedPulling="2025-10-13 05:27:01.938248672 +0000 UTC m=+28.486840585" lastFinishedPulling="2025-10-13 05:27:07.616118404 +0000 UTC m=+34.164710317" observedRunningTime="2025-10-13 05:27:08.88538794 +0000 UTC m=+35.433979874" watchObservedRunningTime="2025-10-13 05:27:08.885461368 +0000 UTC m=+35.434053291" Oct 13 05:27:09.370485 kubelet[2819]: E1013 05:27:09.370391 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:09.486819 kubelet[2819]: E1013 05:27:09.486785 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:09.542554 kubelet[2819]: E1013 05:27:09.542507 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:09.542554 kubelet[2819]: W1013 05:27:09.542533 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:09.542554 kubelet[2819]: E1013 05:27:09.542556 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:09.542835 kubelet[2819]: E1013 05:27:09.542816 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:09.542835 kubelet[2819]: W1013 05:27:09.542827 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:09.542835 kubelet[2819]: E1013 05:27:09.542836 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:09.543120 kubelet[2819]: E1013 05:27:09.543053 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:09.543120 kubelet[2819]: W1013 05:27:09.543061 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:09.543120 kubelet[2819]: E1013 05:27:09.543068 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
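A quick consistency check on the calico-typha startup-latency entry above, using only figures that appear in this log: the pod-startup SLO duration excludes image-pull time, so the difference between the end-to-end duration and the SLO duration should match the typha image-pull window.

    podStartE2EDuration - podStartSLOduration = 7.885461368 s - 2.207591636 s = 5.677869732 s
    lastFinishedPulling - firstStartedPulling = (m=+34.164710317) - (m=+28.486840585) = 5.677869732 s

Both figures agree with each other and, to within about a millisecond, with the 5.676555244 s that containerd reported at 05:27:07 for pulling ghcr.io/flatcar/calico/typha:v3.30.3.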
Error: unexpected end of JSON input" Oct 13 05:27:09.543304 kubelet[2819]: E1013 05:27:09.543283 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:09.543304 kubelet[2819]: W1013 05:27:09.543296 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:09.543304 kubelet[2819]: E1013 05:27:09.543306 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:09.543570 kubelet[2819]: E1013 05:27:09.543542 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:09.543570 kubelet[2819]: W1013 05:27:09.543554 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:09.543570 kubelet[2819]: E1013 05:27:09.543563 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:09.543755 kubelet[2819]: E1013 05:27:09.543737 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:09.543755 kubelet[2819]: W1013 05:27:09.543748 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:09.543755 kubelet[2819]: E1013 05:27:09.543755 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:09.544074 kubelet[2819]: E1013 05:27:09.544043 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:09.544074 kubelet[2819]: W1013 05:27:09.544053 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:09.544074 kubelet[2819]: E1013 05:27:09.544061 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:27:09.544630 kubelet[2819]: E1013 05:27:09.544356 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:09.544630 kubelet[2819]: W1013 05:27:09.544402 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:09.544630 kubelet[2819]: E1013 05:27:09.544440 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:09.544891 kubelet[2819]: E1013 05:27:09.544874 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:09.545001 kubelet[2819]: W1013 05:27:09.544984 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:09.545077 kubelet[2819]: E1013 05:27:09.545062 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three kubelet messages above (driver-call.go:262, driver-call.go:149, plugins.go:697) repeat with only their timestamps changing on every FlexVolume probe cycle from 05:27:09.544 through 05:27:11.668; the distinct entries from that interval follow.]
Oct 13 05:27:10.489222 kubelet[2819]: E1013 05:27:10.489180 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:11.370598 kubelet[2819]: E1013 05:27:11.370521 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:11.490412 kubelet[2819]: E1013 05:27:11.490373 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:11.668165 kubelet[2819]: E1013 05:27:11.668148 2819 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:27:11.668165 kubelet[2819]: W1013 05:27:11.668159 2819 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:27:11.668165 kubelet[2819]: E1013 05:27:11.668167 2819 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:27:12.597446 containerd[1614]: time="2025-10-13T05:27:12.597355319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:12.598203 containerd[1614]: time="2025-10-13T05:27:12.598151602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:27:12.599448 containerd[1614]: time="2025-10-13T05:27:12.599403052Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:12.601451 containerd[1614]: time="2025-10-13T05:27:12.601381626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:12.601968 containerd[1614]: time="2025-10-13T05:27:12.601906591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 4.984892225s" Oct 13 05:27:12.602049 containerd[1614]: time="2025-10-13T05:27:12.601966704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:27:12.606278 containerd[1614]: time="2025-10-13T05:27:12.606245385Z" level=info msg="CreateContainer within sandbox \"1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:27:12.615637 containerd[1614]: time="2025-10-13T05:27:12.615579567Z" level=info msg="Container d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:12.624578 containerd[1614]: time="2025-10-13T05:27:12.624522883Z" level=info msg="CreateContainer within sandbox \"1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9\"" Oct 13 05:27:12.625219 containerd[1614]: time="2025-10-13T05:27:12.625191338Z" level=info msg="StartContainer for \"d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9\"" Oct 13 05:27:12.626703 containerd[1614]: time="2025-10-13T05:27:12.626660435Z" level=info msg="connecting to shim d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9" address="unix:///run/containerd/s/b801a82be18108bd86f8f3ca93ae1e31d3b6ff01cc088da493c70d0fdf5cf901" protocol=ttrpc version=3 Oct 13 05:27:12.656142 systemd[1]: Started cri-containerd-d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9.scope - libcontainer container d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9. Oct 13 05:27:12.723482 systemd[1]: cri-containerd-d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9.scope: Deactivated successfully. 
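The repeated driver-call.go/plugins.go errors above are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before the flexvol-driver container (created from the pod2daemon-flexvol image pulled above) has installed that binary: the executable is missing, the call produces no output, and decoding "" as JSON fails. For orientation, here is a minimal sketch of the call convention the kubelet's FlexVolume driver-call expects, written as a hypothetical stand-in driver rather than Calico's actual uds binary: the driver is invoked with the operation as its first argument and must print a single JSON status object.

```go
// flexvol-stub: hypothetical stand-in for a FlexVolume driver binary.
// It only illustrates the JSON handshake the kubelet expects; it is not
// Calico's real nodeagent~uds/uds driver.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet decodes after every call;
// printing nothing at all is what produces "unexpected end of JSON input" above.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report that this driver has no attach/detach support, so the kubelet
		// will only call mount/unmount-style operations on it.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		reply(driverStatus{Status: "Not supported", Message: os.Args[1]})
	}
}
```

Once the flexvol-driver container places Calico's real uds driver into the nodeagent~uds plugin directory, the kubelet's next probe cycle gets valid JSON back and these messages stop.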
Oct 13 05:27:12.724366 systemd[1]: cri-containerd-d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9.scope: Consumed 47ms CPU time, 6.5M memory peak, 4.6M written to disk. Oct 13 05:27:12.726578 containerd[1614]: time="2025-10-13T05:27:12.726525928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9\" id:\"d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9\" pid:3614 exited_at:{seconds:1760333232 nanos:725797400}" Oct 13 05:27:12.826548 containerd[1614]: time="2025-10-13T05:27:12.826463976Z" level=info msg="received exit event container_id:\"d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9\" id:\"d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9\" pid:3614 exited_at:{seconds:1760333232 nanos:725797400}" Oct 13 05:27:12.828576 containerd[1614]: time="2025-10-13T05:27:12.828539252Z" level=info msg="StartContainer for \"d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9\" returns successfully" Oct 13 05:27:12.854325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d61c047100d4a839c9aced4e997cb98d775da6eed0b54a00c27adb44f9a3c4a9-rootfs.mount: Deactivated successfully. Oct 13 05:27:13.370798 kubelet[2819]: E1013 05:27:13.370717 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:15.370964 kubelet[2819]: E1013 05:27:15.370864 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:16.517760 containerd[1614]: time="2025-10-13T05:27:16.517711986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:27:17.370949 kubelet[2819]: E1013 05:27:17.370847 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:19.370850 kubelet[2819]: E1013 05:27:19.370771 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:21.370275 kubelet[2819]: E1013 05:27:21.370190 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:23.371094 kubelet[2819]: E1013 05:27:23.371033 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:24.861414 containerd[1614]: time="2025-10-13T05:27:24.861319442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:24.868209 containerd[1614]: time="2025-10-13T05:27:24.868131568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:27:24.870695 containerd[1614]: time="2025-10-13T05:27:24.870649693Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:24.873598 containerd[1614]: time="2025-10-13T05:27:24.873523373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:24.874289 containerd[1614]: time="2025-10-13T05:27:24.874243302Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 8.356493124s" Oct 13 05:27:24.874289 containerd[1614]: time="2025-10-13T05:27:24.874274972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:27:24.881140 containerd[1614]: time="2025-10-13T05:27:24.881078532Z" level=info msg="CreateContainer within sandbox \"1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:27:24.905162 containerd[1614]: time="2025-10-13T05:27:24.905101873Z" level=info msg="Container 9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:24.917803 containerd[1614]: time="2025-10-13T05:27:24.917722087Z" level=info msg="CreateContainer within sandbox \"1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec\"" Oct 13 05:27:24.918476 containerd[1614]: time="2025-10-13T05:27:24.918445713Z" level=info msg="StartContainer for \"9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec\"" Oct 13 05:27:24.920384 containerd[1614]: time="2025-10-13T05:27:24.920329164Z" level=info msg="connecting to shim 9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec" address="unix:///run/containerd/s/b801a82be18108bd86f8f3ca93ae1e31d3b6ff01cc088da493c70d0fdf5cf901" protocol=ttrpc version=3 Oct 13 05:27:24.947250 systemd[1]: Started cri-containerd-9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec.scope - libcontainer container 9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec. 
Oct 13 05:27:25.011459 containerd[1614]: time="2025-10-13T05:27:25.011379822Z" level=info msg="StartContainer for \"9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec\" returns successfully" Oct 13 05:27:25.370037 kubelet[2819]: E1013 05:27:25.369956 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:27.370617 kubelet[2819]: E1013 05:27:27.370557 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:29.090510 systemd[1]: cri-containerd-9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec.scope: Deactivated successfully. Oct 13 05:27:29.090933 systemd[1]: cri-containerd-9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec.scope: Consumed 621ms CPU time, 176.8M memory peak, 3M read from disk, 171.3M written to disk. Oct 13 05:27:29.092279 containerd[1614]: time="2025-10-13T05:27:29.091981970Z" level=info msg="received exit event container_id:\"9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec\" id:\"9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec\" pid:3676 exited_at:{seconds:1760333249 nanos:91169639}" Oct 13 05:27:29.092279 containerd[1614]: time="2025-10-13T05:27:29.092080228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec\" id:\"9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec\" pid:3676 exited_at:{seconds:1760333249 nanos:91169639}" Oct 13 05:27:29.121470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c612df36e850f2094f89379a9c3f9d553e4399e3890b629a0b0192cdc9850ec-rootfs.mount: Deactivated successfully. Oct 13 05:27:29.209604 kubelet[2819]: I1013 05:27:29.159760 2819 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 05:27:29.379704 systemd[1]: Created slice kubepods-besteffort-pod762f9e88_d9dd_4f94_bef9_b3a498513c70.slice - libcontainer container kubepods-besteffort-pod762f9e88_d9dd_4f94_bef9_b3a498513c70.slice. Oct 13 05:27:30.240436 containerd[1614]: time="2025-10-13T05:27:30.240179865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gggtg,Uid:762f9e88-d9dd-4f94-bef9-b3a498513c70,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:30.249195 systemd[1]: Created slice kubepods-burstable-pod0d256d4f_6316_488a_8e01_b25dcf74417b.slice - libcontainer container kubepods-burstable-pod0d256d4f_6316_488a_8e01_b25dcf74417b.slice. Oct 13 05:27:30.286560 systemd[1]: Created slice kubepods-besteffort-pod4a651b8a_51d0_42d7_b97b_59845495263f.slice - libcontainer container kubepods-besteffort-pod4a651b8a_51d0_42d7_b97b_59845495263f.slice. 
Oct 13 05:27:30.295141 kubelet[2819]: I1013 05:27:30.294941 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d256d4f-6316-488a-8e01-b25dcf74417b-config-volume\") pod \"coredns-66bc5c9577-84qpn\" (UID: \"0d256d4f-6316-488a-8e01-b25dcf74417b\") " pod="kube-system/coredns-66bc5c9577-84qpn" Oct 13 05:27:30.295141 kubelet[2819]: I1013 05:27:30.294986 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a651b8a-51d0-42d7-b97b-59845495263f-tigera-ca-bundle\") pod \"calico-kube-controllers-5968f678cf-sx9rw\" (UID: \"4a651b8a-51d0-42d7-b97b-59845495263f\") " pod="calico-system/calico-kube-controllers-5968f678cf-sx9rw" Oct 13 05:27:30.295141 kubelet[2819]: I1013 05:27:30.295010 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzmzk\" (UniqueName: \"kubernetes.io/projected/4a651b8a-51d0-42d7-b97b-59845495263f-kube-api-access-bzmzk\") pod \"calico-kube-controllers-5968f678cf-sx9rw\" (UID: \"4a651b8a-51d0-42d7-b97b-59845495263f\") " pod="calico-system/calico-kube-controllers-5968f678cf-sx9rw" Oct 13 05:27:30.295141 kubelet[2819]: I1013 05:27:30.295034 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5xz\" (UniqueName: \"kubernetes.io/projected/0d256d4f-6316-488a-8e01-b25dcf74417b-kube-api-access-7q5xz\") pod \"coredns-66bc5c9577-84qpn\" (UID: \"0d256d4f-6316-488a-8e01-b25dcf74417b\") " pod="kube-system/coredns-66bc5c9577-84qpn" Oct 13 05:27:30.295141 kubelet[2819]: I1013 05:27:30.295052 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqs5\" (UniqueName: \"kubernetes.io/projected/d4cacd58-560a-4299-8f08-e71296199ad7-kube-api-access-kmqs5\") pod \"calico-apiserver-5c64875bd7-9fqrj\" (UID: \"d4cacd58-560a-4299-8f08-e71296199ad7\") " pod="calico-apiserver/calico-apiserver-5c64875bd7-9fqrj" Oct 13 05:27:30.297429 kubelet[2819]: I1013 05:27:30.295453 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwxg9\" (UniqueName: \"kubernetes.io/projected/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-kube-api-access-zwxg9\") pod \"whisker-66d5966dd-jvbwj\" (UID: \"4acb0fa4-ab06-4079-9ed1-c9b406f95ac2\") " pod="calico-system/whisker-66d5966dd-jvbwj" Oct 13 05:27:30.297429 kubelet[2819]: I1013 05:27:30.295538 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-whisker-backend-key-pair\") pod \"whisker-66d5966dd-jvbwj\" (UID: \"4acb0fa4-ab06-4079-9ed1-c9b406f95ac2\") " pod="calico-system/whisker-66d5966dd-jvbwj" Oct 13 05:27:30.300074 kubelet[2819]: I1013 05:27:30.295560 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-whisker-ca-bundle\") pod \"whisker-66d5966dd-jvbwj\" (UID: \"4acb0fa4-ab06-4079-9ed1-c9b406f95ac2\") " pod="calico-system/whisker-66d5966dd-jvbwj" Oct 13 05:27:30.300074 kubelet[2819]: I1013 05:27:30.298875 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/d4cacd58-560a-4299-8f08-e71296199ad7-calico-apiserver-certs\") pod \"calico-apiserver-5c64875bd7-9fqrj\" (UID: \"d4cacd58-560a-4299-8f08-e71296199ad7\") " pod="calico-apiserver/calico-apiserver-5c64875bd7-9fqrj" Oct 13 05:27:30.302749 systemd[1]: Created slice kubepods-besteffort-pod4acb0fa4_ab06_4079_9ed1_c9b406f95ac2.slice - libcontainer container kubepods-besteffort-pod4acb0fa4_ab06_4079_9ed1_c9b406f95ac2.slice. Oct 13 05:27:30.320044 systemd[1]: Created slice kubepods-besteffort-podd4cacd58_560a_4299_8f08_e71296199ad7.slice - libcontainer container kubepods-besteffort-podd4cacd58_560a_4299_8f08_e71296199ad7.slice. Oct 13 05:27:30.332445 systemd[1]: Created slice kubepods-besteffort-podb0c63581_1952_443f_8230_1187b1b3acad.slice - libcontainer container kubepods-besteffort-podb0c63581_1952_443f_8230_1187b1b3acad.slice. Oct 13 05:27:30.353835 systemd[1]: Created slice kubepods-besteffort-pod325a437e_4d94_40e0_ba26_f0ae3ef19e76.slice - libcontainer container kubepods-besteffort-pod325a437e_4d94_40e0_ba26_f0ae3ef19e76.slice. Oct 13 05:27:30.374882 systemd[1]: Created slice kubepods-besteffort-pod1479d4a9_8275_4077_8b49_3a9f1a6a6634.slice - libcontainer container kubepods-besteffort-pod1479d4a9_8275_4077_8b49_3a9f1a6a6634.slice. Oct 13 05:27:30.386551 systemd[1]: Created slice kubepods-burstable-pod190de19b_7d74_4b26_8ad5_9c7bdc2cd0a8.slice - libcontainer container kubepods-burstable-pod190de19b_7d74_4b26_8ad5_9c7bdc2cd0a8.slice. Oct 13 05:27:30.401976 kubelet[2819]: I1013 05:27:30.401260 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8-config-volume\") pod \"coredns-66bc5c9577-hsmgj\" (UID: \"190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8\") " pod="kube-system/coredns-66bc5c9577-hsmgj" Oct 13 05:27:30.401976 kubelet[2819]: I1013 05:27:30.401630 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0c63581-1952-443f-8230-1187b1b3acad-goldmane-ca-bundle\") pod \"goldmane-854f97d977-hkddf\" (UID: \"b0c63581-1952-443f-8230-1187b1b3acad\") " pod="calico-system/goldmane-854f97d977-hkddf" Oct 13 05:27:30.401976 kubelet[2819]: I1013 05:27:30.401663 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvp64\" (UniqueName: \"kubernetes.io/projected/325a437e-4d94-40e0-ba26-f0ae3ef19e76-kube-api-access-vvp64\") pod \"calico-apiserver-57ccb6c94f-mm55g\" (UID: \"325a437e-4d94-40e0-ba26-f0ae3ef19e76\") " pod="calico-apiserver/calico-apiserver-57ccb6c94f-mm55g" Oct 13 05:27:30.401976 kubelet[2819]: I1013 05:27:30.401679 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tktg\" (UniqueName: \"kubernetes.io/projected/190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8-kube-api-access-8tktg\") pod \"coredns-66bc5c9577-hsmgj\" (UID: \"190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8\") " pod="kube-system/coredns-66bc5c9577-hsmgj" Oct 13 05:27:30.401976 kubelet[2819]: I1013 05:27:30.401734 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56t7p\" (UniqueName: \"kubernetes.io/projected/b0c63581-1952-443f-8230-1187b1b3acad-kube-api-access-56t7p\") pod \"goldmane-854f97d977-hkddf\" (UID: \"b0c63581-1952-443f-8230-1187b1b3acad\") " 
pod="calico-system/goldmane-854f97d977-hkddf" Oct 13 05:27:30.402341 kubelet[2819]: I1013 05:27:30.401752 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwnhp\" (UniqueName: \"kubernetes.io/projected/1479d4a9-8275-4077-8b49-3a9f1a6a6634-kube-api-access-hwnhp\") pod \"calico-apiserver-57ccb6c94f-brhgw\" (UID: \"1479d4a9-8275-4077-8b49-3a9f1a6a6634\") " pod="calico-apiserver/calico-apiserver-57ccb6c94f-brhgw" Oct 13 05:27:30.402341 kubelet[2819]: I1013 05:27:30.401769 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b0c63581-1952-443f-8230-1187b1b3acad-goldmane-key-pair\") pod \"goldmane-854f97d977-hkddf\" (UID: \"b0c63581-1952-443f-8230-1187b1b3acad\") " pod="calico-system/goldmane-854f97d977-hkddf" Oct 13 05:27:30.402341 kubelet[2819]: I1013 05:27:30.401793 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0c63581-1952-443f-8230-1187b1b3acad-config\") pod \"goldmane-854f97d977-hkddf\" (UID: \"b0c63581-1952-443f-8230-1187b1b3acad\") " pod="calico-system/goldmane-854f97d977-hkddf" Oct 13 05:27:30.402341 kubelet[2819]: I1013 05:27:30.401811 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1479d4a9-8275-4077-8b49-3a9f1a6a6634-calico-apiserver-certs\") pod \"calico-apiserver-57ccb6c94f-brhgw\" (UID: \"1479d4a9-8275-4077-8b49-3a9f1a6a6634\") " pod="calico-apiserver/calico-apiserver-57ccb6c94f-brhgw" Oct 13 05:27:30.402341 kubelet[2819]: I1013 05:27:30.401826 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/325a437e-4d94-40e0-ba26-f0ae3ef19e76-calico-apiserver-certs\") pod \"calico-apiserver-57ccb6c94f-mm55g\" (UID: \"325a437e-4d94-40e0-ba26-f0ae3ef19e76\") " pod="calico-apiserver/calico-apiserver-57ccb6c94f-mm55g" Oct 13 05:27:30.474999 containerd[1614]: time="2025-10-13T05:27:30.474785610Z" level=error msg="Failed to destroy network for sandbox \"4d399e5a3aae7f97c7b764dc32ade7994f1cd619b2eb63fce43f970ef722da58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.478448 containerd[1614]: time="2025-10-13T05:27:30.478382388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gggtg,Uid:762f9e88-d9dd-4f94-bef9-b3a498513c70,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d399e5a3aae7f97c7b764dc32ade7994f1cd619b2eb63fce43f970ef722da58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.493308 kubelet[2819]: E1013 05:27:30.493121 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d399e5a3aae7f97c7b764dc32ade7994f1cd619b2eb63fce43f970ef722da58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Oct 13 05:27:30.493308 kubelet[2819]: E1013 05:27:30.493230 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d399e5a3aae7f97c7b764dc32ade7994f1cd619b2eb63fce43f970ef722da58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gggtg" Oct 13 05:27:30.493308 kubelet[2819]: E1013 05:27:30.493250 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d399e5a3aae7f97c7b764dc32ade7994f1cd619b2eb63fce43f970ef722da58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gggtg" Oct 13 05:27:30.493647 kubelet[2819]: E1013 05:27:30.493337 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gggtg_calico-system(762f9e88-d9dd-4f94-bef9-b3a498513c70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gggtg_calico-system(762f9e88-d9dd-4f94-bef9-b3a498513c70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d399e5a3aae7f97c7b764dc32ade7994f1cd619b2eb63fce43f970ef722da58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:30.555500 containerd[1614]: time="2025-10-13T05:27:30.555455378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:27:30.563238 kubelet[2819]: E1013 05:27:30.563197 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:30.564111 containerd[1614]: time="2025-10-13T05:27:30.564057496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-84qpn,Uid:0d256d4f-6316-488a-8e01-b25dcf74417b,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:30.601665 containerd[1614]: time="2025-10-13T05:27:30.601479627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5968f678cf-sx9rw,Uid:4a651b8a-51d0-42d7-b97b-59845495263f,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:30.620308 containerd[1614]: time="2025-10-13T05:27:30.620242346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d5966dd-jvbwj,Uid:4acb0fa4-ab06-4079-9ed1-c9b406f95ac2,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:30.636442 containerd[1614]: time="2025-10-13T05:27:30.636359476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c64875bd7-9fqrj,Uid:d4cacd58-560a-4299-8f08-e71296199ad7,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:30.647590 containerd[1614]: time="2025-10-13T05:27:30.647151910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hkddf,Uid:b0c63581-1952-443f-8230-1187b1b3acad,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:30.673563 containerd[1614]: time="2025-10-13T05:27:30.672990006Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-mm55g,Uid:325a437e-4d94-40e0-ba26-f0ae3ef19e76,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:30.677210 containerd[1614]: time="2025-10-13T05:27:30.677126509Z" level=error msg="Failed to destroy network for sandbox \"b0eb073b3eca9f24317d9e17c7c6fd050c727190ce65556682fcc633510be8d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.686021 containerd[1614]: time="2025-10-13T05:27:30.685955433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-brhgw,Uid:1479d4a9-8275-4077-8b49-3a9f1a6a6634,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:30.700270 kubelet[2819]: E1013 05:27:30.700086 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:30.704615 containerd[1614]: time="2025-10-13T05:27:30.704439527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hsmgj,Uid:190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:30.714808 containerd[1614]: time="2025-10-13T05:27:30.714744015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-84qpn,Uid:0d256d4f-6316-488a-8e01-b25dcf74417b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0eb073b3eca9f24317d9e17c7c6fd050c727190ce65556682fcc633510be8d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.715636 kubelet[2819]: E1013 05:27:30.715557 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0eb073b3eca9f24317d9e17c7c6fd050c727190ce65556682fcc633510be8d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.716958 kubelet[2819]: E1013 05:27:30.716321 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0eb073b3eca9f24317d9e17c7c6fd050c727190ce65556682fcc633510be8d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-84qpn" Oct 13 05:27:30.716958 kubelet[2819]: E1013 05:27:30.716496 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0eb073b3eca9f24317d9e17c7c6fd050c727190ce65556682fcc633510be8d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-84qpn" Oct 13 05:27:30.719283 kubelet[2819]: E1013 05:27:30.717896 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-84qpn_kube-system(0d256d4f-6316-488a-8e01-b25dcf74417b)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"coredns-66bc5c9577-84qpn_kube-system(0d256d4f-6316-488a-8e01-b25dcf74417b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0eb073b3eca9f24317d9e17c7c6fd050c727190ce65556682fcc633510be8d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-84qpn" podUID="0d256d4f-6316-488a-8e01-b25dcf74417b" Oct 13 05:27:30.756166 containerd[1614]: time="2025-10-13T05:27:30.755360219Z" level=error msg="Failed to destroy network for sandbox \"1b19d352071e0af89dcba6587b57f284cf94c3218c32ddddbcffb6311de8609c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.758456 containerd[1614]: time="2025-10-13T05:27:30.758396760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5968f678cf-sx9rw,Uid:4a651b8a-51d0-42d7-b97b-59845495263f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b19d352071e0af89dcba6587b57f284cf94c3218c32ddddbcffb6311de8609c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.758848 kubelet[2819]: E1013 05:27:30.758764 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b19d352071e0af89dcba6587b57f284cf94c3218c32ddddbcffb6311de8609c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.759088 kubelet[2819]: E1013 05:27:30.759061 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b19d352071e0af89dcba6587b57f284cf94c3218c32ddddbcffb6311de8609c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5968f678cf-sx9rw" Oct 13 05:27:30.759258 kubelet[2819]: E1013 05:27:30.759178 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b19d352071e0af89dcba6587b57f284cf94c3218c32ddddbcffb6311de8609c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5968f678cf-sx9rw" Oct 13 05:27:30.759592 kubelet[2819]: E1013 05:27:30.759355 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5968f678cf-sx9rw_calico-system(4a651b8a-51d0-42d7-b97b-59845495263f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5968f678cf-sx9rw_calico-system(4a651b8a-51d0-42d7-b97b-59845495263f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b19d352071e0af89dcba6587b57f284cf94c3218c32ddddbcffb6311de8609c\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5968f678cf-sx9rw" podUID="4a651b8a-51d0-42d7-b97b-59845495263f" Oct 13 05:27:30.764695 containerd[1614]: time="2025-10-13T05:27:30.764636292Z" level=error msg="Failed to destroy network for sandbox \"993e961effd77da9f6c9c21eccbae1b547b0c9dd15cb62bbb40f48edb7b76c15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.820473 containerd[1614]: time="2025-10-13T05:27:30.820408139Z" level=error msg="Failed to destroy network for sandbox \"be25326d26827eb2987f54e0f553990929d9bab688a3a33080d7a1bbf974c66c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.825536 containerd[1614]: time="2025-10-13T05:27:30.825487773Z" level=error msg="Failed to destroy network for sandbox \"2dddda258e035e0e442407030179e2617efe7e4c53d6ec04f61f644dedf0f047\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.826463 containerd[1614]: time="2025-10-13T05:27:30.826433288Z" level=error msg="Failed to destroy network for sandbox \"ff001f17d10ccdc083cd1aa32c98984ae3fc5b286f0eaa5fdfbde9e21b016f14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.831937 containerd[1614]: time="2025-10-13T05:27:30.831859799Z" level=error msg="Failed to destroy network for sandbox \"6b35bc03465fddc69f63fb929357f2d8ea45f5b0c3b5c30ec879e74b7b93a8e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.837226 containerd[1614]: time="2025-10-13T05:27:30.837092917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c64875bd7-9fqrj,Uid:d4cacd58-560a-4299-8f08-e71296199ad7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"993e961effd77da9f6c9c21eccbae1b547b0c9dd15cb62bbb40f48edb7b76c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.837630 kubelet[2819]: E1013 05:27:30.837572 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993e961effd77da9f6c9c21eccbae1b547b0c9dd15cb62bbb40f48edb7b76c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.837721 kubelet[2819]: E1013 05:27:30.837658 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993e961effd77da9f6c9c21eccbae1b547b0c9dd15cb62bbb40f48edb7b76c15\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c64875bd7-9fqrj" Oct 13 05:27:30.837721 kubelet[2819]: E1013 05:27:30.837684 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993e961effd77da9f6c9c21eccbae1b547b0c9dd15cb62bbb40f48edb7b76c15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c64875bd7-9fqrj" Oct 13 05:27:30.837797 kubelet[2819]: E1013 05:27:30.837757 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c64875bd7-9fqrj_calico-apiserver(d4cacd58-560a-4299-8f08-e71296199ad7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c64875bd7-9fqrj_calico-apiserver(d4cacd58-560a-4299-8f08-e71296199ad7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"993e961effd77da9f6c9c21eccbae1b547b0c9dd15cb62bbb40f48edb7b76c15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c64875bd7-9fqrj" podUID="d4cacd58-560a-4299-8f08-e71296199ad7" Oct 13 05:27:30.850241 containerd[1614]: time="2025-10-13T05:27:30.850170922Z" level=error msg="Failed to destroy network for sandbox \"3d00b674825903609849de19defeb814e15b4fd084e194eefce6e527a47530a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.859559 containerd[1614]: time="2025-10-13T05:27:30.859481240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hkddf,Uid:b0c63581-1952-443f-8230-1187b1b3acad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be25326d26827eb2987f54e0f553990929d9bab688a3a33080d7a1bbf974c66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.859946 kubelet[2819]: E1013 05:27:30.859874 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be25326d26827eb2987f54e0f553990929d9bab688a3a33080d7a1bbf974c66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.860006 kubelet[2819]: E1013 05:27:30.859984 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be25326d26827eb2987f54e0f553990929d9bab688a3a33080d7a1bbf974c66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-hkddf" Oct 13 05:27:30.860040 kubelet[2819]: E1013 05:27:30.860009 2819 kuberuntime_manager.go:1343] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be25326d26827eb2987f54e0f553990929d9bab688a3a33080d7a1bbf974c66c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-hkddf" Oct 13 05:27:30.860121 kubelet[2819]: E1013 05:27:30.860076 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-hkddf_calico-system(b0c63581-1952-443f-8230-1187b1b3acad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-hkddf_calico-system(b0c63581-1952-443f-8230-1187b1b3acad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be25326d26827eb2987f54e0f553990929d9bab688a3a33080d7a1bbf974c66c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-hkddf" podUID="b0c63581-1952-443f-8230-1187b1b3acad" Oct 13 05:27:30.936570 containerd[1614]: time="2025-10-13T05:27:30.936432106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-mm55g,Uid:325a437e-4d94-40e0-ba26-f0ae3ef19e76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dddda258e035e0e442407030179e2617efe7e4c53d6ec04f61f644dedf0f047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.936844 kubelet[2819]: E1013 05:27:30.936783 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dddda258e035e0e442407030179e2617efe7e4c53d6ec04f61f644dedf0f047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.936906 kubelet[2819]: E1013 05:27:30.936865 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dddda258e035e0e442407030179e2617efe7e4c53d6ec04f61f644dedf0f047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb6c94f-mm55g" Oct 13 05:27:30.936970 kubelet[2819]: E1013 05:27:30.936908 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dddda258e035e0e442407030179e2617efe7e4c53d6ec04f61f644dedf0f047\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb6c94f-mm55g" Oct 13 05:27:30.937051 kubelet[2819]: E1013 05:27:30.937009 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57ccb6c94f-mm55g_calico-apiserver(325a437e-4d94-40e0-ba26-f0ae3ef19e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-57ccb6c94f-mm55g_calico-apiserver(325a437e-4d94-40e0-ba26-f0ae3ef19e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dddda258e035e0e442407030179e2617efe7e4c53d6ec04f61f644dedf0f047\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57ccb6c94f-mm55g" podUID="325a437e-4d94-40e0-ba26-f0ae3ef19e76" Oct 13 05:27:30.962270 containerd[1614]: time="2025-10-13T05:27:30.962192793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d5966dd-jvbwj,Uid:4acb0fa4-ab06-4079-9ed1-c9b406f95ac2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff001f17d10ccdc083cd1aa32c98984ae3fc5b286f0eaa5fdfbde9e21b016f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.962599 kubelet[2819]: E1013 05:27:30.962542 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff001f17d10ccdc083cd1aa32c98984ae3fc5b286f0eaa5fdfbde9e21b016f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.962678 kubelet[2819]: E1013 05:27:30.962632 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff001f17d10ccdc083cd1aa32c98984ae3fc5b286f0eaa5fdfbde9e21b016f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66d5966dd-jvbwj" Oct 13 05:27:30.962678 kubelet[2819]: E1013 05:27:30.962659 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff001f17d10ccdc083cd1aa32c98984ae3fc5b286f0eaa5fdfbde9e21b016f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66d5966dd-jvbwj" Oct 13 05:27:30.962757 kubelet[2819]: E1013 05:27:30.962718 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66d5966dd-jvbwj_calico-system(4acb0fa4-ab06-4079-9ed1-c9b406f95ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66d5966dd-jvbwj_calico-system(4acb0fa4-ab06-4079-9ed1-c9b406f95ac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff001f17d10ccdc083cd1aa32c98984ae3fc5b286f0eaa5fdfbde9e21b016f14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66d5966dd-jvbwj" podUID="4acb0fa4-ab06-4079-9ed1-c9b406f95ac2" Oct 13 05:27:30.966326 containerd[1614]: time="2025-10-13T05:27:30.966244254Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-brhgw,Uid:1479d4a9-8275-4077-8b49-3a9f1a6a6634,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b35bc03465fddc69f63fb929357f2d8ea45f5b0c3b5c30ec879e74b7b93a8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.966695 kubelet[2819]: E1013 05:27:30.966642 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b35bc03465fddc69f63fb929357f2d8ea45f5b0c3b5c30ec879e74b7b93a8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:30.966771 kubelet[2819]: E1013 05:27:30.966709 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b35bc03465fddc69f63fb929357f2d8ea45f5b0c3b5c30ec879e74b7b93a8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb6c94f-brhgw" Oct 13 05:27:30.966771 kubelet[2819]: E1013 05:27:30.966759 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b35bc03465fddc69f63fb929357f2d8ea45f5b0c3b5c30ec879e74b7b93a8e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb6c94f-brhgw" Oct 13 05:27:30.966898 kubelet[2819]: E1013 05:27:30.966819 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57ccb6c94f-brhgw_calico-apiserver(1479d4a9-8275-4077-8b49-3a9f1a6a6634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57ccb6c94f-brhgw_calico-apiserver(1479d4a9-8275-4077-8b49-3a9f1a6a6634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b35bc03465fddc69f63fb929357f2d8ea45f5b0c3b5c30ec879e74b7b93a8e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57ccb6c94f-brhgw" podUID="1479d4a9-8275-4077-8b49-3a9f1a6a6634" Oct 13 05:27:31.033058 containerd[1614]: time="2025-10-13T05:27:31.032816803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hsmgj,Uid:190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d00b674825903609849de19defeb814e15b4fd084e194eefce6e527a47530a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:31.033339 kubelet[2819]: E1013 05:27:31.033260 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3d00b674825903609849de19defeb814e15b4fd084e194eefce6e527a47530a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:31.033522 kubelet[2819]: E1013 05:27:31.033348 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d00b674825903609849de19defeb814e15b4fd084e194eefce6e527a47530a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hsmgj" Oct 13 05:27:31.033522 kubelet[2819]: E1013 05:27:31.033372 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d00b674825903609849de19defeb814e15b4fd084e194eefce6e527a47530a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hsmgj" Oct 13 05:27:31.033522 kubelet[2819]: E1013 05:27:31.033436 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-hsmgj_kube-system(190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-hsmgj_kube-system(190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d00b674825903609849de19defeb814e15b4fd084e194eefce6e527a47530a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hsmgj" podUID="190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8" Oct 13 05:27:31.280175 systemd[1]: run-netns-cni\x2d531302bb\x2db2b3\x2d9e5e\x2d1e68\x2d13d68bad70e4.mount: Deactivated successfully. Oct 13 05:27:38.727785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091723285.mount: Deactivated successfully. 
Oct 13 05:27:44.274686 containerd[1614]: time="2025-10-13T05:27:44.274566804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gggtg,Uid:762f9e88-d9dd-4f94-bef9-b3a498513c70,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:44.399531 containerd[1614]: time="2025-10-13T05:27:44.399434143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-brhgw,Uid:1479d4a9-8275-4077-8b49-3a9f1a6a6634,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:44.424176 containerd[1614]: time="2025-10-13T05:27:44.424110403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66d5966dd-jvbwj,Uid:4acb0fa4-ab06-4079-9ed1-c9b406f95ac2,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:44.477906 kubelet[2819]: E1013 05:27:44.477581 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:44.478973 containerd[1614]: time="2025-10-13T05:27:44.478741671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-84qpn,Uid:0d256d4f-6316-488a-8e01-b25dcf74417b,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:44.482027 kubelet[2819]: E1013 05:27:44.481959 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:44.482673 containerd[1614]: time="2025-10-13T05:27:44.482620677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:44.486659 containerd[1614]: time="2025-10-13T05:27:44.483898022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hsmgj,Uid:190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:44.486659 containerd[1614]: time="2025-10-13T05:27:44.486073450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c64875bd7-9fqrj,Uid:d4cacd58-560a-4299-8f08-e71296199ad7,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:44.489654 containerd[1614]: time="2025-10-13T05:27:44.489609994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-mm55g,Uid:325a437e-4d94-40e0-ba26-f0ae3ef19e76,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:44.490244 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:41738.service - OpenSSH per-connection server daemon (10.0.0.1:41738). 
Oct 13 05:27:44.505977 containerd[1614]: time="2025-10-13T05:27:44.505234976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5968f678cf-sx9rw,Uid:4a651b8a-51d0-42d7-b97b-59845495263f,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:44.512432 containerd[1614]: time="2025-10-13T05:27:44.512377986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:27:44.547034 containerd[1614]: time="2025-10-13T05:27:44.546606381Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:44.655822 sshd[4020]: Accepted publickey for core from 10.0.0.1 port 41738 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:44.657732 sshd-session[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:44.674221 containerd[1614]: time="2025-10-13T05:27:44.674149922Z" level=error msg="Failed to destroy network for sandbox \"4c5f931c128280002119496a8b3db8cbc3e36d5a54ea298bf98098d7200f25a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.675904 systemd-logind[1585]: New session 10 of user core. Oct 13 05:27:44.683199 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:27:44.693599 containerd[1614]: time="2025-10-13T05:27:44.693479619Z" level=error msg="Failed to destroy network for sandbox \"fc3a0dd57a59c0e0e697c2cd89892701adb960cad5423f310fbf36157a7e79bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.696272 containerd[1614]: time="2025-10-13T05:27:44.696214073Z" level=error msg="Failed to destroy network for sandbox \"f0db2399654ad62a57f5be722d2c9f01d097315fa8a7e7f636b51f0d508c6d1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.714867 containerd[1614]: time="2025-10-13T05:27:44.714787938Z" level=error msg="Failed to destroy network for sandbox \"4e8b4a5fde718a5a523bf3bd66522644c18d2e5f2a4224166ca768db79f6afdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.718711 containerd[1614]: time="2025-10-13T05:27:44.718642278Z" level=error msg="Failed to destroy network for sandbox \"6877fc28b6983953621d5baa820e4f027d41f9417a7695e17885efb8e8a6111d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.731864 containerd[1614]: time="2025-10-13T05:27:44.731806808Z" level=error msg="Failed to destroy network for sandbox \"4dc4cdc7bae79bb2e435371de1a04990e082ddc75d2933db81ab04415361d17c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.735822 containerd[1614]: 
time="2025-10-13T05:27:44.735673942Z" level=error msg="Failed to destroy network for sandbox \"5a442efdfe00ea704169ec35d13c0ae54a8a05faf7d5992609ea5d171a0ee2e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.759226 containerd[1614]: time="2025-10-13T05:27:44.759177537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:44.759893 containerd[1614]: time="2025-10-13T05:27:44.759864858Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 14.204036754s" Oct 13 05:27:44.759971 containerd[1614]: time="2025-10-13T05:27:44.759896568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:27:44.771557 containerd[1614]: time="2025-10-13T05:27:44.771467311Z" level=error msg="Failed to destroy network for sandbox \"9efd7dc3454f3417d8ab207410f1ee35cd8496354a05e61e3bc91661204d262e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.810888 containerd[1614]: time="2025-10-13T05:27:44.810681551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-84qpn,Uid:0d256d4f-6316-488a-8e01-b25dcf74417b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5f931c128280002119496a8b3db8cbc3e36d5a54ea298bf98098d7200f25a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.811096 kubelet[2819]: E1013 05:27:44.811033 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5f931c128280002119496a8b3db8cbc3e36d5a54ea298bf98098d7200f25a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.811186 kubelet[2819]: E1013 05:27:44.811102 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5f931c128280002119496a8b3db8cbc3e36d5a54ea298bf98098d7200f25a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-84qpn" Oct 13 05:27:44.811186 kubelet[2819]: E1013 05:27:44.811130 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5f931c128280002119496a8b3db8cbc3e36d5a54ea298bf98098d7200f25a3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-84qpn" Oct 13 05:27:44.811250 kubelet[2819]: E1013 05:27:44.811205 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-84qpn_kube-system(0d256d4f-6316-488a-8e01-b25dcf74417b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-84qpn_kube-system(0d256d4f-6316-488a-8e01-b25dcf74417b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c5f931c128280002119496a8b3db8cbc3e36d5a54ea298bf98098d7200f25a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-84qpn" podUID="0d256d4f-6316-488a-8e01-b25dcf74417b" Oct 13 05:27:44.856160 containerd[1614]: time="2025-10-13T05:27:44.856083912Z" level=info msg="CreateContainer within sandbox \"1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:27:44.860075 containerd[1614]: time="2025-10-13T05:27:44.858606241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c64875bd7-9fqrj,Uid:d4cacd58-560a-4299-8f08-e71296199ad7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a0dd57a59c0e0e697c2cd89892701adb960cad5423f310fbf36157a7e79bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.860300 kubelet[2819]: E1013 05:27:44.859141 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a0dd57a59c0e0e697c2cd89892701adb960cad5423f310fbf36157a7e79bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.860300 kubelet[2819]: E1013 05:27:44.859217 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a0dd57a59c0e0e697c2cd89892701adb960cad5423f310fbf36157a7e79bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c64875bd7-9fqrj" Oct 13 05:27:44.860300 kubelet[2819]: E1013 05:27:44.859249 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc3a0dd57a59c0e0e697c2cd89892701adb960cad5423f310fbf36157a7e79bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c64875bd7-9fqrj" Oct 13 05:27:44.860458 kubelet[2819]: E1013 05:27:44.859321 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c64875bd7-9fqrj_calico-apiserver(d4cacd58-560a-4299-8f08-e71296199ad7)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-5c64875bd7-9fqrj_calico-apiserver(d4cacd58-560a-4299-8f08-e71296199ad7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc3a0dd57a59c0e0e697c2cd89892701adb960cad5423f310fbf36157a7e79bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c64875bd7-9fqrj" podUID="d4cacd58-560a-4299-8f08-e71296199ad7" Oct 13 05:27:44.887076 containerd[1614]: time="2025-10-13T05:27:44.886792845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gggtg,Uid:762f9e88-d9dd-4f94-bef9-b3a498513c70,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0db2399654ad62a57f5be722d2c9f01d097315fa8a7e7f636b51f0d508c6d1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.887359 kubelet[2819]: E1013 05:27:44.887259 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0db2399654ad62a57f5be722d2c9f01d097315fa8a7e7f636b51f0d508c6d1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.887359 kubelet[2819]: E1013 05:27:44.887350 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0db2399654ad62a57f5be722d2c9f01d097315fa8a7e7f636b51f0d508c6d1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gggtg" Oct 13 05:27:44.887459 kubelet[2819]: E1013 05:27:44.887373 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0db2399654ad62a57f5be722d2c9f01d097315fa8a7e7f636b51f0d508c6d1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gggtg" Oct 13 05:27:44.887496 kubelet[2819]: E1013 05:27:44.887458 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gggtg_calico-system(762f9e88-d9dd-4f94-bef9-b3a498513c70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gggtg_calico-system(762f9e88-d9dd-4f94-bef9-b3a498513c70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0db2399654ad62a57f5be722d2c9f01d097315fa8a7e7f636b51f0d508c6d1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gggtg" podUID="762f9e88-d9dd-4f94-bef9-b3a498513c70" Oct 13 05:27:44.944684 containerd[1614]: time="2025-10-13T05:27:44.944597594Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-66d5966dd-jvbwj,Uid:4acb0fa4-ab06-4079-9ed1-c9b406f95ac2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e8b4a5fde718a5a523bf3bd66522644c18d2e5f2a4224166ca768db79f6afdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.945201 kubelet[2819]: E1013 05:27:44.945136 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e8b4a5fde718a5a523bf3bd66522644c18d2e5f2a4224166ca768db79f6afdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:44.945298 kubelet[2819]: E1013 05:27:44.945203 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e8b4a5fde718a5a523bf3bd66522644c18d2e5f2a4224166ca768db79f6afdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66d5966dd-jvbwj" Oct 13 05:27:44.945298 kubelet[2819]: E1013 05:27:44.945228 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e8b4a5fde718a5a523bf3bd66522644c18d2e5f2a4224166ca768db79f6afdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66d5966dd-jvbwj" Oct 13 05:27:44.945364 kubelet[2819]: E1013 05:27:44.945306 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66d5966dd-jvbwj_calico-system(4acb0fa4-ab06-4079-9ed1-c9b406f95ac2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66d5966dd-jvbwj_calico-system(4acb0fa4-ab06-4079-9ed1-c9b406f95ac2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e8b4a5fde718a5a523bf3bd66522644c18d2e5f2a4224166ca768db79f6afdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66d5966dd-jvbwj" podUID="4acb0fa4-ab06-4079-9ed1-c9b406f95ac2" Oct 13 05:27:45.070393 containerd[1614]: time="2025-10-13T05:27:45.070116293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-brhgw,Uid:1479d4a9-8275-4077-8b49-3a9f1a6a6634,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6877fc28b6983953621d5baa820e4f027d41f9417a7695e17885efb8e8a6111d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:45.070612 kubelet[2819]: E1013 05:27:45.070571 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6877fc28b6983953621d5baa820e4f027d41f9417a7695e17885efb8e8a6111d\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:45.070683 kubelet[2819]: E1013 05:27:45.070649 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6877fc28b6983953621d5baa820e4f027d41f9417a7695e17885efb8e8a6111d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb6c94f-brhgw" Oct 13 05:27:45.070726 kubelet[2819]: E1013 05:27:45.070680 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6877fc28b6983953621d5baa820e4f027d41f9417a7695e17885efb8e8a6111d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb6c94f-brhgw" Oct 13 05:27:45.070804 kubelet[2819]: E1013 05:27:45.070759 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57ccb6c94f-brhgw_calico-apiserver(1479d4a9-8275-4077-8b49-3a9f1a6a6634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57ccb6c94f-brhgw_calico-apiserver(1479d4a9-8275-4077-8b49-3a9f1a6a6634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6877fc28b6983953621d5baa820e4f027d41f9417a7695e17885efb8e8a6111d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57ccb6c94f-brhgw" podUID="1479d4a9-8275-4077-8b49-3a9f1a6a6634" Oct 13 05:27:45.109746 containerd[1614]: time="2025-10-13T05:27:45.109526498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-mm55g,Uid:325a437e-4d94-40e0-ba26-f0ae3ef19e76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc4cdc7bae79bb2e435371de1a04990e082ddc75d2933db81ab04415361d17c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:45.109961 kubelet[2819]: E1013 05:27:45.109888 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc4cdc7bae79bb2e435371de1a04990e082ddc75d2933db81ab04415361d17c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:45.110025 kubelet[2819]: E1013 05:27:45.109980 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc4cdc7bae79bb2e435371de1a04990e082ddc75d2933db81ab04415361d17c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-57ccb6c94f-mm55g" Oct 13 05:27:45.110025 kubelet[2819]: E1013 05:27:45.110006 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dc4cdc7bae79bb2e435371de1a04990e082ddc75d2933db81ab04415361d17c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57ccb6c94f-mm55g" Oct 13 05:27:45.110159 kubelet[2819]: E1013 05:27:45.110091 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57ccb6c94f-mm55g_calico-apiserver(325a437e-4d94-40e0-ba26-f0ae3ef19e76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57ccb6c94f-mm55g_calico-apiserver(325a437e-4d94-40e0-ba26-f0ae3ef19e76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dc4cdc7bae79bb2e435371de1a04990e082ddc75d2933db81ab04415361d17c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57ccb6c94f-mm55g" podUID="325a437e-4d94-40e0-ba26-f0ae3ef19e76" Oct 13 05:27:45.159935 sshd[4230]: Connection closed by 10.0.0.1 port 41738 Oct 13 05:27:45.160276 sshd-session[4020]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:45.160442 containerd[1614]: time="2025-10-13T05:27:45.160289049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hsmgj,Uid:190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a442efdfe00ea704169ec35d13c0ae54a8a05faf7d5992609ea5d171a0ee2e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:45.160902 kubelet[2819]: E1013 05:27:45.160778 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a442efdfe00ea704169ec35d13c0ae54a8a05faf7d5992609ea5d171a0ee2e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:45.160902 kubelet[2819]: E1013 05:27:45.160848 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a442efdfe00ea704169ec35d13c0ae54a8a05faf7d5992609ea5d171a0ee2e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hsmgj" Oct 13 05:27:45.160902 kubelet[2819]: E1013 05:27:45.160869 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a442efdfe00ea704169ec35d13c0ae54a8a05faf7d5992609ea5d171a0ee2e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-hsmgj" Oct 13 05:27:45.161032 kubelet[2819]: E1013 05:27:45.160959 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-hsmgj_kube-system(190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-hsmgj_kube-system(190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a442efdfe00ea704169ec35d13c0ae54a8a05faf7d5992609ea5d171a0ee2e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hsmgj" podUID="190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8" Oct 13 05:27:45.165119 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:41738.service: Deactivated successfully. Oct 13 05:27:45.167387 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:27:45.168507 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:27:45.170686 systemd-logind[1585]: Removed session 10. Oct 13 05:27:45.263669 containerd[1614]: time="2025-10-13T05:27:45.263591061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5968f678cf-sx9rw,Uid:4a651b8a-51d0-42d7-b97b-59845495263f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efd7dc3454f3417d8ab207410f1ee35cd8496354a05e61e3bc91661204d262e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:45.264013 kubelet[2819]: E1013 05:27:45.263957 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efd7dc3454f3417d8ab207410f1ee35cd8496354a05e61e3bc91661204d262e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:45.264100 kubelet[2819]: E1013 05:27:45.264035 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efd7dc3454f3417d8ab207410f1ee35cd8496354a05e61e3bc91661204d262e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5968f678cf-sx9rw" Oct 13 05:27:45.264100 kubelet[2819]: E1013 05:27:45.264064 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9efd7dc3454f3417d8ab207410f1ee35cd8496354a05e61e3bc91661204d262e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5968f678cf-sx9rw" Oct 13 05:27:45.264186 kubelet[2819]: E1013 05:27:45.264150 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5968f678cf-sx9rw_calico-system(4a651b8a-51d0-42d7-b97b-59845495263f)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-kube-controllers-5968f678cf-sx9rw_calico-system(4a651b8a-51d0-42d7-b97b-59845495263f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9efd7dc3454f3417d8ab207410f1ee35cd8496354a05e61e3bc91661204d262e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5968f678cf-sx9rw" podUID="4a651b8a-51d0-42d7-b97b-59845495263f" Oct 13 05:27:45.430064 systemd[1]: run-netns-cni\x2db4e773f6\x2dcebf\x2d8ad4\x2dd1ce\x2d3058cb98d2b5.mount: Deactivated successfully. Oct 13 05:27:45.430207 systemd[1]: run-netns-cni\x2d4d19fd10\x2d5fea\x2da2f3\x2d0653\x2df4d869a91932.mount: Deactivated successfully. Oct 13 05:27:45.430287 systemd[1]: run-netns-cni\x2dad0e2ed5\x2d7456\x2d1b01\x2d3a4a\x2d404a88212a79.mount: Deactivated successfully. Oct 13 05:27:45.430375 systemd[1]: run-netns-cni\x2d5d9d7851\x2d2e61\x2da857\x2dcffb\x2dfc62218be3ea.mount: Deactivated successfully. Oct 13 05:27:45.430450 systemd[1]: run-netns-cni\x2d30c52d63\x2d0e1c\x2dc22d\x2d9dc2\x2df4f54b676c94.mount: Deactivated successfully. Oct 13 05:27:45.950822 containerd[1614]: time="2025-10-13T05:27:45.950737137Z" level=info msg="Container 7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:46.163635 containerd[1614]: time="2025-10-13T05:27:46.163578769Z" level=info msg="CreateContainer within sandbox \"1939bab4798314c6300b360f523836f45a449ad7d238fc42a4347d09832a5c85\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4\"" Oct 13 05:27:46.164214 containerd[1614]: time="2025-10-13T05:27:46.164186938Z" level=info msg="StartContainer for \"7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4\"" Oct 13 05:27:46.166062 containerd[1614]: time="2025-10-13T05:27:46.166035841Z" level=info msg="connecting to shim 7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4" address="unix:///run/containerd/s/b801a82be18108bd86f8f3ca93ae1e31d3b6ff01cc088da493c70d0fdf5cf901" protocol=ttrpc version=3 Oct 13 05:27:46.241240 systemd[1]: Started cri-containerd-7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4.scope - libcontainer container 7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4. Oct 13 05:27:46.336948 containerd[1614]: time="2025-10-13T05:27:46.336856566Z" level=info msg="StartContainer for \"7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4\" returns successfully" Oct 13 05:27:46.397368 containerd[1614]: time="2025-10-13T05:27:46.397317399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hkddf,Uid:b0c63581-1952-443f-8230-1187b1b3acad,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:46.477167 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:27:46.478196 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:27:46.630555 containerd[1614]: time="2025-10-13T05:27:46.630487084Z" level=error msg="Failed to destroy network for sandbox \"62bbc21fc59fd43ad37a255e4d9016771d931931cb48c521741b97183000f01c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:46.637125 systemd[1]: run-netns-cni\x2d19718f1b\x2d6941\x2d8581\x2d06cf\x2de298aeb46956.mount: Deactivated successfully. Oct 13 05:27:46.701640 containerd[1614]: time="2025-10-13T05:27:46.701547561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hkddf,Uid:b0c63581-1952-443f-8230-1187b1b3acad,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bbc21fc59fd43ad37a255e4d9016771d931931cb48c521741b97183000f01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:46.701956 kubelet[2819]: E1013 05:27:46.701887 2819 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bbc21fc59fd43ad37a255e4d9016771d931931cb48c521741b97183000f01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:46.702323 kubelet[2819]: E1013 05:27:46.701982 2819 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bbc21fc59fd43ad37a255e4d9016771d931931cb48c521741b97183000f01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-hkddf" Oct 13 05:27:46.702323 kubelet[2819]: E1013 05:27:46.702004 2819 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62bbc21fc59fd43ad37a255e4d9016771d931931cb48c521741b97183000f01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-hkddf" Oct 13 05:27:46.702323 kubelet[2819]: E1013 05:27:46.702061 2819 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-hkddf_calico-system(b0c63581-1952-443f-8230-1187b1b3acad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-hkddf_calico-system(b0c63581-1952-443f-8230-1187b1b3acad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62bbc21fc59fd43ad37a255e4d9016771d931931cb48c521741b97183000f01c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-hkddf" podUID="b0c63581-1952-443f-8230-1187b1b3acad" Oct 13 05:27:46.785271 containerd[1614]: time="2025-10-13T05:27:46.785219157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4\" 
id:\"c479e767cae39af934740ff93f9f8e8a579c7d3e7941a94cc8eb91aca9d810b9\" pid:4389 exit_status:1 exited_at:{seconds:1760333266 nanos:784827230}" Oct 13 05:27:47.132514 kubelet[2819]: I1013 05:27:47.132407 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-22mlb" podStartSLOduration=3.684873977 podStartE2EDuration="46.132381243s" podCreationTimestamp="2025-10-13 05:27:01 +0000 UTC" firstStartedPulling="2025-10-13 05:27:02.31336919 +0000 UTC m=+28.861961103" lastFinishedPulling="2025-10-13 05:27:44.760876456 +0000 UTC m=+71.309468369" observedRunningTime="2025-10-13 05:27:46.935477827 +0000 UTC m=+73.484069740" watchObservedRunningTime="2025-10-13 05:27:47.132381243 +0000 UTC m=+73.680973146" Oct 13 05:27:47.216217 kubelet[2819]: I1013 05:27:47.216107 2819 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwxg9\" (UniqueName: \"kubernetes.io/projected/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-kube-api-access-zwxg9\") pod \"4acb0fa4-ab06-4079-9ed1-c9b406f95ac2\" (UID: \"4acb0fa4-ab06-4079-9ed1-c9b406f95ac2\") " Oct 13 05:27:47.216217 kubelet[2819]: I1013 05:27:47.216158 2819 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-whisker-ca-bundle\") pod \"4acb0fa4-ab06-4079-9ed1-c9b406f95ac2\" (UID: \"4acb0fa4-ab06-4079-9ed1-c9b406f95ac2\") " Oct 13 05:27:47.216217 kubelet[2819]: I1013 05:27:47.216180 2819 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-whisker-backend-key-pair\") pod \"4acb0fa4-ab06-4079-9ed1-c9b406f95ac2\" (UID: \"4acb0fa4-ab06-4079-9ed1-c9b406f95ac2\") " Oct 13 05:27:47.217745 kubelet[2819]: I1013 05:27:47.217702 2819 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4acb0fa4-ab06-4079-9ed1-c9b406f95ac2" (UID: "4acb0fa4-ab06-4079-9ed1-c9b406f95ac2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:27:47.222139 kubelet[2819]: I1013 05:27:47.222058 2819 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4acb0fa4-ab06-4079-9ed1-c9b406f95ac2" (UID: "4acb0fa4-ab06-4079-9ed1-c9b406f95ac2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:27:47.223065 kubelet[2819]: I1013 05:27:47.222996 2819 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-kube-api-access-zwxg9" (OuterVolumeSpecName: "kube-api-access-zwxg9") pod "4acb0fa4-ab06-4079-9ed1-c9b406f95ac2" (UID: "4acb0fa4-ab06-4079-9ed1-c9b406f95ac2"). InnerVolumeSpecName "kube-api-access-zwxg9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:27:47.224398 systemd[1]: var-lib-kubelet-pods-4acb0fa4\x2dab06\x2d4079\x2d9ed1\x2dc9b406f95ac2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzwxg9.mount: Deactivated successfully. 
Oct 13 05:27:47.224590 systemd[1]: var-lib-kubelet-pods-4acb0fa4\x2dab06\x2d4079\x2d9ed1\x2dc9b406f95ac2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:27:47.317514 kubelet[2819]: I1013 05:27:47.317464 2819 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zwxg9\" (UniqueName: \"kubernetes.io/projected/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-kube-api-access-zwxg9\") on node \"localhost\" DevicePath \"\"" Oct 13 05:27:47.317514 kubelet[2819]: I1013 05:27:47.317501 2819 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 05:27:47.317514 kubelet[2819]: I1013 05:27:47.317524 2819 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 05:27:47.614899 systemd[1]: Removed slice kubepods-besteffort-pod4acb0fa4_ab06_4079_9ed1_c9b406f95ac2.slice - libcontainer container kubepods-besteffort-pod4acb0fa4_ab06_4079_9ed1_c9b406f95ac2.slice. Oct 13 05:27:47.686995 systemd[1]: Created slice kubepods-besteffort-podfd76585b_cc99_4e01_a3d3_ce1d7deb3d1c.slice - libcontainer container kubepods-besteffort-podfd76585b_cc99_4e01_a3d3_ce1d7deb3d1c.slice. Oct 13 05:27:47.715969 containerd[1614]: time="2025-10-13T05:27:47.715888443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4\" id:\"2a8808f3f364cd83a21fcf577a4f404f185265680d49177f46654cba8e72a33f\" pid:4436 exit_status:1 exited_at:{seconds:1760333267 nanos:715522176}" Oct 13 05:27:47.720156 kubelet[2819]: I1013 05:27:47.720115 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c-whisker-ca-bundle\") pod \"whisker-5579d757c9-d7xv8\" (UID: \"fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c\") " pod="calico-system/whisker-5579d757c9-d7xv8" Oct 13 05:27:47.720500 kubelet[2819]: I1013 05:27:47.720166 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5jkb\" (UniqueName: \"kubernetes.io/projected/fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c-kube-api-access-n5jkb\") pod \"whisker-5579d757c9-d7xv8\" (UID: \"fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c\") " pod="calico-system/whisker-5579d757c9-d7xv8" Oct 13 05:27:47.720500 kubelet[2819]: I1013 05:27:47.720224 2819 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c-whisker-backend-key-pair\") pod \"whisker-5579d757c9-d7xv8\" (UID: \"fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c\") " pod="calico-system/whisker-5579d757c9-d7xv8" Oct 13 05:27:48.335365 containerd[1614]: time="2025-10-13T05:27:48.335300105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5579d757c9-d7xv8,Uid:fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:48.378317 kubelet[2819]: I1013 05:27:48.378249 2819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4acb0fa4-ab06-4079-9ed1-c9b406f95ac2" path="/var/lib/kubelet/pods/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2/volumes" 
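The mount units being deactivated at the top of this line (and the run-netns units earlier) are systemd-escaped paths: '/' becomes '-', and bytes outside roughly [A-Za-z0-9:_.] are written as \xNN, which is why every literal '-' in a pod UID appears as \x2d and '~' as \x7e. A rough Go approximation of that escaping, enough to reproduce the unit names in this log; it is not systemd's implementation and ignores edge cases such as leading dots and empty paths:

package main

import (
	"fmt"
	"strings"
)

// escapeMountUnit approximates systemd path escaping: strip slashes at the
// ends, turn '/' into '-', keep [A-Za-z0-9:_] and non-leading '.', and
// hex-escape everything else as \xNN.
func escapeMountUnit(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_', c == ':':
			b.WriteByte(c)
		case c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(escapeMountUnit(
		"/var/lib/kubelet/pods/4acb0fa4-ab06-4079-9ed1-c9b406f95ac2/volumes/kubernetes.io~secret/whisker-backend-key-pair"))
	// prints the same unit name seen above:
	// var-lib-kubelet-pods-4acb0fa4\x2dab06\x2d4079\x2d9ed1\x2dc9b406f95ac2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount
}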
Oct 13 05:27:48.908682 systemd-networkd[1513]: vxlan.calico: Link UP Oct 13 05:27:48.908696 systemd-networkd[1513]: vxlan.calico: Gained carrier Oct 13 05:27:50.174882 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:54370.service - OpenSSH per-connection server daemon (10.0.0.1:54370). Oct 13 05:27:50.260145 sshd[4677]: Accepted publickey for core from 10.0.0.1 port 54370 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:50.261854 sshd-session[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:50.266714 systemd-logind[1585]: New session 11 of user core. Oct 13 05:27:50.274084 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:27:50.492265 sshd[4680]: Connection closed by 10.0.0.1 port 54370 Oct 13 05:27:50.493205 sshd-session[4677]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:50.498565 systemd-networkd[1513]: calibfc94147ec6: Link UP Oct 13 05:27:50.498849 systemd-networkd[1513]: calibfc94147ec6: Gained carrier Oct 13 05:27:50.498906 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:54370.service: Deactivated successfully. Oct 13 05:27:50.502006 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:27:50.503084 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:27:50.506457 systemd-logind[1585]: Removed session 11. Oct 13 05:27:50.519428 containerd[1614]: 2025-10-13 05:27:48.506 [INFO][4547] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:27:50.519428 containerd[1614]: 2025-10-13 05:27:48.578 [INFO][4547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5579d757c9--d7xv8-eth0 whisker-5579d757c9- calico-system fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c 1090 0 2025-10-13 05:27:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5579d757c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5579d757c9-d7xv8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibfc94147ec6 [] [] }} ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Namespace="calico-system" Pod="whisker-5579d757c9-d7xv8" WorkloadEndpoint="localhost-k8s-whisker--5579d757c9--d7xv8-" Oct 13 05:27:50.519428 containerd[1614]: 2025-10-13 05:27:48.579 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Namespace="calico-system" Pod="whisker-5579d757c9-d7xv8" WorkloadEndpoint="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" Oct 13 05:27:50.519428 containerd[1614]: 2025-10-13 05:27:49.812 [INFO][4593] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" HandleID="k8s-pod-network.605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Workload="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:49.814 [INFO][4593] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" HandleID="k8s-pod-network.605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Workload="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000be3e0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5579d757c9-d7xv8", "timestamp":"2025-10-13 05:27:49.812473271 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:49.814 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:49.819 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:49.819 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:50.041 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" host="localhost" Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:50.089 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:50.093 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:50.095 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:50.097 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:50.520191 containerd[1614]: 2025-10-13 05:27:50.097 [INFO][4593] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" host="localhost" Oct 13 05:27:50.520506 containerd[1614]: 2025-10-13 05:27:50.099 [INFO][4593] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2 Oct 13 05:27:50.520506 containerd[1614]: 2025-10-13 05:27:50.205 [INFO][4593] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" host="localhost" Oct 13 05:27:50.520506 containerd[1614]: 2025-10-13 05:27:50.487 [INFO][4593] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" host="localhost" Oct 13 05:27:50.520506 containerd[1614]: 2025-10-13 05:27:50.487 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" host="localhost" Oct 13 05:27:50.520506 containerd[1614]: 2025-10-13 05:27:50.487 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
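The [INFO][4593] ipam lines above trace Calico's allocation path for the whisker pod: take the host-wide IPAM lock, find this host's affine block (192.168.88.128/26), load it, assign one address from it, record the handle, and release the lock, with 192.168.88.129 coming out the other end. A toy first-free-address picker over that /26, just to make the result concrete; this is not Calico's IPAM (which also journals handles and affinities in the datastore), and the claim that .128 is already taken by the vxlan.calico tunnel address is an assumption about this host:

package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks a block in address order and returns the first address not
// already marked as used.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // this host's affine block, per the log

	// Assumption: the block's first address is already claimed on this host
	// (Calico typically gives it to the vxlan.calico tunnel device).
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true,
	}

	if addr, ok := firstFree(block, used); ok {
		fmt.Printf("assign %s from block %s\n", addr, block) // 192.168.88.129, as in the log
	}
}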
Oct 13 05:27:50.520506 containerd[1614]: 2025-10-13 05:27:50.487 [INFO][4593] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" HandleID="k8s-pod-network.605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Workload="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" Oct 13 05:27:50.520660 containerd[1614]: 2025-10-13 05:27:50.491 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Namespace="calico-system" Pod="whisker-5579d757c9-d7xv8" WorkloadEndpoint="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5579d757c9--d7xv8-eth0", GenerateName:"whisker-5579d757c9-", Namespace:"calico-system", SelfLink:"", UID:"fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5579d757c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5579d757c9-d7xv8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibfc94147ec6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:50.520660 containerd[1614]: 2025-10-13 05:27:50.491 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Namespace="calico-system" Pod="whisker-5579d757c9-d7xv8" WorkloadEndpoint="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" Oct 13 05:27:50.520760 containerd[1614]: 2025-10-13 05:27:50.492 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfc94147ec6 ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Namespace="calico-system" Pod="whisker-5579d757c9-d7xv8" WorkloadEndpoint="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" Oct 13 05:27:50.520760 containerd[1614]: 2025-10-13 05:27:50.500 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Namespace="calico-system" Pod="whisker-5579d757c9-d7xv8" WorkloadEndpoint="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" Oct 13 05:27:50.520824 containerd[1614]: 2025-10-13 05:27:50.501 [INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Namespace="calico-system" Pod="whisker-5579d757c9-d7xv8" WorkloadEndpoint="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5579d757c9--d7xv8-eth0", GenerateName:"whisker-5579d757c9-", Namespace:"calico-system", SelfLink:"", UID:"fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5579d757c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2", Pod:"whisker-5579d757c9-d7xv8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibfc94147ec6", MAC:"42:1d:10:0a:26:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:50.520893 containerd[1614]: 2025-10-13 05:27:50.514 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" Namespace="calico-system" Pod="whisker-5579d757c9-d7xv8" WorkloadEndpoint="localhost-k8s-whisker--5579d757c9--d7xv8-eth0" Oct 13 05:27:50.667883 containerd[1614]: time="2025-10-13T05:27:50.667815284Z" level=info msg="connecting to shim 605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2" address="unix:///run/containerd/s/c7c4551198da380efe074e5b86ae05b8affb83446c65b1e625ed9216826556bf" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:50.720352 systemd[1]: Started cri-containerd-605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2.scope - libcontainer container 605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2. 
Oct 13 05:27:50.739026 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:50.773528 containerd[1614]: time="2025-10-13T05:27:50.773406645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5579d757c9-d7xv8,Uid:fd76585b-cc99-4e01-a3d3-ce1d7deb3d1c,Namespace:calico-system,Attempt:0,} returns sandbox id \"605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2\"" Oct 13 05:27:50.775670 containerd[1614]: time="2025-10-13T05:27:50.775622971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:27:50.986163 systemd-networkd[1513]: vxlan.calico: Gained IPv6LL Oct 13 05:27:51.946252 systemd-networkd[1513]: calibfc94147ec6: Gained IPv6LL Oct 13 05:27:52.126827 containerd[1614]: time="2025-10-13T05:27:52.126747439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:52.127659 containerd[1614]: time="2025-10-13T05:27:52.127618225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:27:52.129110 containerd[1614]: time="2025-10-13T05:27:52.129067260Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:52.131562 containerd[1614]: time="2025-10-13T05:27:52.131531656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:52.132382 containerd[1614]: time="2025-10-13T05:27:52.132269489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.356608956s" Oct 13 05:27:52.132382 containerd[1614]: time="2025-10-13T05:27:52.132347006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:27:52.137812 containerd[1614]: time="2025-10-13T05:27:52.137745510Z" level=info msg="CreateContainer within sandbox \"605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:27:52.147102 containerd[1614]: time="2025-10-13T05:27:52.147025144Z" level=info msg="Container 37acc58fbfdfe7fb9990f2e1f59f67486e8e0d433b22754b25ec0219ac119f9f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:52.155046 containerd[1614]: time="2025-10-13T05:27:52.154986821Z" level=info msg="CreateContainer within sandbox \"605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"37acc58fbfdfe7fb9990f2e1f59f67486e8e0d433b22754b25ec0219ac119f9f\"" Oct 13 05:27:52.155588 containerd[1614]: time="2025-10-13T05:27:52.155549380Z" level=info msg="StartContainer for \"37acc58fbfdfe7fb9990f2e1f59f67486e8e0d433b22754b25ec0219ac119f9f\"" Oct 13 05:27:52.156687 containerd[1614]: time="2025-10-13T05:27:52.156641567Z" level=info msg="connecting to shim 
37acc58fbfdfe7fb9990f2e1f59f67486e8e0d433b22754b25ec0219ac119f9f" address="unix:///run/containerd/s/c7c4551198da380efe074e5b86ae05b8affb83446c65b1e625ed9216826556bf" protocol=ttrpc version=3 Oct 13 05:27:52.183177 systemd[1]: Started cri-containerd-37acc58fbfdfe7fb9990f2e1f59f67486e8e0d433b22754b25ec0219ac119f9f.scope - libcontainer container 37acc58fbfdfe7fb9990f2e1f59f67486e8e0d433b22754b25ec0219ac119f9f. Oct 13 05:27:52.375103 containerd[1614]: time="2025-10-13T05:27:52.375051862Z" level=info msg="StartContainer for \"37acc58fbfdfe7fb9990f2e1f59f67486e8e0d433b22754b25ec0219ac119f9f\" returns successfully" Oct 13 05:27:52.376616 containerd[1614]: time="2025-10-13T05:27:52.376563427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:27:55.374646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303726588.mount: Deactivated successfully. Oct 13 05:27:55.395870 containerd[1614]: time="2025-10-13T05:27:55.395796711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:55.398030 containerd[1614]: time="2025-10-13T05:27:55.396729733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:27:55.398120 containerd[1614]: time="2025-10-13T05:27:55.398092712Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:55.400555 containerd[1614]: time="2025-10-13T05:27:55.400509172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:55.401528 containerd[1614]: time="2025-10-13T05:27:55.401490416Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.024889989s" Oct 13 05:27:55.401528 containerd[1614]: time="2025-10-13T05:27:55.401529490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:27:55.406159 containerd[1614]: time="2025-10-13T05:27:55.406109200Z" level=info msg="CreateContainer within sandbox \"605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:27:55.415178 containerd[1614]: time="2025-10-13T05:27:55.415112673Z" level=info msg="Container 9f827bf1dc16c5e48557be2bd9fd8fcbdcc5860c5f9884f412cb8da927fd7b25: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:55.423570 containerd[1614]: time="2025-10-13T05:27:55.423529622Z" level=info msg="CreateContainer within sandbox \"605a5969d700514d56245fdbf821ed83a76a8d26d99275909df4b37ed05f12e2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9f827bf1dc16c5e48557be2bd9fd8fcbdcc5860c5f9884f412cb8da927fd7b25\"" Oct 13 05:27:55.425179 containerd[1614]: time="2025-10-13T05:27:55.424103081Z" level=info msg="StartContainer for 
\"9f827bf1dc16c5e48557be2bd9fd8fcbdcc5860c5f9884f412cb8da927fd7b25\"" Oct 13 05:27:55.425333 containerd[1614]: time="2025-10-13T05:27:55.425301267Z" level=info msg="connecting to shim 9f827bf1dc16c5e48557be2bd9fd8fcbdcc5860c5f9884f412cb8da927fd7b25" address="unix:///run/containerd/s/c7c4551198da380efe074e5b86ae05b8affb83446c65b1e625ed9216826556bf" protocol=ttrpc version=3 Oct 13 05:27:55.450098 systemd[1]: Started cri-containerd-9f827bf1dc16c5e48557be2bd9fd8fcbdcc5860c5f9884f412cb8da927fd7b25.scope - libcontainer container 9f827bf1dc16c5e48557be2bd9fd8fcbdcc5860c5f9884f412cb8da927fd7b25. Oct 13 05:27:55.508073 containerd[1614]: time="2025-10-13T05:27:55.508012166Z" level=info msg="StartContainer for \"9f827bf1dc16c5e48557be2bd9fd8fcbdcc5860c5f9884f412cb8da927fd7b25\" returns successfully" Oct 13 05:27:55.516257 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:54406.service - OpenSSH per-connection server daemon (10.0.0.1:54406). Oct 13 05:27:55.699677 sshd[4838]: Accepted publickey for core from 10.0.0.1 port 54406 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:55.702160 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:55.723069 systemd-logind[1585]: New session 12 of user core. Oct 13 05:27:55.734261 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:27:55.790772 kubelet[2819]: I1013 05:27:55.790683 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5579d757c9-d7xv8" podStartSLOduration=4.163521926 podStartE2EDuration="8.790664273s" podCreationTimestamp="2025-10-13 05:27:47 +0000 UTC" firstStartedPulling="2025-10-13 05:27:50.775270902 +0000 UTC m=+77.323862815" lastFinishedPulling="2025-10-13 05:27:55.402413249 +0000 UTC m=+81.951005162" observedRunningTime="2025-10-13 05:27:55.790492256 +0000 UTC m=+82.339084169" watchObservedRunningTime="2025-10-13 05:27:55.790664273 +0000 UTC m=+82.339256186" Oct 13 05:27:56.102964 sshd[4847]: Connection closed by 10.0.0.1 port 54406 Oct 13 05:27:56.103373 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:56.109732 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:54406.service: Deactivated successfully. Oct 13 05:27:56.112431 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:27:56.113487 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:27:56.115727 systemd-logind[1585]: Removed session 12. 
Oct 13 05:27:57.374108 containerd[1614]: time="2025-10-13T05:27:57.374047229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-mm55g,Uid:325a437e-4d94-40e0-ba26-f0ae3ef19e76,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:57.375796 containerd[1614]: time="2025-10-13T05:27:57.375739553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gggtg,Uid:762f9e88-d9dd-4f94-bef9-b3a498513c70,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:57.521910 systemd-networkd[1513]: calic57e966d9d4: Link UP Oct 13 05:27:57.522597 systemd-networkd[1513]: calic57e966d9d4: Gained carrier Oct 13 05:27:57.541634 containerd[1614]: 2025-10-13 05:27:57.431 [INFO][4870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gggtg-eth0 csi-node-driver- calico-system 762f9e88-d9dd-4f94-bef9-b3a498513c70 790 0 2025-10-13 05:27:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gggtg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic57e966d9d4 [] [] }} ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Namespace="calico-system" Pod="csi-node-driver-gggtg" WorkloadEndpoint="localhost-k8s-csi--node--driver--gggtg-" Oct 13 05:27:57.541634 containerd[1614]: 2025-10-13 05:27:57.431 [INFO][4870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Namespace="calico-system" Pod="csi-node-driver-gggtg" WorkloadEndpoint="localhost-k8s-csi--node--driver--gggtg-eth0" Oct 13 05:27:57.541634 containerd[1614]: 2025-10-13 05:27:57.468 [INFO][4896] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" HandleID="k8s-pod-network.c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Workload="localhost-k8s-csi--node--driver--gggtg-eth0" Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.468 [INFO][4896] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" HandleID="k8s-pod-network.c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Workload="localhost-k8s-csi--node--driver--gggtg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c13e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gggtg", "timestamp":"2025-10-13 05:27:57.46827168 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.468 [INFO][4896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.468 [INFO][4896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.468 [INFO][4896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.479 [INFO][4896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" host="localhost" Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.486 [INFO][4896] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.492 [INFO][4896] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.495 [INFO][4896] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.497 [INFO][4896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:57.541912 containerd[1614]: 2025-10-13 05:27:57.497 [INFO][4896] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" host="localhost" Oct 13 05:27:57.542301 containerd[1614]: 2025-10-13 05:27:57.499 [INFO][4896] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975 Oct 13 05:27:57.542301 containerd[1614]: 2025-10-13 05:27:57.506 [INFO][4896] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" host="localhost" Oct 13 05:27:57.542301 containerd[1614]: 2025-10-13 05:27:57.513 [INFO][4896] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" host="localhost" Oct 13 05:27:57.542301 containerd[1614]: 2025-10-13 05:27:57.513 [INFO][4896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" host="localhost" Oct 13 05:27:57.542301 containerd[1614]: 2025-10-13 05:27:57.513 [INFO][4896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
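The IPAM trace above shows the plugin confirming the host's affinity for block 192.168.88.128/26 and claiming 192.168.88.130 from it; the sandboxes created later in this section draw .131 through .134 from the same block. A short Python check of that block arithmetic, a sketch using only addresses that appear in this log:

```python
import ipaddress

# The affine block from the ipam.go entries above.
block = ipaddress.ip_network("192.168.88.128/26")
print(block.num_addresses, block[0], block[-1])   # 64 192.168.88.128 192.168.88.191

# Addresses claimed in this log section; all must fall inside the block.
for ip in ("192.168.88.130", "192.168.88.131", "192.168.88.132",
           "192.168.88.133", "192.168.88.134"):
    assert ipaddress.ip_address(ip) in block
```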
Oct 13 05:27:57.542301 containerd[1614]: 2025-10-13 05:27:57.513 [INFO][4896] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" HandleID="k8s-pod-network.c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Workload="localhost-k8s-csi--node--driver--gggtg-eth0" Oct 13 05:27:57.542478 containerd[1614]: 2025-10-13 05:27:57.517 [INFO][4870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Namespace="calico-system" Pod="csi-node-driver-gggtg" WorkloadEndpoint="localhost-k8s-csi--node--driver--gggtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gggtg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"762f9e88-d9dd-4f94-bef9-b3a498513c70", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gggtg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic57e966d9d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:57.542556 containerd[1614]: 2025-10-13 05:27:57.517 [INFO][4870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Namespace="calico-system" Pod="csi-node-driver-gggtg" WorkloadEndpoint="localhost-k8s-csi--node--driver--gggtg-eth0" Oct 13 05:27:57.542556 containerd[1614]: 2025-10-13 05:27:57.518 [INFO][4870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic57e966d9d4 ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Namespace="calico-system" Pod="csi-node-driver-gggtg" WorkloadEndpoint="localhost-k8s-csi--node--driver--gggtg-eth0" Oct 13 05:27:57.542556 containerd[1614]: 2025-10-13 05:27:57.523 [INFO][4870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Namespace="calico-system" Pod="csi-node-driver-gggtg" WorkloadEndpoint="localhost-k8s-csi--node--driver--gggtg-eth0" Oct 13 05:27:57.542655 containerd[1614]: 2025-10-13 05:27:57.524 [INFO][4870] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Namespace="calico-system" Pod="csi-node-driver-gggtg" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gggtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gggtg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"762f9e88-d9dd-4f94-bef9-b3a498513c70", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975", Pod:"csi-node-driver-gggtg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic57e966d9d4", MAC:"ae:66:09:fd:e8:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:57.542731 containerd[1614]: 2025-10-13 05:27:57.536 [INFO][4870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" Namespace="calico-system" Pod="csi-node-driver-gggtg" WorkloadEndpoint="localhost-k8s-csi--node--driver--gggtg-eth0" Oct 13 05:27:57.576658 containerd[1614]: time="2025-10-13T05:27:57.576578687Z" level=info msg="connecting to shim c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975" address="unix:///run/containerd/s/32d09566991c5be9ba716ac01aab102c5cb005a889cdb126424d496c11110395" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:57.615216 systemd[1]: Started cri-containerd-c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975.scope - libcontainer container c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975. 
Oct 13 05:27:57.628238 systemd-networkd[1513]: caliec38cad5c4f: Link UP Oct 13 05:27:57.629374 systemd-networkd[1513]: caliec38cad5c4f: Gained carrier Oct 13 05:27:57.645852 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:57.652386 containerd[1614]: 2025-10-13 05:27:57.424 [INFO][4863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0 calico-apiserver-57ccb6c94f- calico-apiserver 325a437e-4d94-40e0-ba26-f0ae3ef19e76 951 0 2025-10-13 05:26:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57ccb6c94f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57ccb6c94f-mm55g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliec38cad5c4f [] [] }} ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-mm55g" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-" Oct 13 05:27:57.652386 containerd[1614]: 2025-10-13 05:27:57.424 [INFO][4863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-mm55g" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:27:57.652386 containerd[1614]: 2025-10-13 05:27:57.469 [INFO][4894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.470 [INFO][4894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003420c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57ccb6c94f-mm55g", "timestamp":"2025-10-13 05:27:57.469289913 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.470 [INFO][4894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.513 [INFO][4894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.513 [INFO][4894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.580 [INFO][4894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" host="localhost" Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.590 [INFO][4894] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.597 [INFO][4894] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.599 [INFO][4894] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.602 [INFO][4894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:57.652664 containerd[1614]: 2025-10-13 05:27:57.602 [INFO][4894] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" host="localhost" Oct 13 05:27:57.653121 containerd[1614]: 2025-10-13 05:27:57.604 [INFO][4894] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5 Oct 13 05:27:57.653121 containerd[1614]: 2025-10-13 05:27:57.610 [INFO][4894] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" host="localhost" Oct 13 05:27:57.653121 containerd[1614]: 2025-10-13 05:27:57.617 [INFO][4894] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" host="localhost" Oct 13 05:27:57.653121 containerd[1614]: 2025-10-13 05:27:57.618 [INFO][4894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" host="localhost" Oct 13 05:27:57.653121 containerd[1614]: 2025-10-13 05:27:57.618 [INFO][4894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:27:57.653121 containerd[1614]: 2025-10-13 05:27:57.618 [INFO][4894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:27:57.653322 containerd[1614]: 2025-10-13 05:27:57.623 [INFO][4863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-mm55g" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0", GenerateName:"calico-apiserver-57ccb6c94f-", Namespace:"calico-apiserver", SelfLink:"", UID:"325a437e-4d94-40e0-ba26-f0ae3ef19e76", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57ccb6c94f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57ccb6c94f-mm55g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec38cad5c4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:57.653438 containerd[1614]: 2025-10-13 05:27:57.625 [INFO][4863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-mm55g" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:27:57.653438 containerd[1614]: 2025-10-13 05:27:57.625 [INFO][4863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec38cad5c4f ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-mm55g" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:27:57.653438 containerd[1614]: 2025-10-13 05:27:57.628 [INFO][4863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-mm55g" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:27:57.653532 containerd[1614]: 2025-10-13 05:27:57.629 [INFO][4863] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-mm55g" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0", GenerateName:"calico-apiserver-57ccb6c94f-", Namespace:"calico-apiserver", SelfLink:"", UID:"325a437e-4d94-40e0-ba26-f0ae3ef19e76", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57ccb6c94f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5", Pod:"calico-apiserver-57ccb6c94f-mm55g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec38cad5c4f", MAC:"3a:41:d5:8d:79:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:57.653622 containerd[1614]: 2025-10-13 05:27:57.645 [INFO][4863] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-mm55g" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:27:57.673151 containerd[1614]: time="2025-10-13T05:27:57.673082104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gggtg,Uid:762f9e88-d9dd-4f94-bef9-b3a498513c70,Namespace:calico-system,Attempt:0,} returns sandbox id \"c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975\"" Oct 13 05:27:57.675387 containerd[1614]: time="2025-10-13T05:27:57.675345812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:27:57.688423 containerd[1614]: time="2025-10-13T05:27:57.688360681Z" level=info msg="connecting to shim af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" address="unix:///run/containerd/s/feae9f96e497f7a6f69cbf8bd556a9ff0670ef9ba2f3abec668bc636950625b5" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:57.726711 systemd[1]: Started cri-containerd-af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5.scope - libcontainer container af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5. 
Oct 13 05:27:57.756409 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:57.802413 containerd[1614]: time="2025-10-13T05:27:57.802360666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-mm55g,Uid:325a437e-4d94-40e0-ba26-f0ae3ef19e76,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\"" Oct 13 05:27:58.374456 kubelet[2819]: E1013 05:27:58.374402 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:58.511413 kubelet[2819]: E1013 05:27:58.511358 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:58.511853 containerd[1614]: time="2025-10-13T05:27:58.511815599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-84qpn,Uid:0d256d4f-6316-488a-8e01-b25dcf74417b,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:58.538081 systemd-networkd[1513]: calic57e966d9d4: Gained IPv6LL Oct 13 05:27:59.077092 systemd-networkd[1513]: calif4891909924: Link UP Oct 13 05:27:59.077884 systemd-networkd[1513]: calif4891909924: Gained carrier Oct 13 05:27:59.178411 systemd-networkd[1513]: caliec38cad5c4f: Gained IPv6LL Oct 13 05:27:59.267500 containerd[1614]: 2025-10-13 05:27:58.916 [INFO][5020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--84qpn-eth0 coredns-66bc5c9577- kube-system 0d256d4f-6316-488a-8e01-b25dcf74417b 947 0 2025-10-13 05:26:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-84qpn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif4891909924 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Namespace="kube-system" Pod="coredns-66bc5c9577-84qpn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84qpn-" Oct 13 05:27:59.267500 containerd[1614]: 2025-10-13 05:27:58.916 [INFO][5020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Namespace="kube-system" Pod="coredns-66bc5c9577-84qpn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" Oct 13 05:27:59.267500 containerd[1614]: 2025-10-13 05:27:58.947 [INFO][5034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" HandleID="k8s-pod-network.72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Workload="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.947 [INFO][5034] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" HandleID="k8s-pod-network.72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Workload="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-84qpn", "timestamp":"2025-10-13 05:27:58.947566278 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.947 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.947 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.947 [INFO][5034] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.956 [INFO][5034] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" host="localhost" Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.963 [INFO][5034] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.970 [INFO][5034] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.972 [INFO][5034] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.974 [INFO][5034] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:59.267847 containerd[1614]: 2025-10-13 05:27:58.975 [INFO][5034] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" host="localhost" Oct 13 05:27:59.268203 containerd[1614]: 2025-10-13 05:27:58.976 [INFO][5034] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e Oct 13 05:27:59.268203 containerd[1614]: 2025-10-13 05:27:59.037 [INFO][5034] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" host="localhost" Oct 13 05:27:59.268203 containerd[1614]: 2025-10-13 05:27:59.071 [INFO][5034] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" host="localhost" Oct 13 05:27:59.268203 containerd[1614]: 2025-10-13 05:27:59.071 [INFO][5034] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" host="localhost" Oct 13 05:27:59.268203 containerd[1614]: 2025-10-13 05:27:59.071 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
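In the coredns workload-endpoint dump that follows, the container ports are logged in hex (Port:0x35, 0x23c1, 0x1f90, 0x1ff5). Decoded, they are the usual CoreDNS DNS, metrics, and probe ports; a one-line check:

```python
# Port values copied from the WorkloadEndpointPort entries in the dump below (hex as logged).
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1,
         "liveness-probe": 0x1f90, "readiness-probe": 0x1ff5}
print(ports)   # {'dns': 53, 'dns-tcp': 53, 'metrics': 9153, 'liveness-probe': 8080, 'readiness-probe': 8181}
```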
Oct 13 05:27:59.268203 containerd[1614]: 2025-10-13 05:27:59.071 [INFO][5034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" HandleID="k8s-pod-network.72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Workload="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" Oct 13 05:27:59.268402 containerd[1614]: 2025-10-13 05:27:59.074 [INFO][5020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Namespace="kube-system" Pod="coredns-66bc5c9577-84qpn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--84qpn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0d256d4f-6316-488a-8e01-b25dcf74417b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-84qpn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4891909924", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:59.268402 containerd[1614]: 2025-10-13 05:27:59.074 [INFO][5020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Namespace="kube-system" Pod="coredns-66bc5c9577-84qpn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" Oct 13 05:27:59.268402 containerd[1614]: 2025-10-13 05:27:59.075 [INFO][5020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4891909924 ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Namespace="kube-system" Pod="coredns-66bc5c9577-84qpn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" Oct 13 05:27:59.268402 containerd[1614]: 2025-10-13 05:27:59.078 
[INFO][5020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Namespace="kube-system" Pod="coredns-66bc5c9577-84qpn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" Oct 13 05:27:59.268402 containerd[1614]: 2025-10-13 05:27:59.079 [INFO][5020] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Namespace="kube-system" Pod="coredns-66bc5c9577-84qpn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--84qpn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0d256d4f-6316-488a-8e01-b25dcf74417b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e", Pod:"coredns-66bc5c9577-84qpn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4891909924", MAC:"da:9c:b2:f3:f1:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:59.268402 containerd[1614]: 2025-10-13 05:27:59.263 [INFO][5020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" Namespace="kube-system" Pod="coredns-66bc5c9577-84qpn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--84qpn-eth0" Oct 13 05:27:59.371225 kubelet[2819]: E1013 05:27:59.371155 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:00.272110 systemd-networkd[1513]: calif4891909924: Gained IPv6LL Oct 13 05:28:00.366821 containerd[1614]: time="2025-10-13T05:28:00.366753952Z" level=info msg="connecting to shim 
72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e" address="unix:///run/containerd/s/48569a4c4e7b21b41037d96764b47a836243a863b6e2d2fc2a66d17744985946" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:00.399081 systemd[1]: Started cri-containerd-72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e.scope - libcontainer container 72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e. Oct 13 05:28:00.414193 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:00.530645 containerd[1614]: time="2025-10-13T05:28:00.530500175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-brhgw,Uid:1479d4a9-8275-4077-8b49-3a9f1a6a6634,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:28:00.623362 containerd[1614]: time="2025-10-13T05:28:00.623291239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-84qpn,Uid:0d256d4f-6316-488a-8e01-b25dcf74417b,Namespace:kube-system,Attempt:0,} returns sandbox id \"72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e\"" Oct 13 05:28:00.624508 kubelet[2819]: E1013 05:28:00.624462 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:00.635476 containerd[1614]: time="2025-10-13T05:28:00.635433364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c64875bd7-9fqrj,Uid:d4cacd58-560a-4299-8f08-e71296199ad7,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:28:00.706778 containerd[1614]: time="2025-10-13T05:28:00.706716325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hkddf,Uid:b0c63581-1952-443f-8230-1187b1b3acad,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:00.707585 containerd[1614]: time="2025-10-13T05:28:00.707534968Z" level=info msg="CreateContainer within sandbox \"72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:28:00.721704 containerd[1614]: time="2025-10-13T05:28:00.720095447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5968f678cf-sx9rw,Uid:4a651b8a-51d0-42d7-b97b-59845495263f,Namespace:calico-system,Attempt:0,}" Oct 13 05:28:00.729953 kubelet[2819]: E1013 05:28:00.729152 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:00.730108 containerd[1614]: time="2025-10-13T05:28:00.729799967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hsmgj,Uid:190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8,Namespace:kube-system,Attempt:0,}" Oct 13 05:28:00.853248 containerd[1614]: time="2025-10-13T05:28:00.853142280Z" level=info msg="Container b46bf26951ee85d9e429b15c5b8b082d4f3a4947327f4a09e7a9374b43af904c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:00.861721 containerd[1614]: time="2025-10-13T05:28:00.861662503Z" level=info msg="CreateContainer within sandbox \"72620fb3d778ebba024c98b140583deae55cab6429e99e191db8107c98182f7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b46bf26951ee85d9e429b15c5b8b082d4f3a4947327f4a09e7a9374b43af904c\"" Oct 13 05:28:00.863949 containerd[1614]: time="2025-10-13T05:28:00.862725701Z" level=info msg="StartContainer for 
\"b46bf26951ee85d9e429b15c5b8b082d4f3a4947327f4a09e7a9374b43af904c\"" Oct 13 05:28:00.863949 containerd[1614]: time="2025-10-13T05:28:00.863793886Z" level=info msg="connecting to shim b46bf26951ee85d9e429b15c5b8b082d4f3a4947327f4a09e7a9374b43af904c" address="unix:///run/containerd/s/48569a4c4e7b21b41037d96764b47a836243a863b6e2d2fc2a66d17744985946" protocol=ttrpc version=3 Oct 13 05:28:00.901285 systemd[1]: Started cri-containerd-b46bf26951ee85d9e429b15c5b8b082d4f3a4947327f4a09e7a9374b43af904c.scope - libcontainer container b46bf26951ee85d9e429b15c5b8b082d4f3a4947327f4a09e7a9374b43af904c. Oct 13 05:28:01.041715 systemd-networkd[1513]: cali98b80b969b0: Link UP Oct 13 05:28:01.047023 systemd-networkd[1513]: cali98b80b969b0: Gained carrier Oct 13 05:28:01.061299 containerd[1614]: time="2025-10-13T05:28:01.061211891Z" level=info msg="StartContainer for \"b46bf26951ee85d9e429b15c5b8b082d4f3a4947327f4a09e7a9374b43af904c\" returns successfully" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.857 [INFO][5111] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0 calico-apiserver-5c64875bd7- calico-apiserver d4cacd58-560a-4299-8f08-e71296199ad7 949 0 2025-10-13 05:26:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c64875bd7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c64875bd7-9fqrj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali98b80b969b0 [] [] }} ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Namespace="calico-apiserver" Pod="calico-apiserver-5c64875bd7-9fqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.858 [INFO][5111] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Namespace="calico-apiserver" Pod="calico-apiserver-5c64875bd7-9fqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.951 [INFO][5179] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" HandleID="k8s-pod-network.30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Workload="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.951 [INFO][5179] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" HandleID="k8s-pod-network.30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Workload="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c64875bd7-9fqrj", "timestamp":"2025-10-13 05:28:00.951288253 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:01.083289 
containerd[1614]: 2025-10-13 05:28:00.952 [INFO][5179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.952 [INFO][5179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.952 [INFO][5179] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.967 [INFO][5179] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" host="localhost" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.978 [INFO][5179] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.990 [INFO][5179] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.994 [INFO][5179] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.999 [INFO][5179] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:00.999 [INFO][5179] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" host="localhost" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:01.009 [INFO][5179] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14 Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:01.018 [INFO][5179] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" host="localhost" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:01.029 [INFO][5179] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" host="localhost" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:01.029 [INFO][5179] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" host="localhost" Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:01.029 [INFO][5179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
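The kubelet dns.go errors scattered through this section ("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") mean the node's resolv.conf lists more nameservers than the three the resolver will use, so kubelet keeps only the first three. A small sketch of that truncation; the four-entry resolv.conf below is an assumption for illustration, the log only shows the three servers that survived:

```python
# Hypothetical resolv.conf content; the log only shows the three servers kubelet kept.
resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""

MAX_NAMESERVERS = 3  # effective glibc resolver / kubelet limit

nameservers = [line.split()[1] for line in resolv_conf.splitlines()
               if line.startswith("nameserver")]
applied = nameservers[:MAX_NAMESERVERS]
print("applied nameserver line is:", " ".join(applied))
# applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8
```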
Oct 13 05:28:01.083289 containerd[1614]: 2025-10-13 05:28:01.029 [INFO][5179] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" HandleID="k8s-pod-network.30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Workload="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" Oct 13 05:28:01.084982 containerd[1614]: 2025-10-13 05:28:01.036 [INFO][5111] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Namespace="calico-apiserver" Pod="calico-apiserver-5c64875bd7-9fqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0", GenerateName:"calico-apiserver-5c64875bd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4cacd58-560a-4299-8f08-e71296199ad7", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c64875bd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c64875bd7-9fqrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98b80b969b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.084982 containerd[1614]: 2025-10-13 05:28:01.036 [INFO][5111] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Namespace="calico-apiserver" Pod="calico-apiserver-5c64875bd7-9fqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" Oct 13 05:28:01.084982 containerd[1614]: 2025-10-13 05:28:01.036 [INFO][5111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98b80b969b0 ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Namespace="calico-apiserver" Pod="calico-apiserver-5c64875bd7-9fqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" Oct 13 05:28:01.084982 containerd[1614]: 2025-10-13 05:28:01.052 [INFO][5111] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Namespace="calico-apiserver" Pod="calico-apiserver-5c64875bd7-9fqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" Oct 13 05:28:01.084982 containerd[1614]: 2025-10-13 05:28:01.052 [INFO][5111] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Namespace="calico-apiserver" Pod="calico-apiserver-5c64875bd7-9fqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0", GenerateName:"calico-apiserver-5c64875bd7-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4cacd58-560a-4299-8f08-e71296199ad7", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c64875bd7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14", Pod:"calico-apiserver-5c64875bd7-9fqrj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98b80b969b0", MAC:"ce:1f:b7:40:ad:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.084982 containerd[1614]: 2025-10-13 05:28:01.078 [INFO][5111] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" Namespace="calico-apiserver" Pod="calico-apiserver-5c64875bd7-9fqrj" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c64875bd7--9fqrj-eth0" Oct 13 05:28:01.121815 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:43686.service - OpenSSH per-connection server daemon (10.0.0.1:43686). 
Oct 13 05:28:01.143298 containerd[1614]: time="2025-10-13T05:28:01.142725744Z" level=info msg="connecting to shim 30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14" address="unix:///run/containerd/s/bf77696ea41f3abda06042e8fdfff865b1d0afb6c4694269189d5bd345559639" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:01.154095 systemd-networkd[1513]: cali1102f098561: Link UP Oct 13 05:28:01.154519 systemd-networkd[1513]: cali1102f098561: Gained carrier Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:00.863 [INFO][5100] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0 calico-apiserver-57ccb6c94f- calico-apiserver 1479d4a9-8275-4077-8b49-3a9f1a6a6634 952 0 2025-10-13 05:26:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57ccb6c94f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57ccb6c94f-brhgw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1102f098561 [] [] }} ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-brhgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:00.863 [INFO][5100] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-brhgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:00.969 [INFO][5193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" HandleID="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:00.969 [INFO][5193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" HandleID="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57ccb6c94f-brhgw", "timestamp":"2025-10-13 05:28:00.969058104 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:00.969 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.030 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.031 [INFO][5193] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.071 [INFO][5193] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" host="localhost" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.081 [INFO][5193] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.098 [INFO][5193] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.102 [INFO][5193] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.109 [INFO][5193] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.109 [INFO][5193] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" host="localhost" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.116 [INFO][5193] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.126 [INFO][5193] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" host="localhost" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.141 [INFO][5193] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" host="localhost" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.141 [INFO][5193] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" host="localhost" Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.141 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
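Annotation: the ipam entries above claim 192.168.88.134/26 for the brhgw pod out of the host-affine block 192.168.88.128/26. A small standard-library check of that arithmetic, with the values copied from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // host-affine block from the log
        ip := netip.MustParseAddr("192.168.88.134")         // address claimed for the brhgw pod

        fmt.Println("block contains ip:", block.Contains(ip))    // true
        fmt.Println("addresses in a /26:", 1<<(32-block.Bits())) // 64
    }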
Oct 13 05:28:01.179465 containerd[1614]: 2025-10-13 05:28:01.141 [INFO][5193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" HandleID="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" Oct 13 05:28:01.180236 containerd[1614]: 2025-10-13 05:28:01.147 [INFO][5100] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-brhgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0", GenerateName:"calico-apiserver-57ccb6c94f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1479d4a9-8275-4077-8b49-3a9f1a6a6634", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57ccb6c94f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57ccb6c94f-brhgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1102f098561", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.180236 containerd[1614]: 2025-10-13 05:28:01.147 [INFO][5100] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-brhgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" Oct 13 05:28:01.180236 containerd[1614]: 2025-10-13 05:28:01.147 [INFO][5100] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1102f098561 ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-brhgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" Oct 13 05:28:01.180236 containerd[1614]: 2025-10-13 05:28:01.151 [INFO][5100] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-brhgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" Oct 13 05:28:01.180236 containerd[1614]: 2025-10-13 05:28:01.152 [INFO][5100] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-brhgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0", GenerateName:"calico-apiserver-57ccb6c94f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1479d4a9-8275-4077-8b49-3a9f1a6a6634", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57ccb6c94f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a", Pod:"calico-apiserver-57ccb6c94f-brhgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1102f098561", MAC:"5a:b5:75:62:ab:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.180236 containerd[1614]: 2025-10-13 05:28:01.174 [INFO][5100] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Namespace="calico-apiserver" Pod="calico-apiserver-57ccb6c94f-brhgw" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0" Oct 13 05:28:01.202169 systemd[1]: Started cri-containerd-30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14.scope - libcontainer container 30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14. Oct 13 05:28:01.240222 containerd[1614]: time="2025-10-13T05:28:01.239880602Z" level=info msg="connecting to shim 2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" address="unix:///run/containerd/s/a877d3941cbb29a350d9556c7be231b065a03b994833b93efa807d5c5c3ac04c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:01.252975 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:01.259176 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 43686 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:01.262826 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:01.274055 systemd-logind[1585]: New session 13 of user core. Oct 13 05:28:01.279434 systemd-networkd[1513]: cali982c04110b1: Link UP Oct 13 05:28:01.281051 systemd-networkd[1513]: cali982c04110b1: Gained carrier Oct 13 05:28:01.281168 systemd[1]: Started session-13.scope - Session 13 of User core. 
Oct 13 05:28:01.307222 systemd[1]: Started cri-containerd-2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a.scope - libcontainer container 2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a. Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:00.882 [INFO][5145] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0 calico-kube-controllers-5968f678cf- calico-system 4a651b8a-51d0-42d7-b97b-59845495263f 948 0 2025-10-13 05:27:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5968f678cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5968f678cf-sx9rw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali982c04110b1 [] [] }} ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Namespace="calico-system" Pod="calico-kube-controllers-5968f678cf-sx9rw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:00.882 [INFO][5145] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Namespace="calico-system" Pod="calico-kube-controllers-5968f678cf-sx9rw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:00.969 [INFO][5204] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" HandleID="k8s-pod-network.cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Workload="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:00.971 [INFO][5204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" HandleID="k8s-pod-network.cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Workload="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5968f678cf-sx9rw", "timestamp":"2025-10-13 05:28:00.967722911 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:00.971 [INFO][5204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.141 [INFO][5204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.142 [INFO][5204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.176 [INFO][5204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" host="localhost" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.192 [INFO][5204] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.208 [INFO][5204] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.213 [INFO][5204] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.224 [INFO][5204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.225 [INFO][5204] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" host="localhost" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.228 [INFO][5204] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577 Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.239 [INFO][5204] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" host="localhost" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.258 [INFO][5204] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" host="localhost" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.266 [INFO][5204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" host="localhost" Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.266 [INFO][5204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:28:01.321359 containerd[1614]: 2025-10-13 05:28:01.266 [INFO][5204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" HandleID="k8s-pod-network.cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Workload="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" Oct 13 05:28:01.334893 containerd[1614]: 2025-10-13 05:28:01.272 [INFO][5145] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Namespace="calico-system" Pod="calico-kube-controllers-5968f678cf-sx9rw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0", GenerateName:"calico-kube-controllers-5968f678cf-", Namespace:"calico-system", SelfLink:"", UID:"4a651b8a-51d0-42d7-b97b-59845495263f", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5968f678cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5968f678cf-sx9rw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali982c04110b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.334893 containerd[1614]: 2025-10-13 05:28:01.272 [INFO][5145] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Namespace="calico-system" Pod="calico-kube-controllers-5968f678cf-sx9rw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" Oct 13 05:28:01.334893 containerd[1614]: 2025-10-13 05:28:01.272 [INFO][5145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali982c04110b1 ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Namespace="calico-system" Pod="calico-kube-controllers-5968f678cf-sx9rw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" Oct 13 05:28:01.334893 containerd[1614]: 2025-10-13 05:28:01.285 [INFO][5145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Namespace="calico-system" Pod="calico-kube-controllers-5968f678cf-sx9rw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" Oct 13 05:28:01.334893 containerd[1614]: 2025-10-13 05:28:01.286 [INFO][5145] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Namespace="calico-system" Pod="calico-kube-controllers-5968f678cf-sx9rw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0", GenerateName:"calico-kube-controllers-5968f678cf-", Namespace:"calico-system", SelfLink:"", UID:"4a651b8a-51d0-42d7-b97b-59845495263f", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5968f678cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577", Pod:"calico-kube-controllers-5968f678cf-sx9rw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali982c04110b1", MAC:"82:d4:88:14:4c:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.334893 containerd[1614]: 2025-10-13 05:28:01.313 [INFO][5145] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" Namespace="calico-system" Pod="calico-kube-controllers-5968f678cf-sx9rw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0" Oct 13 05:28:01.395734 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:01.601875 containerd[1614]: time="2025-10-13T05:28:01.601804544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c64875bd7-9fqrj,Uid:d4cacd58-560a-4299-8f08-e71296199ad7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14\"" Oct 13 05:28:01.635628 sshd[5343]: Connection closed by 10.0.0.1 port 43686 Oct 13 05:28:01.634813 systemd-networkd[1513]: calif3ee1aefc8a: Link UP Oct 13 05:28:01.633201 sshd-session[5257]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:01.637239 systemd-networkd[1513]: calif3ee1aefc8a: Gained carrier Oct 13 05:28:01.643590 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:43686.service: Deactivated successfully. Oct 13 05:28:01.646376 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:28:01.647563 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:28:01.649496 systemd-logind[1585]: Removed session 13. 
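Annotation: the WorkloadEndpoint object names in these entries (for example localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0) appear to follow the pattern node + "-k8s-" + pod name with dashes doubled + "-" + endpoint. The helper below is hypothetical (the real escaping lives in Calico's libraries), but it reproduces the names seen in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // wepName rebuilds the Calico WorkloadEndpoint object name as it appears
    // in the log: node + "-k8s-" + pod (with "-" escaped to "--") + "-" + endpoint.
    func wepName(node, pod, endpoint string) string {
        return node + "-k8s-" + strings.ReplaceAll(pod, "-", "--") + "-" + endpoint
    }

    func main() {
        fmt.Println(wepName("localhost", "calico-kube-controllers-5968f678cf-sx9rw", "eth0"))
        // localhost-k8s-calico--kube--controllers--5968f678cf--sx9rw-eth0
        fmt.Println(wepName("localhost", "goldmane-854f97d977-hkddf", "eth0"))
        // localhost-k8s-goldmane--854f97d977--hkddf-eth0
    }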
Oct 13 05:28:01.663055 kubelet[2819]: E1013 05:28:01.662963 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:00.852 [INFO][5124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--854f97d977--hkddf-eth0 goldmane-854f97d977- calico-system b0c63581-1952-443f-8230-1187b1b3acad 950 0 2025-10-13 05:27:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-854f97d977-hkddf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif3ee1aefc8a [] [] }} ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Namespace="calico-system" Pod="goldmane-854f97d977-hkddf" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hkddf-" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:00.854 [INFO][5124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Namespace="calico-system" Pod="goldmane-854f97d977-hkddf" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hkddf-eth0" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:00.974 [INFO][5177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" HandleID="k8s-pod-network.5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Workload="localhost-k8s-goldmane--854f97d977--hkddf-eth0" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:00.974 [INFO][5177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" HandleID="k8s-pod-network.5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Workload="localhost-k8s-goldmane--854f97d977--hkddf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139830), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-854f97d977-hkddf", "timestamp":"2025-10-13 05:28:00.97450631 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:00.974 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.266 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
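Annotation: the recurring kubelet warning "Nameserver limits exceeded … the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" means the node's resolv.conf lists more than the three nameservers the resolver (and therefore the pod DNS config kubelet generates) will use, so the extras are dropped. A simplified sketch of that truncation, assuming a hypothetical resolv.conf with four entries; it mirrors the behaviour, not kubelet's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // classic resolver limit that kubelet warns about

    func main() {
        // Hypothetical resolv.conf content with one nameserver too many.
        resolvConf := `nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 9.9.9.9
    search example.internal`

        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) == 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded, keeping: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }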
Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.266 [INFO][5177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.313 [INFO][5177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" host="localhost" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.338 [INFO][5177] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.352 [INFO][5177] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.359 [INFO][5177] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.363 [INFO][5177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.363 [INFO][5177] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" host="localhost" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.366 [INFO][5177] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025 Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.421 [INFO][5177] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" host="localhost" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.619 [INFO][5177] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" host="localhost" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.620 [INFO][5177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" host="localhost" Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.620 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:28:01.698539 containerd[1614]: 2025-10-13 05:28:01.621 [INFO][5177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" HandleID="k8s-pod-network.5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Workload="localhost-k8s-goldmane--854f97d977--hkddf-eth0" Oct 13 05:28:01.699437 containerd[1614]: 2025-10-13 05:28:01.628 [INFO][5124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Namespace="calico-system" Pod="goldmane-854f97d977-hkddf" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hkddf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--hkddf-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"b0c63581-1952-443f-8230-1187b1b3acad", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-854f97d977-hkddf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif3ee1aefc8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.699437 containerd[1614]: 2025-10-13 05:28:01.628 [INFO][5124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Namespace="calico-system" Pod="goldmane-854f97d977-hkddf" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hkddf-eth0" Oct 13 05:28:01.699437 containerd[1614]: 2025-10-13 05:28:01.628 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3ee1aefc8a ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Namespace="calico-system" Pod="goldmane-854f97d977-hkddf" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hkddf-eth0" Oct 13 05:28:01.699437 containerd[1614]: 2025-10-13 05:28:01.632 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Namespace="calico-system" Pod="goldmane-854f97d977-hkddf" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hkddf-eth0" Oct 13 05:28:01.699437 containerd[1614]: 2025-10-13 05:28:01.632 [INFO][5124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Namespace="calico-system" Pod="goldmane-854f97d977-hkddf" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hkddf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--hkddf-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"b0c63581-1952-443f-8230-1187b1b3acad", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025", Pod:"goldmane-854f97d977-hkddf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif3ee1aefc8a", MAC:"d2:36:bb:c1:c0:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.699437 containerd[1614]: 2025-10-13 05:28:01.695 [INFO][5124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" Namespace="calico-system" Pod="goldmane-854f97d977-hkddf" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--hkddf-eth0" Oct 13 05:28:01.789676 containerd[1614]: time="2025-10-13T05:28:01.789572824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57ccb6c94f-brhgw,Uid:1479d4a9-8275-4077-8b49-3a9f1a6a6634,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a\"" Oct 13 05:28:01.795712 kubelet[2819]: I1013 05:28:01.795590 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-84qpn" podStartSLOduration=81.795571482 podStartE2EDuration="1m21.795571482s" podCreationTimestamp="2025-10-13 05:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:28:01.795337638 +0000 UTC m=+88.343929551" watchObservedRunningTime="2025-10-13 05:28:01.795571482 +0000 UTC m=+88.344163395" Oct 13 05:28:01.807585 containerd[1614]: time="2025-10-13T05:28:01.807511888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:01.815375 containerd[1614]: time="2025-10-13T05:28:01.815247760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:28:01.818215 containerd[1614]: time="2025-10-13T05:28:01.818142120Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:01.826735 containerd[1614]: time="2025-10-13T05:28:01.826673611Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:01.827438 containerd[1614]: time="2025-10-13T05:28:01.827399608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 4.1520128s" Oct 13 05:28:01.827438 containerd[1614]: time="2025-10-13T05:28:01.827433603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:28:01.832619 containerd[1614]: time="2025-10-13T05:28:01.832326983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:28:01.841935 systemd-networkd[1513]: calia0d1c13ab38: Link UP Oct 13 05:28:01.843455 containerd[1614]: time="2025-10-13T05:28:01.843224452Z" level=info msg="connecting to shim 5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025" address="unix:///run/containerd/s/a8663a58dc7eda0f570ac322d4679d8577e7a215ef7980b184205f380df0e297" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:01.845097 systemd-networkd[1513]: calia0d1c13ab38: Gained carrier Oct 13 05:28:01.846104 containerd[1614]: time="2025-10-13T05:28:01.846042437Z" level=info msg="CreateContainer within sandbox \"c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:00.939 [INFO][5157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--hsmgj-eth0 coredns-66bc5c9577- kube-system 190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8 954 0 2025-10-13 05:26:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-hsmgj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia0d1c13ab38 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Namespace="kube-system" Pod="coredns-66bc5c9577-hsmgj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hsmgj-" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:00.940 [INFO][5157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Namespace="kube-system" Pod="coredns-66bc5c9577-hsmgj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.005 [INFO][5223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" HandleID="k8s-pod-network.b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Workload="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.005 [INFO][5223] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" HandleID="k8s-pod-network.b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Workload="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000223b60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-hsmgj", "timestamp":"2025-10-13 05:28:01.005284076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.005 [INFO][5223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.620 [INFO][5223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.621 [INFO][5223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.695 [INFO][5223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" host="localhost" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.799 [INFO][5223] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.807 [INFO][5223] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.810 [INFO][5223] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.813 [INFO][5223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.813 [INFO][5223] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" host="localhost" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.815 [INFO][5223] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.819 [INFO][5223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" host="localhost" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.831 [INFO][5223] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" host="localhost" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.831 [INFO][5223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" host="localhost" Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.831 [INFO][5223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:28:01.864864 containerd[1614]: 2025-10-13 05:28:01.831 [INFO][5223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" HandleID="k8s-pod-network.b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Workload="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" Oct 13 05:28:01.865513 containerd[1614]: 2025-10-13 05:28:01.836 [INFO][5157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Namespace="kube-system" Pod="coredns-66bc5c9577-hsmgj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--hsmgj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-hsmgj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0d1c13ab38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.865513 containerd[1614]: 2025-10-13 05:28:01.836 [INFO][5157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Namespace="kube-system" Pod="coredns-66bc5c9577-hsmgj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" Oct 13 05:28:01.865513 containerd[1614]: 2025-10-13 05:28:01.837 [INFO][5157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0d1c13ab38 ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Namespace="kube-system" Pod="coredns-66bc5c9577-hsmgj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" Oct 13 05:28:01.865513 containerd[1614]: 2025-10-13 05:28:01.846 
[INFO][5157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Namespace="kube-system" Pod="coredns-66bc5c9577-hsmgj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" Oct 13 05:28:01.865513 containerd[1614]: 2025-10-13 05:28:01.846 [INFO][5157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Namespace="kube-system" Pod="coredns-66bc5c9577-hsmgj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--hsmgj-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d", Pod:"coredns-66bc5c9577-hsmgj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia0d1c13ab38", MAC:"c2:0d:84:26:c1:1b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:28:01.865513 containerd[1614]: 2025-10-13 05:28:01.859 [INFO][5157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" Namespace="kube-system" Pod="coredns-66bc5c9577-hsmgj" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hsmgj-eth0" Oct 13 05:28:01.878130 containerd[1614]: time="2025-10-13T05:28:01.878077195Z" level=info msg="connecting to shim cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577" address="unix:///run/containerd/s/c2b3289a64e7fd346adbe96cb994b8c3267fce8050d322db75aa6617231b00ab" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:01.895904 containerd[1614]: time="2025-10-13T05:28:01.895838713Z" level=info msg="Container 
4461dad34664604d1788a5bab44b9eb345849cafbed5751c79c060efad6ae7d8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:01.912314 systemd[1]: Started cri-containerd-5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025.scope - libcontainer container 5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025. Oct 13 05:28:01.919786 containerd[1614]: time="2025-10-13T05:28:01.919721751Z" level=info msg="CreateContainer within sandbox \"c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4461dad34664604d1788a5bab44b9eb345849cafbed5751c79c060efad6ae7d8\"" Oct 13 05:28:01.925465 containerd[1614]: time="2025-10-13T05:28:01.925336670Z" level=info msg="StartContainer for \"4461dad34664604d1788a5bab44b9eb345849cafbed5751c79c060efad6ae7d8\"" Oct 13 05:28:01.927445 systemd[1]: Started cri-containerd-cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577.scope - libcontainer container cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577. Oct 13 05:28:01.931045 containerd[1614]: time="2025-10-13T05:28:01.931008127Z" level=info msg="connecting to shim 4461dad34664604d1788a5bab44b9eb345849cafbed5751c79c060efad6ae7d8" address="unix:///run/containerd/s/32d09566991c5be9ba716ac01aab102c5cb005a889cdb126424d496c11110395" protocol=ttrpc version=3 Oct 13 05:28:01.950435 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:01.954038 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:01.966505 containerd[1614]: time="2025-10-13T05:28:01.965995095Z" level=info msg="connecting to shim b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d" address="unix:///run/containerd/s/3f805d80a99ac8ef27e871dbcf270c9f04191021ea77966ff5842ca29ddb6d3f" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:28:01.985138 systemd[1]: Started cri-containerd-4461dad34664604d1788a5bab44b9eb345849cafbed5751c79c060efad6ae7d8.scope - libcontainer container 4461dad34664604d1788a5bab44b9eb345849cafbed5751c79c060efad6ae7d8. Oct 13 05:28:02.017201 systemd[1]: Started cri-containerd-b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d.scope - libcontainer container b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d. 
Oct 13 05:28:02.043357 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:28:02.046754 containerd[1614]: time="2025-10-13T05:28:02.046512989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-hkddf,Uid:b0c63581-1952-443f-8230-1187b1b3acad,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025\"" Oct 13 05:28:02.047466 containerd[1614]: time="2025-10-13T05:28:02.047424337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5968f678cf-sx9rw,Uid:4a651b8a-51d0-42d7-b97b-59845495263f,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577\"" Oct 13 05:28:02.079568 containerd[1614]: time="2025-10-13T05:28:02.079507663Z" level=info msg="StartContainer for \"4461dad34664604d1788a5bab44b9eb345849cafbed5751c79c060efad6ae7d8\" returns successfully" Oct 13 05:28:02.100702 containerd[1614]: time="2025-10-13T05:28:02.100598987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hsmgj,Uid:190de19b-7d74-4b26-8ad5-9c7bdc2cd0a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d\"" Oct 13 05:28:02.102389 kubelet[2819]: E1013 05:28:02.102249 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:02.107752 containerd[1614]: time="2025-10-13T05:28:02.107691726Z" level=info msg="CreateContainer within sandbox \"b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:28:02.120074 containerd[1614]: time="2025-10-13T05:28:02.120010735Z" level=info msg="Container d5e7177c87c558135841a57a260f7c20472f74f27db1ca9faea14924479be3fc: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:02.194767 containerd[1614]: time="2025-10-13T05:28:02.194621533Z" level=info msg="CreateContainer within sandbox \"b1d1d40bfc4c1192ed516662d4314b1e00f9176b91d37337ca942672a0a68f8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d5e7177c87c558135841a57a260f7c20472f74f27db1ca9faea14924479be3fc\"" Oct 13 05:28:02.195860 containerd[1614]: time="2025-10-13T05:28:02.195832159Z" level=info msg="StartContainer for \"d5e7177c87c558135841a57a260f7c20472f74f27db1ca9faea14924479be3fc\"" Oct 13 05:28:02.197857 containerd[1614]: time="2025-10-13T05:28:02.197802185Z" level=info msg="connecting to shim d5e7177c87c558135841a57a260f7c20472f74f27db1ca9faea14924479be3fc" address="unix:///run/containerd/s/3f805d80a99ac8ef27e871dbcf270c9f04191021ea77966ff5842ca29ddb6d3f" protocol=ttrpc version=3 Oct 13 05:28:02.231174 systemd[1]: Started cri-containerd-d5e7177c87c558135841a57a260f7c20472f74f27db1ca9faea14924479be3fc.scope - libcontainer container d5e7177c87c558135841a57a260f7c20472f74f27db1ca9faea14924479be3fc. Oct 13 05:28:02.250216 systemd-networkd[1513]: cali98b80b969b0: Gained IPv6LL Oct 13 05:28:02.350269 containerd[1614]: time="2025-10-13T05:28:02.350212646Z" level=info msg="StartContainer for \"d5e7177c87c558135841a57a260f7c20472f74f27db1ca9faea14924479be3fc\" returns successfully" Oct 13 05:28:02.366471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3051353257.mount: Deactivated successfully. 
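Annotation: the "var-lib-containerd-tmpmounts-containerd\x2dmount3051353257.mount: Deactivated successfully" entry shows systemd's path escaping for mount unit names: the leading "/" is dropped, remaining "/" become "-", and a literal "-" becomes "\x2d" (what systemd-escape --path produces). The simplified escaper below reproduces the unit name above; unlike the real systemd-escape it only handles the characters that occur in this path.

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath is a simplified version of systemd's path escaping: strip the
    // leading "/", escape literal "-" as \x2d, then turn "/" into "-".
    func escapePath(p string) string {
        p = strings.TrimPrefix(p, "/")
        p = strings.ReplaceAll(p, "-", `\x2d`)
        return strings.ReplaceAll(p, "/", "-")
    }

    func main() {
        path := "/var/lib/containerd/tmpmounts/containerd-mount3051353257"
        fmt.Println(escapePath(path) + ".mount")
        // var-lib-containerd-tmpmounts-containerd\x2dmount3051353257.mount
    }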
Oct 13 05:28:02.379121 systemd-networkd[1513]: cali1102f098561: Gained IPv6LL Oct 13 05:28:02.677418 kubelet[2819]: E1013 05:28:02.677319 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:02.678047 kubelet[2819]: E1013 05:28:02.677374 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:02.890217 systemd-networkd[1513]: calif3ee1aefc8a: Gained IPv6LL Oct 13 05:28:03.018098 systemd-networkd[1513]: calia0d1c13ab38: Gained IPv6LL Oct 13 05:28:03.146232 systemd-networkd[1513]: cali982c04110b1: Gained IPv6LL Oct 13 05:28:03.370576 kubelet[2819]: E1013 05:28:03.370508 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:03.424473 kubelet[2819]: I1013 05:28:03.424377 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hsmgj" podStartSLOduration=83.424355382 podStartE2EDuration="1m23.424355382s" podCreationTimestamp="2025-10-13 05:26:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:28:03.133822897 +0000 UTC m=+89.682414800" watchObservedRunningTime="2025-10-13 05:28:03.424355382 +0000 UTC m=+89.972947295" Oct 13 05:28:03.679830 kubelet[2819]: E1013 05:28:03.679703 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:03.679830 kubelet[2819]: E1013 05:28:03.679732 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:04.682598 kubelet[2819]: E1013 05:28:04.682538 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:05.685411 kubelet[2819]: E1013 05:28:05.685326 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:06.654472 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:43726.service - OpenSSH per-connection server daemon (10.0.0.1:43726). Oct 13 05:28:06.751023 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 43726 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:06.752860 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:06.758517 systemd-logind[1585]: New session 14 of user core. Oct 13 05:28:06.773096 systemd[1]: Started session-14.scope - Session 14 of User core. 
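Annotation: the "Gained IPv6LL" entries mean systemd-networkd observed an IPv6 link-local address on each host-side cali* veth. One classic way such an address is formed is EUI-64 from an interface MAC (flip the universal/local bit, splice in ff:fe, prefix fe80::/64); networkd may instead use stable-privacy addresses, and the MAC used below is the container-side one recorded for the goldmane endpoint, so treat this purely as an illustration of the derivation, not the address networkd actually saw.

    package main

    import (
        "fmt"
        "net"
        "net/netip"
    )

    // eui64LinkLocal derives the classic EUI-64 link-local address from a MAC:
    // flip bit 0x02 of the first octet and splice ff:fe into the middle.
    func eui64LinkLocal(mac net.HardwareAddr) netip.Addr {
        var b [16]byte
        b[0], b[1] = 0xfe, 0x80 // fe80::/64
        b[8] = mac[0] ^ 0x02
        b[9], b[10], b[11] = mac[1], mac[2], 0xff
        b[12], b[13], b[14], b[15] = 0xfe, mac[3], mac[4], mac[5]
        return netip.AddrFrom16(b)
    }

    func main() {
        // MAC recorded for the goldmane endpoint in the log above.
        mac, _ := net.ParseMAC("d2:36:bb:c1:c0:74")
        fmt.Println(eui64LinkLocal(mac)) // fe80::d036:bbff:fec1:c074
    }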
Oct 13 05:28:06.895026 containerd[1614]: time="2025-10-13T05:28:06.894952826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:06.898504 containerd[1614]: time="2025-10-13T05:28:06.898430956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:28:06.899910 containerd[1614]: time="2025-10-13T05:28:06.899847490Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:06.903748 containerd[1614]: time="2025-10-13T05:28:06.902576180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:06.903748 containerd[1614]: time="2025-10-13T05:28:06.903550326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.071170282s" Oct 13 05:28:06.903748 containerd[1614]: time="2025-10-13T05:28:06.903596593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:28:06.905278 containerd[1614]: time="2025-10-13T05:28:06.905151538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:28:06.911128 containerd[1614]: time="2025-10-13T05:28:06.911078748Z" level=info msg="CreateContainer within sandbox \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:28:06.925999 containerd[1614]: time="2025-10-13T05:28:06.925910187Z" level=info msg="Container b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:06.936857 containerd[1614]: time="2025-10-13T05:28:06.936800921Z" level=info msg="CreateContainer within sandbox \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\"" Oct 13 05:28:06.939944 containerd[1614]: time="2025-10-13T05:28:06.938105442Z" level=info msg="StartContainer for \"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\"" Oct 13 05:28:06.939944 containerd[1614]: time="2025-10-13T05:28:06.939512519Z" level=info msg="connecting to shim b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110" address="unix:///run/containerd/s/feae9f96e497f7a6f69cbf8bd556a9ff0670ef9ba2f3abec668bc636950625b5" protocol=ttrpc version=3 Oct 13 05:28:06.942909 sshd[5627]: Connection closed by 10.0.0.1 port 43726 Oct 13 05:28:06.944241 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:06.959856 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:43726.service: Deactivated successfully. Oct 13 05:28:06.962900 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:28:06.965860 systemd-logind[1585]: Session 14 logged out. 
Waiting for processes to exit. Oct 13 05:28:06.968706 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:33738.service - OpenSSH per-connection server daemon (10.0.0.1:33738). Oct 13 05:28:06.970840 systemd-logind[1585]: Removed session 14. Oct 13 05:28:07.019290 systemd[1]: Started cri-containerd-b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110.scope - libcontainer container b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110. Oct 13 05:28:07.035133 sshd[5649]: Accepted publickey for core from 10.0.0.1 port 33738 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:07.036826 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:07.042058 systemd-logind[1585]: New session 15 of user core. Oct 13 05:28:07.051042 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:28:07.541894 containerd[1614]: time="2025-10-13T05:28:07.541832687Z" level=info msg="StartContainer for \"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" returns successfully" Oct 13 05:28:07.857521 sshd[5670]: Connection closed by 10.0.0.1 port 33738 Oct 13 05:28:07.857986 sshd-session[5649]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:07.873619 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:33738.service: Deactivated successfully. Oct 13 05:28:07.876469 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:28:07.877477 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:28:07.882322 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). Oct 13 05:28:07.883123 systemd-logind[1585]: Removed session 15. Oct 13 05:28:07.944939 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:07.946678 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:07.951395 systemd-logind[1585]: New session 16 of user core. Oct 13 05:28:07.969076 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:28:08.225289 kubelet[2819]: I1013 05:28:08.225081 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57ccb6c94f-mm55g" podStartSLOduration=61.124365352 podStartE2EDuration="1m10.225059544s" podCreationTimestamp="2025-10-13 05:26:58 +0000 UTC" firstStartedPulling="2025-10-13 05:27:57.804165694 +0000 UTC m=+84.352757607" lastFinishedPulling="2025-10-13 05:28:06.904859886 +0000 UTC m=+93.453451799" observedRunningTime="2025-10-13 05:28:08.224390727 +0000 UTC m=+94.772982641" watchObservedRunningTime="2025-10-13 05:28:08.225059544 +0000 UTC m=+94.773651457" Oct 13 05:28:08.258750 sshd[5702]: Connection closed by 10.0.0.1 port 33768 Oct 13 05:28:08.259852 sshd-session[5699]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:08.267048 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:33768.service: Deactivated successfully. Oct 13 05:28:08.271125 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:28:08.276742 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:28:08.279895 systemd-logind[1585]: Removed session 16. 
Oct 13 05:28:08.280670 containerd[1614]: time="2025-10-13T05:28:08.280601332Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:08.286003 containerd[1614]: time="2025-10-13T05:28:08.285899375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:28:08.288417 containerd[1614]: time="2025-10-13T05:28:08.288372308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.383159353s" Oct 13 05:28:08.288521 containerd[1614]: time="2025-10-13T05:28:08.288422473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:28:08.294289 containerd[1614]: time="2025-10-13T05:28:08.294233048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:28:08.299925 containerd[1614]: time="2025-10-13T05:28:08.299482620Z" level=info msg="CreateContainer within sandbox \"30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:28:08.316388 containerd[1614]: time="2025-10-13T05:28:08.316325678Z" level=info msg="Container 2729a6332e796f51e3781d5248a35ca8ce6c34c50dadb38c5efac0cf4dd82c07: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:08.323524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938579457.mount: Deactivated successfully. Oct 13 05:28:08.326575 containerd[1614]: time="2025-10-13T05:28:08.326491821Z" level=info msg="CreateContainer within sandbox \"30ae6acc34a9ddd7d4e4dcb077f6de12c4b509fee2f6beee596d2e7ef9900b14\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2729a6332e796f51e3781d5248a35ca8ce6c34c50dadb38c5efac0cf4dd82c07\"" Oct 13 05:28:08.329649 containerd[1614]: time="2025-10-13T05:28:08.329612210Z" level=info msg="StartContainer for \"2729a6332e796f51e3781d5248a35ca8ce6c34c50dadb38c5efac0cf4dd82c07\"" Oct 13 05:28:08.334435 containerd[1614]: time="2025-10-13T05:28:08.334374309Z" level=info msg="connecting to shim 2729a6332e796f51e3781d5248a35ca8ce6c34c50dadb38c5efac0cf4dd82c07" address="unix:///run/containerd/s/bf77696ea41f3abda06042e8fdfff865b1d0afb6c4694269189d5bd345559639" protocol=ttrpc version=3 Oct 13 05:28:08.373211 systemd[1]: Started cri-containerd-2729a6332e796f51e3781d5248a35ca8ce6c34c50dadb38c5efac0cf4dd82c07.scope - libcontainer container 2729a6332e796f51e3781d5248a35ca8ce6c34c50dadb38c5efac0cf4dd82c07. 
Oct 13 05:28:08.463199 containerd[1614]: time="2025-10-13T05:28:08.463129531Z" level=info msg="StartContainer for \"2729a6332e796f51e3781d5248a35ca8ce6c34c50dadb38c5efac0cf4dd82c07\" returns successfully" Oct 13 05:28:08.660814 containerd[1614]: time="2025-10-13T05:28:08.660746464Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:08.662157 containerd[1614]: time="2025-10-13T05:28:08.662112422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:28:08.664994 containerd[1614]: time="2025-10-13T05:28:08.664957169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 370.637146ms" Oct 13 05:28:08.665073 containerd[1614]: time="2025-10-13T05:28:08.665047671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:28:08.668498 containerd[1614]: time="2025-10-13T05:28:08.668427471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:28:08.674327 containerd[1614]: time="2025-10-13T05:28:08.674272831Z" level=info msg="CreateContainer within sandbox \"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:28:08.688170 containerd[1614]: time="2025-10-13T05:28:08.688122311Z" level=info msg="Container 3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:08.696605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664590986.mount: Deactivated successfully. Oct 13 05:28:08.702800 kubelet[2819]: I1013 05:28:08.702767 2819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:28:08.710027 containerd[1614]: time="2025-10-13T05:28:08.709970998Z" level=info msg="CreateContainer within sandbox \"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\"" Oct 13 05:28:08.711632 containerd[1614]: time="2025-10-13T05:28:08.710950223Z" level=info msg="StartContainer for \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\"" Oct 13 05:28:08.712781 containerd[1614]: time="2025-10-13T05:28:08.712681351Z" level=info msg="connecting to shim 3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b" address="unix:///run/containerd/s/a877d3941cbb29a350d9556c7be231b065a03b994833b93efa807d5c5c3ac04c" protocol=ttrpc version=3 Oct 13 05:28:08.770192 systemd[1]: Started cri-containerd-3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b.scope - libcontainer container 3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b. 
Oct 13 05:28:09.629160 containerd[1614]: time="2025-10-13T05:28:09.629092042Z" level=info msg="StartContainer for \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" returns successfully" Oct 13 05:28:09.884764 kubelet[2819]: I1013 05:28:09.883738 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c64875bd7-9fqrj" podStartSLOduration=64.194972317 podStartE2EDuration="1m10.883720487s" podCreationTimestamp="2025-10-13 05:26:59 +0000 UTC" firstStartedPulling="2025-10-13 05:28:01.60339504 +0000 UTC m=+88.151986953" lastFinishedPulling="2025-10-13 05:28:08.29214319 +0000 UTC m=+94.840735123" observedRunningTime="2025-10-13 05:28:08.720713022 +0000 UTC m=+95.269304945" watchObservedRunningTime="2025-10-13 05:28:09.883720487 +0000 UTC m=+96.432312420" Oct 13 05:28:09.884764 kubelet[2819]: I1013 05:28:09.884291 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57ccb6c94f-brhgw" podStartSLOduration=65.008963777 podStartE2EDuration="1m11.884286058s" podCreationTimestamp="2025-10-13 05:26:58 +0000 UTC" firstStartedPulling="2025-10-13 05:28:01.791989658 +0000 UTC m=+88.340581571" lastFinishedPulling="2025-10-13 05:28:08.667311939 +0000 UTC m=+95.215903852" observedRunningTime="2025-10-13 05:28:09.883152171 +0000 UTC m=+96.431744084" watchObservedRunningTime="2025-10-13 05:28:09.884286058 +0000 UTC m=+96.432877961" Oct 13 05:28:10.709729 kubelet[2819]: I1013 05:28:10.709685 2819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:28:11.178095 kubelet[2819]: I1013 05:28:11.178047 2819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:28:11.204069 containerd[1614]: time="2025-10-13T05:28:11.203374664Z" level=info msg="StopContainer for \"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" with timeout 30 (s)" Oct 13 05:28:11.214793 containerd[1614]: time="2025-10-13T05:28:11.214742931Z" level=info msg="Stop container \"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" with signal terminated" Oct 13 05:28:11.252390 systemd[1]: cri-containerd-b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110.scope: Deactivated successfully. Oct 13 05:28:11.255396 containerd[1614]: time="2025-10-13T05:28:11.255308124Z" level=info msg="received exit event container_id:\"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" id:\"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" pid:5663 exit_status:1 exited_at:{seconds:1760333291 nanos:254858192}" Oct 13 05:28:11.255396 containerd[1614]: time="2025-10-13T05:28:11.255381463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" id:\"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" pid:5663 exit_status:1 exited_at:{seconds:1760333291 nanos:254858192}" Oct 13 05:28:11.319093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110-rootfs.mount: Deactivated successfully. 
Oct 13 05:28:11.460688 containerd[1614]: time="2025-10-13T05:28:11.460507708Z" level=info msg="StopContainer for \"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" returns successfully" Oct 13 05:28:11.463730 containerd[1614]: time="2025-10-13T05:28:11.463679070Z" level=info msg="StopPodSandbox for \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\"" Oct 13 05:28:11.464321 containerd[1614]: time="2025-10-13T05:28:11.463762349Z" level=info msg="Container to stop \"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:28:11.479032 systemd[1]: cri-containerd-af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5.scope: Deactivated successfully. Oct 13 05:28:11.484417 containerd[1614]: time="2025-10-13T05:28:11.484150870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" id:\"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" pid:5006 exit_status:137 exited_at:{seconds:1760333291 nanos:483298636}" Oct 13 05:28:11.519444 containerd[1614]: time="2025-10-13T05:28:11.519390172Z" level=info msg="shim disconnected" id=af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5 namespace=k8s.io Oct 13 05:28:11.519444 containerd[1614]: time="2025-10-13T05:28:11.519419296Z" level=warning msg="cleaning up after shim disconnected" id=af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5 namespace=k8s.io Oct 13 05:28:11.522627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5-rootfs.mount: Deactivated successfully. Oct 13 05:28:11.522900 containerd[1614]: time="2025-10-13T05:28:11.519428103Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:28:11.886371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623684997.mount: Deactivated successfully. Oct 13 05:28:12.062847 containerd[1614]: time="2025-10-13T05:28:12.062787870Z" level=info msg="received exit event sandbox_id:\"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" exit_status:137 exited_at:{seconds:1760333291 nanos:483298636}" Oct 13 05:28:12.066044 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5-shm.mount: Deactivated successfully. Oct 13 05:28:12.208690 systemd-networkd[1513]: caliec38cad5c4f: Link DOWN Oct 13 05:28:12.209951 systemd-networkd[1513]: caliec38cad5c4f: Lost carrier Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.205 [INFO][5877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.206 [INFO][5877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" iface="eth0" netns="/var/run/netns/cni-6c152008-1541-02b4-8bec-6595d84c5b24" Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.207 [INFO][5877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" iface="eth0" netns="/var/run/netns/cni-6c152008-1541-02b4-8bec-6595d84c5b24" Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.216 [INFO][5877] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" after=9.52146ms iface="eth0" netns="/var/run/netns/cni-6c152008-1541-02b4-8bec-6595d84c5b24" Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.216 [INFO][5877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.216 [INFO][5877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.240 [INFO][5888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.240 [INFO][5888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.241 [INFO][5888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.674 [INFO][5888] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.675 [INFO][5888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.676 [INFO][5888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:28:12.684991 containerd[1614]: 2025-10-13 05:28:12.680 [INFO][5877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:12.687960 systemd[1]: run-netns-cni\x2d6c152008\x2d1541\x2d02b4\x2d8bec\x2d6595d84c5b24.mount: Deactivated successfully. Oct 13 05:28:12.730205 containerd[1614]: time="2025-10-13T05:28:12.693849659Z" level=info msg="TearDown network for sandbox \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" successfully" Oct 13 05:28:12.730205 containerd[1614]: time="2025-10-13T05:28:12.693893873Z" level=info msg="StopPodSandbox for \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" returns successfully" Oct 13 05:28:12.730322 kubelet[2819]: I1013 05:28:12.721127 2819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:13.285129 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:33770.service - OpenSSH per-connection server daemon (10.0.0.1:33770). 
Oct 13 05:28:13.371420 sshd[5909]: Accepted publickey for core from 10.0.0.1 port 33770 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:13.373089 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:13.380096 systemd-logind[1585]: New session 17 of user core. Oct 13 05:28:13.390109 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 05:28:13.571705 sshd[5912]: Connection closed by 10.0.0.1 port 33770 Oct 13 05:28:13.571996 sshd-session[5909]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:13.577044 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:33770.service: Deactivated successfully. Oct 13 05:28:13.579326 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:28:13.580093 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:28:13.581285 systemd-logind[1585]: Removed session 17. Oct 13 05:28:13.916556 kubelet[2819]: I1013 05:28:13.916495 2819 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvp64\" (UniqueName: \"kubernetes.io/projected/325a437e-4d94-40e0-ba26-f0ae3ef19e76-kube-api-access-vvp64\") pod \"325a437e-4d94-40e0-ba26-f0ae3ef19e76\" (UID: \"325a437e-4d94-40e0-ba26-f0ae3ef19e76\") " Oct 13 05:28:13.916556 kubelet[2819]: I1013 05:28:13.916557 2819 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/325a437e-4d94-40e0-ba26-f0ae3ef19e76-calico-apiserver-certs\") pod \"325a437e-4d94-40e0-ba26-f0ae3ef19e76\" (UID: \"325a437e-4d94-40e0-ba26-f0ae3ef19e76\") " Oct 13 05:28:13.921126 kubelet[2819]: I1013 05:28:13.921066 2819 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325a437e-4d94-40e0-ba26-f0ae3ef19e76-kube-api-access-vvp64" (OuterVolumeSpecName: "kube-api-access-vvp64") pod "325a437e-4d94-40e0-ba26-f0ae3ef19e76" (UID: "325a437e-4d94-40e0-ba26-f0ae3ef19e76"). InnerVolumeSpecName "kube-api-access-vvp64". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:28:13.921126 kubelet[2819]: I1013 05:28:13.921103 2819 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/325a437e-4d94-40e0-ba26-f0ae3ef19e76-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "325a437e-4d94-40e0-ba26-f0ae3ef19e76" (UID: "325a437e-4d94-40e0-ba26-f0ae3ef19e76"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:28:13.923073 systemd[1]: var-lib-kubelet-pods-325a437e\x2d4d94\x2d40e0\x2dba26\x2df0ae3ef19e76-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvvp64.mount: Deactivated successfully. Oct 13 05:28:13.923233 systemd[1]: var-lib-kubelet-pods-325a437e\x2d4d94\x2d40e0\x2dba26\x2df0ae3ef19e76-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Oct 13 05:28:14.017803 kubelet[2819]: I1013 05:28:14.017745 2819 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/325a437e-4d94-40e0-ba26-f0ae3ef19e76-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Oct 13 05:28:14.017803 kubelet[2819]: I1013 05:28:14.017785 2819 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vvp64\" (UniqueName: \"kubernetes.io/projected/325a437e-4d94-40e0-ba26-f0ae3ef19e76-kube-api-access-vvp64\") on node \"localhost\" DevicePath \"\"" Oct 13 05:28:14.371503 kubelet[2819]: E1013 05:28:14.371448 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:14.380247 systemd[1]: Removed slice kubepods-besteffort-pod325a437e_4d94_40e0_ba26_f0ae3ef19e76.slice - libcontainer container kubepods-besteffort-pod325a437e_4d94_40e0_ba26_f0ae3ef19e76.slice. Oct 13 05:28:14.574142 containerd[1614]: time="2025-10-13T05:28:14.574061822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:14.575290 containerd[1614]: time="2025-10-13T05:28:14.575237506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:28:14.577246 containerd[1614]: time="2025-10-13T05:28:14.577214848Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:14.580504 containerd[1614]: time="2025-10-13T05:28:14.580437444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:14.581219 containerd[1614]: time="2025-10-13T05:28:14.581187904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.912217676s" Oct 13 05:28:14.581219 containerd[1614]: time="2025-10-13T05:28:14.581221587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:28:14.582408 containerd[1614]: time="2025-10-13T05:28:14.582158230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:28:14.586396 containerd[1614]: time="2025-10-13T05:28:14.586342846Z" level=info msg="CreateContainer within sandbox \"5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:28:14.597451 containerd[1614]: time="2025-10-13T05:28:14.597383313Z" level=info msg="Container 5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:14.608893 containerd[1614]: time="2025-10-13T05:28:14.608839528Z" level=info msg="CreateContainer within sandbox \"5d0b205ffa4a374d2819296b8488c3e76ddbf79fc7e901c1f0c8c4e58f9c0025\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns 
container id \"5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb\"" Oct 13 05:28:14.609555 containerd[1614]: time="2025-10-13T05:28:14.609524853Z" level=info msg="StartContainer for \"5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb\"" Oct 13 05:28:14.610741 containerd[1614]: time="2025-10-13T05:28:14.610711990Z" level=info msg="connecting to shim 5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb" address="unix:///run/containerd/s/a8663a58dc7eda0f570ac322d4679d8577e7a215ef7980b184205f380df0e297" protocol=ttrpc version=3 Oct 13 05:28:14.634089 systemd[1]: Started cri-containerd-5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb.scope - libcontainer container 5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb. Oct 13 05:28:14.684307 containerd[1614]: time="2025-10-13T05:28:14.684261216Z" level=info msg="StartContainer for \"5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb\" returns successfully" Oct 13 05:28:14.748111 kubelet[2819]: I1013 05:28:14.747265 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-hkddf" podStartSLOduration=61.222768329 podStartE2EDuration="1m13.74723823s" podCreationTimestamp="2025-10-13 05:27:01 +0000 UTC" firstStartedPulling="2025-10-13 05:28:02.057591566 +0000 UTC m=+88.606183479" lastFinishedPulling="2025-10-13 05:28:14.582061467 +0000 UTC m=+101.130653380" observedRunningTime="2025-10-13 05:28:14.7447162 +0000 UTC m=+101.293308113" watchObservedRunningTime="2025-10-13 05:28:14.74723823 +0000 UTC m=+101.295830145" Oct 13 05:28:14.841110 containerd[1614]: time="2025-10-13T05:28:14.841035123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb\" id:\"071ff219e2bbcf69488eb87c5d6f7a3f4893168f0d08d32eb1c2c31599e20f4f\" pid:5977 exit_status:1 exited_at:{seconds:1760333294 nanos:839900637}" Oct 13 05:28:15.828899 containerd[1614]: time="2025-10-13T05:28:15.828828527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb\" id:\"43f7e10374546cf6bbf92fe1a5be3714a685bd3343222a8b99d5b918a7454e11\" pid:6007 exited_at:{seconds:1760333295 nanos:828436516}" Oct 13 05:28:16.373660 kubelet[2819]: I1013 05:28:16.373597 2819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="325a437e-4d94-40e0-ba26-f0ae3ef19e76" path="/var/lib/kubelet/pods/325a437e-4d94-40e0-ba26-f0ae3ef19e76/volumes" Oct 13 05:28:17.742871 containerd[1614]: time="2025-10-13T05:28:17.742348944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:17.743723 containerd[1614]: time="2025-10-13T05:28:17.743662288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:28:17.748407 containerd[1614]: time="2025-10-13T05:28:17.748354390Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:17.750793 containerd[1614]: time="2025-10-13T05:28:17.750712340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 
05:28:17.751291 containerd[1614]: time="2025-10-13T05:28:17.751245418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.169042202s" Oct 13 05:28:17.751422 containerd[1614]: time="2025-10-13T05:28:17.751294871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:28:17.753716 containerd[1614]: time="2025-10-13T05:28:17.753454616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:28:17.771501 containerd[1614]: time="2025-10-13T05:28:17.771437060Z" level=info msg="CreateContainer within sandbox \"cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:28:17.786993 containerd[1614]: time="2025-10-13T05:28:17.785288033Z" level=info msg="Container 28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:17.804540 containerd[1614]: time="2025-10-13T05:28:17.804480266Z" level=info msg="CreateContainer within sandbox \"cd51d1747c62d1e73ca8cdc58c31cb09564d1f1e1a89bdc6ad468585d7233577\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43\"" Oct 13 05:28:17.805129 containerd[1614]: time="2025-10-13T05:28:17.805102724Z" level=info msg="StartContainer for \"28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43\"" Oct 13 05:28:17.808454 containerd[1614]: time="2025-10-13T05:28:17.808411400Z" level=info msg="connecting to shim 28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43" address="unix:///run/containerd/s/c2b3289a64e7fd346adbe96cb994b8c3267fce8050d322db75aa6617231b00ab" protocol=ttrpc version=3 Oct 13 05:28:17.840875 containerd[1614]: time="2025-10-13T05:28:17.840819575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4\" id:\"681fcb693f24b25f3e4c865f0aef7e25b743e91e913d54bded8bae0551c57834\" pid:6038 exit_status:1 exited_at:{seconds:1760333297 nanos:840418927}" Oct 13 05:28:17.842218 systemd[1]: Started cri-containerd-28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43.scope - libcontainer container 28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43. Oct 13 05:28:17.901346 containerd[1614]: time="2025-10-13T05:28:17.901292231Z" level=info msg="StartContainer for \"28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43\" returns successfully" Oct 13 05:28:18.590760 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:38086.service - OpenSSH per-connection server daemon (10.0.0.1:38086). Oct 13 05:28:18.695580 sshd[6097]: Accepted publickey for core from 10.0.0.1 port 38086 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:18.697826 sshd-session[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:18.704307 systemd-logind[1585]: New session 18 of user core. 
Oct 13 05:28:18.716081 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:28:18.759356 kubelet[2819]: I1013 05:28:18.759287 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5968f678cf-sx9rw" podStartSLOduration=61.065581925 podStartE2EDuration="1m16.759269212s" podCreationTimestamp="2025-10-13 05:27:02 +0000 UTC" firstStartedPulling="2025-10-13 05:28:02.059223951 +0000 UTC m=+88.607815864" lastFinishedPulling="2025-10-13 05:28:17.752911238 +0000 UTC m=+104.301503151" observedRunningTime="2025-10-13 05:28:18.758703752 +0000 UTC m=+105.307295665" watchObservedRunningTime="2025-10-13 05:28:18.759269212 +0000 UTC m=+105.307861125" Oct 13 05:28:18.799063 containerd[1614]: time="2025-10-13T05:28:18.798877451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43\" id:\"f1f003ae2e5923ff04bbd75af2fa2ca47210d7d80d8c05af46a002d554d14704\" pid:6113 exited_at:{seconds:1760333298 nanos:798602882}" Oct 13 05:28:19.592133 sshd[6100]: Connection closed by 10.0.0.1 port 38086 Oct 13 05:28:19.594401 sshd-session[6097]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:19.599934 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:38086.service: Deactivated successfully. Oct 13 05:28:19.602851 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:28:19.605496 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:28:19.608285 systemd-logind[1585]: Removed session 18. Oct 13 05:28:19.698524 containerd[1614]: time="2025-10-13T05:28:19.698443711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:19.699457 containerd[1614]: time="2025-10-13T05:28:19.699412793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:28:19.701858 containerd[1614]: time="2025-10-13T05:28:19.701793665Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:19.704611 containerd[1614]: time="2025-10-13T05:28:19.704543493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:19.705667 containerd[1614]: time="2025-10-13T05:28:19.705627282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.952127943s" Oct 13 05:28:19.705745 containerd[1614]: time="2025-10-13T05:28:19.705668961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:28:19.712088 containerd[1614]: time="2025-10-13T05:28:19.712037972Z" level=info msg="CreateContainer within sandbox \"c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:28:19.722947 containerd[1614]: time="2025-10-13T05:28:19.722857476Z" level=info msg="Container f5b7e4e8b1261a932ec6224e570b90c85351e478e551a873219d56c044ff0748: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:28:19.733631 containerd[1614]: time="2025-10-13T05:28:19.733557084Z" level=info msg="CreateContainer within sandbox \"c54cc99a8a995793cb8659aadfa28e891d2efa11ecc0226599f66165ca8db975\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f5b7e4e8b1261a932ec6224e570b90c85351e478e551a873219d56c044ff0748\"" Oct 13 05:28:19.756769 containerd[1614]: time="2025-10-13T05:28:19.756695117Z" level=info msg="StartContainer for \"f5b7e4e8b1261a932ec6224e570b90c85351e478e551a873219d56c044ff0748\"" Oct 13 05:28:19.759884 containerd[1614]: time="2025-10-13T05:28:19.759835563Z" level=info msg="connecting to shim f5b7e4e8b1261a932ec6224e570b90c85351e478e551a873219d56c044ff0748" address="unix:///run/containerd/s/32d09566991c5be9ba716ac01aab102c5cb005a889cdb126424d496c11110395" protocol=ttrpc version=3 Oct 13 05:28:19.792099 systemd[1]: Started cri-containerd-f5b7e4e8b1261a932ec6224e570b90c85351e478e551a873219d56c044ff0748.scope - libcontainer container f5b7e4e8b1261a932ec6224e570b90c85351e478e551a873219d56c044ff0748. Oct 13 05:28:20.159498 containerd[1614]: time="2025-10-13T05:28:20.159434169Z" level=info msg="StartContainer for \"f5b7e4e8b1261a932ec6224e570b90c85351e478e551a873219d56c044ff0748\" returns successfully" Oct 13 05:28:20.519606 kubelet[2819]: I1013 05:28:20.519463 2819 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:28:20.519606 kubelet[2819]: I1013 05:28:20.519507 2819 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:28:22.280711 kubelet[2819]: I1013 05:28:22.280637 2819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:28:22.356587 kubelet[2819]: I1013 05:28:22.356011 2819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gggtg" podStartSLOduration=58.323969009 podStartE2EDuration="1m20.355988532s" podCreationTimestamp="2025-10-13 05:27:02 +0000 UTC" firstStartedPulling="2025-10-13 05:27:57.674609694 +0000 UTC m=+84.223201607" lastFinishedPulling="2025-10-13 05:28:19.706629217 +0000 UTC m=+106.255221130" observedRunningTime="2025-10-13 05:28:21.015026398 +0000 UTC m=+107.563618331" watchObservedRunningTime="2025-10-13 05:28:22.355988532 +0000 UTC m=+108.904580445" Oct 13 05:28:24.611048 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:38112.service - OpenSSH per-connection server daemon (10.0.0.1:38112). Oct 13 05:28:24.680150 sshd[6178]: Accepted publickey for core from 10.0.0.1 port 38112 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:24.682106 sshd-session[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:24.687079 systemd-logind[1585]: New session 19 of user core. Oct 13 05:28:24.702134 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 13 05:28:24.928075 sshd[6181]: Connection closed by 10.0.0.1 port 38112 Oct 13 05:28:24.928459 sshd-session[6178]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:24.933076 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:38112.service: Deactivated successfully. Oct 13 05:28:24.935629 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:28:24.936660 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:28:24.938629 systemd-logind[1585]: Removed session 19. Oct 13 05:28:29.725688 containerd[1614]: time="2025-10-13T05:28:29.725639585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43\" id:\"37ab10c8bd8288655859ebb37f01aaa8f665ba69649113b6b32513529971296c\" pid:6236 exited_at:{seconds:1760333309 nanos:725232085}" Oct 13 05:28:29.783075 containerd[1614]: time="2025-10-13T05:28:29.783013296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb\" id:\"ae7c9336984de9c36f56ca03a8874efc4b2b265352ded39f543b2a09ccb03a46\" pid:6212 exited_at:{seconds:1760333309 nanos:782599995}" Oct 13 05:28:29.941573 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:49698.service - OpenSSH per-connection server daemon (10.0.0.1:49698). Oct 13 05:28:30.020901 sshd[6247]: Accepted publickey for core from 10.0.0.1 port 49698 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:30.023187 sshd-session[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:30.028379 systemd-logind[1585]: New session 20 of user core. Oct 13 05:28:30.041139 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 13 05:28:30.255052 sshd[6250]: Connection closed by 10.0.0.1 port 49698 Oct 13 05:28:30.255439 sshd-session[6247]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:30.261135 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:49698.service: Deactivated successfully. Oct 13 05:28:30.263709 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 05:28:30.264697 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit. Oct 13 05:28:30.266778 systemd-logind[1585]: Removed session 20. 
Oct 13 05:28:34.370906 kubelet[2819]: I1013 05:28:34.370842 2819 scope.go:117] "RemoveContainer" containerID="b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110" Oct 13 05:28:34.374161 containerd[1614]: time="2025-10-13T05:28:34.373906668Z" level=info msg="RemoveContainer for \"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\"" Oct 13 05:28:34.559751 containerd[1614]: time="2025-10-13T05:28:34.559689162Z" level=info msg="RemoveContainer for \"b82d926262128efec61e1ffc04cee1d0110552300811286737c2e586756c2110\" returns successfully" Oct 13 05:28:34.561849 containerd[1614]: time="2025-10-13T05:28:34.561685189Z" level=info msg="StopPodSandbox for \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\"" Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.036 [WARNING][6278] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.037 [INFO][6278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.037 [INFO][6278] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" iface="eth0" netns="" Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.038 [INFO][6278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.038 [INFO][6278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.065 [INFO][6286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.065 [INFO][6286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.065 [INFO][6286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.072 [WARNING][6286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.072 [INFO][6286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.074 [INFO][6286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:28:35.083703 containerd[1614]: 2025-10-13 05:28:35.079 [INFO][6278] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:35.084815 containerd[1614]: time="2025-10-13T05:28:35.083783774Z" level=info msg="TearDown network for sandbox \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" successfully" Oct 13 05:28:35.084815 containerd[1614]: time="2025-10-13T05:28:35.083812007Z" level=info msg="StopPodSandbox for \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" returns successfully" Oct 13 05:28:35.084815 containerd[1614]: time="2025-10-13T05:28:35.084577572Z" level=info msg="RemovePodSandbox for \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\"" Oct 13 05:28:35.084815 containerd[1614]: time="2025-10-13T05:28:35.084610935Z" level=info msg="Forcibly stopping sandbox \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\"" Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.173 [WARNING][6304] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" WorkloadEndpoint="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.173 [INFO][6304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.173 [INFO][6304] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" iface="eth0" netns="" Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.173 [INFO][6304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.173 [INFO][6304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.200 [INFO][6313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.200 [INFO][6313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.200 [INFO][6313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.210 [WARNING][6313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.210 [INFO][6313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" HandleID="k8s-pod-network.af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--mm55g-eth0" Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.213 [INFO][6313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:28:35.220218 containerd[1614]: 2025-10-13 05:28:35.217 [INFO][6304] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5" Oct 13 05:28:35.220714 containerd[1614]: time="2025-10-13T05:28:35.220328049Z" level=info msg="TearDown network for sandbox \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" successfully" Oct 13 05:28:35.226698 containerd[1614]: time="2025-10-13T05:28:35.226643699Z" level=info msg="Ensure that sandbox af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5 in task-service has been cleanup successfully" Oct 13 05:28:35.234598 containerd[1614]: time="2025-10-13T05:28:35.234530373Z" level=info msg="RemovePodSandbox \"af68ccc2c98baeb9b7f7e5a1115c2a6d71b4f8e65eddcfada123ca97323573c5\" returns successfully" Oct 13 05:28:35.285402 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:49714.service - OpenSSH per-connection server daemon (10.0.0.1:49714). Oct 13 05:28:35.358802 sshd[6322]: Accepted publickey for core from 10.0.0.1 port 49714 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:35.360889 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:35.366737 systemd-logind[1585]: New session 21 of user core. Oct 13 05:28:35.378110 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 05:28:35.607391 sshd[6325]: Connection closed by 10.0.0.1 port 49714 Oct 13 05:28:35.608257 sshd-session[6322]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:35.620617 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:49714.service: Deactivated successfully. Oct 13 05:28:35.623685 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 05:28:35.626591 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit. Oct 13 05:28:35.631658 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:49716.service - OpenSSH per-connection server daemon (10.0.0.1:49716). Oct 13 05:28:35.633270 systemd-logind[1585]: Removed session 21. Oct 13 05:28:35.706219 sshd[6341]: Accepted publickey for core from 10.0.0.1 port 49716 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:35.708280 sshd-session[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:35.714787 systemd-logind[1585]: New session 22 of user core. Oct 13 05:28:35.721196 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 13 05:28:36.083947 sshd[6344]: Connection closed by 10.0.0.1 port 49716 Oct 13 05:28:36.085080 sshd-session[6341]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:36.095960 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:49716.service: Deactivated successfully. 
Oct 13 05:28:36.098689 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 05:28:36.099641 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit. Oct 13 05:28:36.104066 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:49730.service - OpenSSH per-connection server daemon (10.0.0.1:49730). Oct 13 05:28:36.104993 systemd-logind[1585]: Removed session 22. Oct 13 05:28:36.193757 sshd[6356]: Accepted publickey for core from 10.0.0.1 port 49730 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:36.196068 sshd-session[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:36.202174 systemd-logind[1585]: New session 23 of user core. Oct 13 05:28:36.216150 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 13 05:28:36.370952 kubelet[2819]: E1013 05:28:36.370579 2819 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:37.166690 sshd[6359]: Connection closed by 10.0.0.1 port 49730 Oct 13 05:28:37.167565 sshd-session[6356]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:37.178123 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:49730.service: Deactivated successfully. Oct 13 05:28:37.180663 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 05:28:37.181547 systemd-logind[1585]: Session 23 logged out. Waiting for processes to exit. Oct 13 05:28:37.185752 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:32872.service - OpenSSH per-connection server daemon (10.0.0.1:32872). Oct 13 05:28:37.187187 systemd-logind[1585]: Removed session 23. Oct 13 05:28:37.254506 sshd[6397]: Accepted publickey for core from 10.0.0.1 port 32872 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:37.257014 sshd-session[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:37.263367 systemd-logind[1585]: New session 24 of user core. Oct 13 05:28:37.271148 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 13 05:28:37.637683 sshd[6400]: Connection closed by 10.0.0.1 port 32872 Oct 13 05:28:37.639174 sshd-session[6397]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:37.651604 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:32872.service: Deactivated successfully. Oct 13 05:28:37.654823 systemd[1]: session-24.scope: Deactivated successfully. Oct 13 05:28:37.660103 systemd-logind[1585]: Session 24 logged out. Waiting for processes to exit. Oct 13 05:28:37.663215 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:32876.service - OpenSSH per-connection server daemon (10.0.0.1:32876). Oct 13 05:28:37.664641 systemd-logind[1585]: Removed session 24. Oct 13 05:28:37.732397 sshd[6413]: Accepted publickey for core from 10.0.0.1 port 32876 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:37.734058 sshd-session[6413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:37.739075 systemd-logind[1585]: New session 25 of user core. Oct 13 05:28:37.751054 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 13 05:28:37.879124 sshd[6416]: Connection closed by 10.0.0.1 port 32876 Oct 13 05:28:37.880770 sshd-session[6413]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:37.894610 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:32876.service: Deactivated successfully. 
Oct 13 05:28:37.897079 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 05:28:37.900486 systemd-logind[1585]: Session 25 logged out. Waiting for processes to exit.
Oct 13 05:28:37.902342 systemd-logind[1585]: Removed session 25.
Oct 13 05:28:37.913356 containerd[1614]: time="2025-10-13T05:28:37.913295433Z" level=info msg="StopContainer for \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" with timeout 30 (s)"
Oct 13 05:28:37.914320 containerd[1614]: time="2025-10-13T05:28:37.914292464Z" level=info msg="Stop container \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" with signal terminated"
Oct 13 05:28:37.944509 systemd[1]: cri-containerd-3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b.scope: Deactivated successfully.
Oct 13 05:28:37.952426 containerd[1614]: time="2025-10-13T05:28:37.952370489Z" level=info msg="received exit event container_id:\"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" id:\"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" pid:5763 exit_status:1 exited_at:{seconds:1760333317 nanos:951930529}"
Oct 13 05:28:37.952747 containerd[1614]: time="2025-10-13T05:28:37.952434249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" id:\"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" pid:5763 exit_status:1 exited_at:{seconds:1760333317 nanos:951930529}"
Oct 13 05:28:38.004415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b-rootfs.mount: Deactivated successfully.
Oct 13 05:28:38.019499 containerd[1614]: time="2025-10-13T05:28:38.019452804Z" level=info msg="StopContainer for \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" returns successfully"
Oct 13 05:28:38.020010 containerd[1614]: time="2025-10-13T05:28:38.019979027Z" level=info msg="StopPodSandbox for \"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a\""
Oct 13 05:28:38.020076 containerd[1614]: time="2025-10-13T05:28:38.020041114Z" level=info msg="Container to stop \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 13 05:28:38.028276 systemd[1]: cri-containerd-2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a.scope: Deactivated successfully.
Oct 13 05:28:38.038763 containerd[1614]: time="2025-10-13T05:28:38.038733564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a\" id:\"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a\" pid:5350 exit_status:137 exited_at:{seconds:1760333318 nanos:38448046}"
Oct 13 05:28:38.068943 containerd[1614]: time="2025-10-13T05:28:38.068871352Z" level=info msg="shim disconnected" id=2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a namespace=k8s.io
Oct 13 05:28:38.068943 containerd[1614]: time="2025-10-13T05:28:38.068940873Z" level=warning msg="cleaning up after shim disconnected" id=2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a namespace=k8s.io
Oct 13 05:28:38.069820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a-rootfs.mount: Deactivated successfully.
Oct 13 05:28:38.092674 containerd[1614]: time="2025-10-13T05:28:38.068951073Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 13 05:28:38.158610 containerd[1614]: time="2025-10-13T05:28:38.158433908Z" level=info msg="received exit event sandbox_id:\"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a\" exit_status:137 exited_at:{seconds:1760333318 nanos:38448046}"
Oct 13 05:28:38.161801 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a-shm.mount: Deactivated successfully.
Oct 13 05:28:38.236074 systemd-networkd[1513]: cali1102f098561: Link DOWN
Oct 13 05:28:38.236089 systemd-networkd[1513]: cali1102f098561: Lost carrier
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.232 [INFO][6501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a"
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.233 [INFO][6501] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" iface="eth0" netns="/var/run/netns/cni-94179dfc-3f5e-ff14-57f0-8d34c9c054c1"
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.234 [INFO][6501] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" iface="eth0" netns="/var/run/netns/cni-94179dfc-3f5e-ff14-57f0-8d34c9c054c1"
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.242 [INFO][6501] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" after=8.623872ms iface="eth0" netns="/var/run/netns/cni-94179dfc-3f5e-ff14-57f0-8d34c9c054c1"
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.242 [INFO][6501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a"
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.242 [INFO][6501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a"
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.276 [INFO][6517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" HandleID="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0"
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.276 [INFO][6517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.277 [INFO][6517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.317 [INFO][6517] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" HandleID="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0"
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.317 [INFO][6517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" HandleID="k8s-pod-network.2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a" Workload="localhost-k8s-calico--apiserver--57ccb6c94f--brhgw-eth0"
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.319 [INFO][6517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 05:28:38.327338 containerd[1614]: 2025-10-13 05:28:38.323 [INFO][6501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a"
Oct 13 05:28:38.327823 containerd[1614]: time="2025-10-13T05:28:38.327751564Z" level=info msg="TearDown network for sandbox \"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a\" successfully"
Oct 13 05:28:38.327823 containerd[1614]: time="2025-10-13T05:28:38.327786661Z" level=info msg="StopPodSandbox for \"2ac686ee52964cec111cd00e929d9efeddabc8550b25381a05867dea96e11f3a\" returns successfully"
Oct 13 05:28:38.331262 systemd[1]: run-netns-cni\x2d94179dfc\x2d3f5e\x2dff14\x2d57f0\x2d8d34c9c054c1.mount: Deactivated successfully.
Oct 13 05:28:38.486729 kubelet[2819]: I1013 05:28:38.486580 2819 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwnhp\" (UniqueName: \"kubernetes.io/projected/1479d4a9-8275-4077-8b49-3a9f1a6a6634-kube-api-access-hwnhp\") pod \"1479d4a9-8275-4077-8b49-3a9f1a6a6634\" (UID: \"1479d4a9-8275-4077-8b49-3a9f1a6a6634\") "
Oct 13 05:28:38.486729 kubelet[2819]: I1013 05:28:38.486634 2819 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1479d4a9-8275-4077-8b49-3a9f1a6a6634-calico-apiserver-certs\") pod \"1479d4a9-8275-4077-8b49-3a9f1a6a6634\" (UID: \"1479d4a9-8275-4077-8b49-3a9f1a6a6634\") "
Oct 13 05:28:38.492012 kubelet[2819]: I1013 05:28:38.491891 2819 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1479d4a9-8275-4077-8b49-3a9f1a6a6634-kube-api-access-hwnhp" (OuterVolumeSpecName: "kube-api-access-hwnhp") pod "1479d4a9-8275-4077-8b49-3a9f1a6a6634" (UID: "1479d4a9-8275-4077-8b49-3a9f1a6a6634"). InnerVolumeSpecName "kube-api-access-hwnhp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 13 05:28:38.494074 kubelet[2819]: I1013 05:28:38.494015 2819 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1479d4a9-8275-4077-8b49-3a9f1a6a6634-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "1479d4a9-8275-4077-8b49-3a9f1a6a6634" (UID: "1479d4a9-8275-4077-8b49-3a9f1a6a6634"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 13 05:28:38.494507 systemd[1]: var-lib-kubelet-pods-1479d4a9\x2d8275\x2d4077\x2d8b49\x2d3a9f1a6a6634-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhwnhp.mount: Deactivated successfully.
Oct 13 05:28:38.587899 kubelet[2819]: I1013 05:28:38.587821 2819 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hwnhp\" (UniqueName: \"kubernetes.io/projected/1479d4a9-8275-4077-8b49-3a9f1a6a6634-kube-api-access-hwnhp\") on node \"localhost\" DevicePath \"\""
Oct 13 05:28:38.587899 kubelet[2819]: I1013 05:28:38.587860 2819 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1479d4a9-8275-4077-8b49-3a9f1a6a6634-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Oct 13 05:28:38.820272 kubelet[2819]: I1013 05:28:38.820131 2819 scope.go:117] "RemoveContainer" containerID="3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b"
Oct 13 05:28:38.823283 containerd[1614]: time="2025-10-13T05:28:38.823145013Z" level=info msg="RemoveContainer for \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\""
Oct 13 05:28:38.831542 systemd[1]: Removed slice kubepods-besteffort-pod1479d4a9_8275_4077_8b49_3a9f1a6a6634.slice - libcontainer container kubepods-besteffort-pod1479d4a9_8275_4077_8b49_3a9f1a6a6634.slice.
Oct 13 05:28:39.004891 systemd[1]: var-lib-kubelet-pods-1479d4a9\x2d8275\x2d4077\x2d8b49\x2d3a9f1a6a6634-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Oct 13 05:28:39.052616 containerd[1614]: time="2025-10-13T05:28:39.052539001Z" level=info msg="RemoveContainer for \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" returns successfully"
Oct 13 05:28:39.053206 kubelet[2819]: I1013 05:28:39.052996 2819 scope.go:117] "RemoveContainer" containerID="3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b"
Oct 13 05:28:39.080204 containerd[1614]: time="2025-10-13T05:28:39.053445071Z" level=error msg="ContainerStatus for \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\": not found"
Oct 13 05:28:39.086703 kubelet[2819]: E1013 05:28:39.086628 2819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\": not found" containerID="3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b"
Oct 13 05:28:39.108289 kubelet[2819]: I1013 05:28:39.086696 2819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b"} err="failed to get container status \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3687ac08310ba87421f47a4cb99f65c3d7d6cfc02aade9cc6d1e40ae0e79619b\": not found"
Oct 13 05:28:40.374045 kubelet[2819]: I1013 05:28:40.373982 2819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1479d4a9-8275-4077-8b49-3a9f1a6a6634" path="/var/lib/kubelet/pods/1479d4a9-8275-4077-8b49-3a9f1a6a6634/volumes"
Oct 13 05:28:42.894261 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:32886.service - OpenSSH per-connection server daemon (10.0.0.1:32886).
Oct 13 05:28:42.954961 sshd[6532]: Accepted publickey for core from 10.0.0.1 port 32886 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w
Oct 13 05:28:42.957020 sshd-session[6532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:42.961953 systemd-logind[1585]: New session 26 of user core.
Oct 13 05:28:42.970108 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 13 05:28:43.098439 sshd[6535]: Connection closed by 10.0.0.1 port 32886
Oct 13 05:28:43.098836 sshd-session[6532]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:43.104548 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:32886.service: Deactivated successfully.
Oct 13 05:28:43.107107 systemd[1]: session-26.scope: Deactivated successfully.
Oct 13 05:28:43.107989 systemd-logind[1585]: Session 26 logged out. Waiting for processes to exit.
Oct 13 05:28:43.109669 systemd-logind[1585]: Removed session 26.
Oct 13 05:28:46.038177 containerd[1614]: time="2025-10-13T05:28:46.038117182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5de1553eab50b0cc09da276ce533deb554777a185d164a9ffb34944d70f14cbb\" id:\"0d23ff9e3fe085bfc8be93f3fb3a2f71c5dd6fa6d53fbd6f38acbccbe2537241\" pid:6565 exited_at:{seconds:1760333326 nanos:37737385}"
Oct 13 05:28:47.722419 containerd[1614]: time="2025-10-13T05:28:47.721819826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7400f2447d5d8042b4cca65808566e200ab108720e97810d4da42ef44ff880e4\" id:\"1591a705292c2579a0f5393f60dfcdf16512fded8d665cfe07d09e527a130053\" pid:6588 exited_at:{seconds:1760333327 nanos:704648744}"
Oct 13 05:28:48.111802 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:40238.service - OpenSSH per-connection server daemon (10.0.0.1:40238).
Oct 13 05:28:48.158702 sshd[6601]: Accepted publickey for core from 10.0.0.1 port 40238 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w
Oct 13 05:28:48.160355 sshd-session[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:48.165455 systemd-logind[1585]: New session 27 of user core.
Oct 13 05:28:48.178129 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 13 05:28:48.319758 sshd[6604]: Connection closed by 10.0.0.1 port 40238
Oct 13 05:28:48.320133 sshd-session[6601]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:48.325568 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:40238.service: Deactivated successfully.
Oct 13 05:28:48.328159 systemd[1]: session-27.scope: Deactivated successfully.
Oct 13 05:28:48.329224 systemd-logind[1585]: Session 27 logged out. Waiting for processes to exit.
Oct 13 05:28:48.330572 systemd-logind[1585]: Removed session 27.
Oct 13 05:28:48.797765 containerd[1614]: time="2025-10-13T05:28:48.797692969Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28655b9bf77cc216997ae9b295b84d038d304cb6cdaa449fed8f0107a8342f43\" id:\"9ee22e762508b53a56f3ff539fe357b1f6a42a9df93b4e55a6bd2384ec1c5232\" pid:6631 exited_at:{seconds:1760333328 nanos:797019529}"
Oct 13 05:28:53.334743 systemd[1]: Started sshd@27-10.0.0.16:22-10.0.0.1:40258.service - OpenSSH per-connection server daemon (10.0.0.1:40258).
Oct 13 05:28:53.420166 sshd[6642]: Accepted publickey for core from 10.0.0.1 port 40258 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w
Oct 13 05:28:53.422130 sshd-session[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:28:53.428034 systemd-logind[1585]: New session 28 of user core.
Oct 13 05:28:53.435083 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 13 05:28:53.743810 sshd[6647]: Connection closed by 10.0.0.1 port 40258
Oct 13 05:28:53.744095 sshd-session[6642]: pam_unix(sshd:session): session closed for user core
Oct 13 05:28:53.750665 systemd[1]: sshd@27-10.0.0.16:22-10.0.0.1:40258.service: Deactivated successfully.
Oct 13 05:28:53.752286 systemd-logind[1585]: Session 28 logged out. Waiting for processes to exit.
Oct 13 05:28:53.753714 systemd[1]: session-28.scope: Deactivated successfully.
Oct 13 05:28:53.756704 systemd-logind[1585]: Removed session 28.