Oct 13 05:25:29.649891 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025 Oct 13 05:25:29.649936 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:25:29.649949 kernel: BIOS-provided physical RAM map: Oct 13 05:25:29.649956 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Oct 13 05:25:29.649972 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Oct 13 05:25:29.649979 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Oct 13 05:25:29.649987 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Oct 13 05:25:29.649994 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Oct 13 05:25:29.650004 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Oct 13 05:25:29.650011 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Oct 13 05:25:29.650020 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Oct 13 05:25:29.650027 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Oct 13 05:25:29.650034 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Oct 13 05:25:29.650041 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Oct 13 05:25:29.650049 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Oct 13 05:25:29.650059 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Oct 13 05:25:29.650069 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 13 05:25:29.650076 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 13 05:25:29.650083 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 13 05:25:29.650091 kernel: NX (Execute Disable) protection: active Oct 13 05:25:29.650098 kernel: APIC: Static calls initialized Oct 13 05:25:29.650106 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Oct 13 05:25:29.650113 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Oct 13 05:25:29.650121 kernel: extended physical RAM map: Oct 13 05:25:29.650130 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Oct 13 05:25:29.650138 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Oct 13 05:25:29.650145 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Oct 13 05:25:29.650153 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Oct 13 05:25:29.650160 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Oct 13 05:25:29.650167 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Oct 13 05:25:29.650175 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Oct 13 05:25:29.650182 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Oct 13 05:25:29.650190 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Oct 13 05:25:29.650197 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved Oct 13 05:25:29.650204 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Oct 13 05:25:29.650214 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Oct 13 05:25:29.650221 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Oct 13 05:25:29.650229 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Oct 13 05:25:29.650236 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Oct 13 05:25:29.650244 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Oct 13 05:25:29.650255 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Oct 13 05:25:29.650265 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Oct 13 05:25:29.650272 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 13 05:25:29.650280 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 13 05:25:29.650288 kernel: efi: EFI v2.7 by EDK II Oct 13 05:25:29.650296 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Oct 13 05:25:29.650303 kernel: random: crng init done Oct 13 05:25:29.650311 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Oct 13 05:25:29.650319 kernel: secureboot: Secure boot enabled Oct 13 05:25:29.650328 kernel: SMBIOS 2.8 present. Oct 13 05:25:29.650336 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Oct 13 05:25:29.650344 kernel: DMI: Memory slots populated: 1/1 Oct 13 05:25:29.650351 kernel: Hypervisor detected: KVM Oct 13 05:25:29.650359 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 13 05:25:29.650367 kernel: kvm-clock: using sched offset of 6019070629 cycles Oct 13 05:25:29.650375 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 13 05:25:29.650384 kernel: tsc: Detected 2794.746 MHz processor Oct 13 05:25:29.650392 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 13 05:25:29.650402 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 13 05:25:29.650410 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Oct 13 05:25:29.650418 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Oct 13 05:25:29.650430 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 13 05:25:29.650438 kernel: Using GB pages for direct mapping Oct 13 05:25:29.650448 kernel: ACPI: Early table checksum verification disabled Oct 13 05:25:29.650456 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Oct 13 05:25:29.650466 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Oct 13 05:25:29.650475 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.650483 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.650491 kernel: ACPI: FACS 0x000000009BBDD000 000040 Oct 13 05:25:29.650499 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.650526 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.650537 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.650549 kernel: ACPI: WAET 0x000000009BB75000 
000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 05:25:29.650557 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Oct 13 05:25:29.650565 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Oct 13 05:25:29.650573 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Oct 13 05:25:29.650581 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Oct 13 05:25:29.650589 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Oct 13 05:25:29.650597 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Oct 13 05:25:29.650607 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Oct 13 05:25:29.650615 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Oct 13 05:25:29.650623 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Oct 13 05:25:29.650631 kernel: No NUMA configuration found Oct 13 05:25:29.650639 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Oct 13 05:25:29.650647 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Oct 13 05:25:29.650655 kernel: Zone ranges: Oct 13 05:25:29.650663 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 13 05:25:29.650674 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Oct 13 05:25:29.650682 kernel: Normal empty Oct 13 05:25:29.650689 kernel: Device empty Oct 13 05:25:29.650697 kernel: Movable zone start for each node Oct 13 05:25:29.650705 kernel: Early memory node ranges Oct 13 05:25:29.650713 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Oct 13 05:25:29.650721 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Oct 13 05:25:29.650729 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Oct 13 05:25:29.650739 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Oct 13 05:25:29.650747 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Oct 13 05:25:29.650755 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Oct 13 05:25:29.650763 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 13 05:25:29.650771 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Oct 13 05:25:29.650779 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 13 05:25:29.650787 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 13 05:25:29.650797 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Oct 13 05:25:29.650805 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Oct 13 05:25:29.650813 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 13 05:25:29.650821 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 13 05:25:29.650829 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 13 05:25:29.650837 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 13 05:25:29.650845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 13 05:25:29.650859 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 13 05:25:29.650867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 13 05:25:29.650875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 13 05:25:29.650883 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 13 05:25:29.650891 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 13 05:25:29.650899 kernel: TSC deadline timer available Oct 13 05:25:29.650907 kernel: CPU topo: Max. 
logical packages: 1 Oct 13 05:25:29.650917 kernel: CPU topo: Max. logical dies: 1 Oct 13 05:25:29.650925 kernel: CPU topo: Max. dies per package: 1 Oct 13 05:25:29.650940 kernel: CPU topo: Max. threads per core: 1 Oct 13 05:25:29.650950 kernel: CPU topo: Num. cores per package: 4 Oct 13 05:25:29.650965 kernel: CPU topo: Num. threads per package: 4 Oct 13 05:25:29.650975 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Oct 13 05:25:29.650985 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 13 05:25:29.650993 kernel: kvm-guest: KVM setup pv remote TLB flush Oct 13 05:25:29.651019 kernel: kvm-guest: setup PV sched yield Oct 13 05:25:29.651036 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Oct 13 05:25:29.651045 kernel: Booting paravirtualized kernel on KVM Oct 13 05:25:29.651053 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 13 05:25:29.651062 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Oct 13 05:25:29.651072 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Oct 13 05:25:29.651081 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Oct 13 05:25:29.651089 kernel: pcpu-alloc: [0] 0 1 2 3 Oct 13 05:25:29.651097 kernel: kvm-guest: PV spinlocks enabled Oct 13 05:25:29.651106 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 13 05:25:29.651115 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:25:29.651124 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 05:25:29.651137 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 13 05:25:29.651152 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 05:25:29.651160 kernel: Fallback order for Node 0: 0 Oct 13 05:25:29.651169 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Oct 13 05:25:29.651177 kernel: Policy zone: DMA32 Oct 13 05:25:29.651185 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 05:25:29.651194 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 13 05:25:29.651205 kernel: ftrace: allocating 40210 entries in 158 pages Oct 13 05:25:29.651213 kernel: ftrace: allocated 158 pages with 5 groups Oct 13 05:25:29.651222 kernel: Dynamic Preempt: voluntary Oct 13 05:25:29.651230 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 05:25:29.651239 kernel: rcu: RCU event tracing is enabled. Oct 13 05:25:29.651248 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 13 05:25:29.651256 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 05:25:29.651269 kernel: Rude variant of Tasks RCU enabled. Oct 13 05:25:29.651278 kernel: Tracing variant of Tasks RCU enabled. Oct 13 05:25:29.651288 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 05:25:29.651297 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 13 05:25:29.651305 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Oct 13 05:25:29.651314 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 05:25:29.651325 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 13 05:25:29.651341 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Oct 13 05:25:29.651350 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 05:25:29.651358 kernel: Console: colour dummy device 80x25 Oct 13 05:25:29.651366 kernel: printk: legacy console [ttyS0] enabled Oct 13 05:25:29.651374 kernel: ACPI: Core revision 20240827 Oct 13 05:25:29.651383 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 13 05:25:29.651391 kernel: APIC: Switch to symmetric I/O mode setup Oct 13 05:25:29.651402 kernel: x2apic enabled Oct 13 05:25:29.651411 kernel: APIC: Switched APIC routing to: physical x2apic Oct 13 05:25:29.651419 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Oct 13 05:25:29.651427 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Oct 13 05:25:29.651436 kernel: kvm-guest: setup PV IPIs Oct 13 05:25:29.651444 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 13 05:25:29.651452 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns Oct 13 05:25:29.651464 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746) Oct 13 05:25:29.651472 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 13 05:25:29.651480 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 13 05:25:29.651489 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 13 05:25:29.651499 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 13 05:25:29.651525 kernel: Spectre V2 : Mitigation: Retpolines Oct 13 05:25:29.651537 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 13 05:25:29.651553 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 13 05:25:29.651564 kernel: active return thunk: retbleed_return_thunk Oct 13 05:25:29.651575 kernel: RETBleed: Mitigation: untrained return thunk Oct 13 05:25:29.651586 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 13 05:25:29.651595 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 13 05:25:29.651603 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Oct 13 05:25:29.651612 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Oct 13 05:25:29.651623 kernel: active return thunk: srso_return_thunk Oct 13 05:25:29.651632 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Oct 13 05:25:29.651640 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 13 05:25:29.651649 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 13 05:25:29.651657 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 13 05:25:29.651665 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 13 05:25:29.651674 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Oct 13 05:25:29.651684 kernel: Freeing SMP alternatives memory: 32K Oct 13 05:25:29.651692 kernel: pid_max: default: 32768 minimum: 301 Oct 13 05:25:29.651700 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 05:25:29.651709 kernel: landlock: Up and running. Oct 13 05:25:29.651717 kernel: SELinux: Initializing. Oct 13 05:25:29.651725 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.651736 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.651747 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 13 05:25:29.651755 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 13 05:25:29.651764 kernel: ... version: 0 Oct 13 05:25:29.651775 kernel: ... bit width: 48 Oct 13 05:25:29.651783 kernel: ... generic registers: 6 Oct 13 05:25:29.651791 kernel: ... value mask: 0000ffffffffffff Oct 13 05:25:29.651802 kernel: ... max period: 00007fffffffffff Oct 13 05:25:29.651812 kernel: ... fixed-purpose events: 0 Oct 13 05:25:29.651820 kernel: ... event mask: 000000000000003f Oct 13 05:25:29.651829 kernel: signal: max sigframe size: 1776 Oct 13 05:25:29.651837 kernel: rcu: Hierarchical SRCU implementation. Oct 13 05:25:29.651845 kernel: rcu: Max phase no-delay instances is 400. Oct 13 05:25:29.651854 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 13 05:25:29.651862 kernel: smp: Bringing up secondary CPUs ... Oct 13 05:25:29.651870 kernel: smpboot: x86: Booting SMP configuration: Oct 13 05:25:29.651881 kernel: .... node #0, CPUs: #1 #2 #3 Oct 13 05:25:29.651889 kernel: smp: Brought up 1 node, 4 CPUs Oct 13 05:25:29.651897 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Oct 13 05:25:29.651906 kernel: Memory: 2439928K/2552216K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 106348K reserved, 0K cma-reserved) Oct 13 05:25:29.651914 kernel: devtmpfs: initialized Oct 13 05:25:29.651923 kernel: x86/mm: Memory block size: 128MB Oct 13 05:25:29.651931 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Oct 13 05:25:29.651950 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Oct 13 05:25:29.651966 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 05:25:29.651975 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.651983 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 05:25:29.651991 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 05:25:29.652000 kernel: audit: initializing netlink subsys (disabled) Oct 13 05:25:29.652008 kernel: audit: type=2000 audit(1760333127.156:1): state=initialized audit_enabled=0 res=1 Oct 13 05:25:29.652019 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 05:25:29.652027 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 13 05:25:29.652035 kernel: cpuidle: using governor menu Oct 13 05:25:29.652044 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 05:25:29.652053 kernel: dca service started, version 1.12.1 Oct 13 05:25:29.652061 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Oct 13 05:25:29.652069 kernel: PCI: Using configuration type 1 for base access Oct 13 05:25:29.652082 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Oct 13 05:25:29.652091 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 05:25:29.652099 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 05:25:29.652107 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 05:25:29.652116 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 05:25:29.652124 kernel: ACPI: Added _OSI(Module Device) Oct 13 05:25:29.652132 kernel: ACPI: Added _OSI(Processor Device) Oct 13 05:25:29.652143 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 05:25:29.652151 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 05:25:29.652159 kernel: ACPI: Interpreter enabled Oct 13 05:25:29.652167 kernel: ACPI: PM: (supports S0 S5) Oct 13 05:25:29.652175 kernel: ACPI: Using IOAPIC for interrupt routing Oct 13 05:25:29.652184 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 13 05:25:29.652192 kernel: PCI: Using E820 reservations for host bridge windows Oct 13 05:25:29.652202 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 13 05:25:29.652211 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 05:25:29.652464 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 13 05:25:29.652675 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 13 05:25:29.652853 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 13 05:25:29.652865 kernel: PCI host bridge to bus 0000:00 Oct 13 05:25:29.653149 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 13 05:25:29.653347 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 13 05:25:29.653552 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 13 05:25:29.653725 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Oct 13 05:25:29.653882 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Oct 13 05:25:29.654089 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Oct 13 05:25:29.654251 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 05:25:29.654472 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 13 05:25:29.654769 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Oct 13 05:25:29.654947 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Oct 13 05:25:29.655193 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Oct 13 05:25:29.655372 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Oct 13 05:25:29.655770 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 13 05:25:29.655976 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Oct 13 05:25:29.656486 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Oct 13 05:25:29.657494 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Oct 13 05:25:29.657835 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Oct 13 05:25:29.658211 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 13 05:25:29.658404 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Oct 13 05:25:29.658658 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Oct 13 05:25:29.658892 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Oct 13 05:25:29.659147 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 13 05:25:29.659378 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Oct 13 05:25:29.659647 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Oct 13 05:25:29.659869 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Oct 13 05:25:29.660101 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Oct 13 05:25:29.660350 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 13 05:25:29.660671 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 13 05:25:29.660938 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 13 05:25:29.661169 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Oct 13 05:25:29.661388 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Oct 13 05:25:29.661672 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 13 05:25:29.661893 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Oct 13 05:25:29.661917 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 13 05:25:29.661929 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 13 05:25:29.661941 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 13 05:25:29.661952 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 13 05:25:29.661975 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 13 05:25:29.661987 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 13 05:25:29.661999 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 13 05:25:29.662015 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 13 05:25:29.662026 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 13 05:25:29.662038 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 13 05:25:29.662050 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 13 05:25:29.662061 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 13 05:25:29.662072 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 13 05:25:29.662084 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 13 05:25:29.662100 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 13 05:25:29.662111 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 13 05:25:29.662122 kernel: iommu: Default domain type: Translated Oct 13 05:25:29.662133 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 05:25:29.662145 kernel: efivars: Registered efivars operations Oct 13 05:25:29.662157 kernel: PCI: Using ACPI for IRQ routing Oct 13 05:25:29.662168 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 13 05:25:29.662184 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Oct 13 05:25:29.662196 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Oct 13 05:25:29.662207 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Oct 13 05:25:29.662218 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Oct 13 05:25:29.662230 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Oct 13 05:25:29.662471 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 13 05:25:29.662757 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 13 05:25:29.663010 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Oct 13 05:25:29.663028 kernel: vgaarb: loaded Oct 13 05:25:29.663041 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 13 05:25:29.663053 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 13 05:25:29.663065 kernel: clocksource: Switched to clocksource kvm-clock Oct 13 05:25:29.663077 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 05:25:29.663089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 05:25:29.663107 kernel: pnp: PnP ACPI init Oct 13 05:25:29.663363 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Oct 13 05:25:29.663384 kernel: pnp: PnP ACPI: found 6 devices Oct 13 05:25:29.663396 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 05:25:29.663408 kernel: NET: Registered PF_INET protocol family Oct 13 05:25:29.663420 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 13 05:25:29.663432 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 13 05:25:29.663450 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 05:25:29.663462 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 05:25:29.663473 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 13 05:25:29.663485 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 13 05:25:29.663497 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.663546 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 13 05:25:29.663558 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 05:25:29.663576 kernel: NET: Registered PF_XDP protocol family Oct 13 05:25:29.663809 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Oct 13 05:25:29.664039 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Oct 13 05:25:29.664250 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 13 05:25:29.664470 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 13 05:25:29.664701 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 13 05:25:29.664911 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Oct 13 05:25:29.665120 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Oct 13 05:25:29.666540 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Oct 13 05:25:29.666565 kernel: PCI: CLS 0 bytes, default 64 Oct 13 05:25:29.666579 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns Oct 13 05:25:29.666592 kernel: Initialise system trusted keyrings Oct 13 05:25:29.666604 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 13 05:25:29.666623 kernel: Key type asymmetric registered Oct 13 05:25:29.666635 kernel: Asymmetric key parser 'x509' registered Oct 13 05:25:29.666668 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 13 05:25:29.666683 kernel: io scheduler mq-deadline registered Oct 13 05:25:29.666695 kernel: io scheduler kyber registered Oct 13 05:25:29.666708 kernel: io scheduler bfq registered Oct 13 05:25:29.666722 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 13 05:25:29.666739 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 13 05:25:29.666752 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 13 05:25:29.666765 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 13 05:25:29.666778 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 05:25:29.666790 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 13 05:25:29.666802 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 13 05:25:29.666814 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 13 05:25:29.666830 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 13 05:25:29.667082 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 13 05:25:29.667297 kernel: rtc_cmos 00:04: registered as rtc0 Oct 13 05:25:29.667316 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 13 05:25:29.667541 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T05:25:27 UTC (1760333127) Oct 13 05:25:29.667744 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 13 05:25:29.667769 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 13 05:25:29.667782 kernel: efifb: probing for efifb Oct 13 05:25:29.667794 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 13 05:25:29.667807 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 13 05:25:29.667819 kernel: efifb: scrolling: redraw Oct 13 05:25:29.667831 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 13 05:25:29.667842 kernel: Console: switching to colour frame buffer device 160x50 Oct 13 05:25:29.667859 kernel: fb0: EFI VGA frame buffer device Oct 13 05:25:29.667875 kernel: pstore: Using crash dump compression: deflate Oct 13 05:25:29.667887 kernel: pstore: Registered efi_pstore as persistent store backend Oct 13 05:25:29.667899 kernel: NET: Registered PF_INET6 protocol family Oct 13 05:25:29.667911 kernel: Segment Routing with IPv6 Oct 13 05:25:29.667927 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 05:25:29.667939 kernel: NET: Registered PF_PACKET protocol family Oct 13 05:25:29.667950 kernel: Key type dns_resolver registered Oct 13 05:25:29.667972 kernel: IPI shorthand broadcast: enabled Oct 13 05:25:29.667984 kernel: sched_clock: Marking stable (1770002485, 277286889)->(2208222774, -160933400) Oct 13 05:25:29.667996 kernel: registered taskstats version 1 Oct 13 05:25:29.668008 kernel: Loading compiled-in X.509 certificates Oct 13 05:25:29.668024 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7' Oct 13 05:25:29.668037 kernel: Demotion targets for Node 0: null Oct 13 05:25:29.668050 kernel: Key type .fscrypt registered Oct 13 05:25:29.668062 kernel: Key type fscrypt-provisioning registered Oct 13 05:25:29.668074 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 13 05:25:29.668087 kernel: ima: Allocated hash algorithm: sha1 Oct 13 05:25:29.668100 kernel: ima: No architecture policies found Oct 13 05:25:29.668116 kernel: clk: Disabling unused clocks Oct 13 05:25:29.668128 kernel: Freeing unused kernel image (initmem) memory: 24532K Oct 13 05:25:29.668141 kernel: Write protecting the kernel read-only data: 24576k Oct 13 05:25:29.668153 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K Oct 13 05:25:29.668166 kernel: Run /init as init process Oct 13 05:25:29.668179 kernel: with arguments: Oct 13 05:25:29.668192 kernel: /init Oct 13 05:25:29.668208 kernel: with environment: Oct 13 05:25:29.668220 kernel: HOME=/ Oct 13 05:25:29.668232 kernel: TERM=linux Oct 13 05:25:29.668245 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 05:25:29.668257 kernel: SCSI subsystem initialized Oct 13 05:25:29.668270 kernel: libata version 3.00 loaded. Oct 13 05:25:29.668525 kernel: ahci 0000:00:1f.2: version 3.0 Oct 13 05:25:29.668558 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 13 05:25:29.668809 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 13 05:25:29.669018 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 13 05:25:29.669238 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 13 05:25:29.669452 kernel: scsi host0: ahci Oct 13 05:25:29.669680 kernel: scsi host1: ahci Oct 13 05:25:29.669876 kernel: scsi host2: ahci Oct 13 05:25:29.670093 kernel: scsi host3: ahci Oct 13 05:25:29.670290 kernel: scsi host4: ahci Oct 13 05:25:29.670475 kernel: scsi host5: ahci Oct 13 05:25:29.670489 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 13 05:25:29.670498 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 13 05:25:29.670527 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 13 05:25:29.670536 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 13 05:25:29.670546 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 13 05:25:29.670555 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 13 05:25:29.670564 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.670573 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 13 05:25:29.670585 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.670593 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.670602 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.670612 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 13 05:25:29.670621 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:25:29.670630 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 13 05:25:29.670639 kernel: ata3.00: applying bridge limits Oct 13 05:25:29.670648 kernel: ata3.00: LPM support broken, forcing max_power Oct 13 05:25:29.670659 kernel: ata3.00: configured for UDMA/100 Oct 13 05:25:29.670903 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 13 05:25:29.671127 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 13 05:25:29.671305 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 13 05:25:29.671317 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Oct 13 05:25:29.671330 kernel: GPT:16515071 != 27000831 Oct 13 05:25:29.671340 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 05:25:29.671349 kernel: GPT:16515071 != 27000831 Oct 13 05:25:29.671357 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 05:25:29.671366 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 05:25:29.671376 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.672058 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 13 05:25:29.672080 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 13 05:25:29.672278 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 13 05:25:29.672290 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 13 05:25:29.672300 kernel: device-mapper: uevent: version 1.0.3 Oct 13 05:25:29.672310 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 05:25:29.672319 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 13 05:25:29.672328 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.672340 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.672349 kernel: raid6: avx2x4 gen() 28457 MB/s Oct 13 05:25:29.672358 kernel: raid6: avx2x2 gen() 25981 MB/s Oct 13 05:25:29.672367 kernel: raid6: avx2x1 gen() 24008 MB/s Oct 13 05:25:29.672376 kernel: raid6: using algorithm avx2x4 gen() 28457 MB/s Oct 13 05:25:29.672384 kernel: raid6: .... xor() 5797 MB/s, rmw enabled Oct 13 05:25:29.672393 kernel: raid6: using avx2x2 recovery algorithm Oct 13 05:25:29.672402 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.672413 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.672422 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.672430 kernel: xor: automatically using best checksumming function avx Oct 13 05:25:29.672439 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.672448 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 05:25:29.672457 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (195) Oct 13 05:25:29.672466 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 Oct 13 05:25:29.672475 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:25:29.672486 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 05:25:29.672495 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 05:25:29.672521 kernel: Lockdown: modprobe: unsigned module loading is restricted; see man kernel_lockdown.7 Oct 13 05:25:29.672530 kernel: loop: module loaded Oct 13 05:25:29.672539 kernel: loop0: detected capacity change from 0 to 100048 Oct 13 05:25:29.672548 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:25:29.672561 systemd[1]: Successfully made /usr/ read-only. 
Oct 13 05:25:29.672577 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:25:29.672589 systemd[1]: Detected virtualization kvm. Oct 13 05:25:29.672599 systemd[1]: Detected architecture x86-64. Oct 13 05:25:29.672608 systemd[1]: Running in initrd. Oct 13 05:25:29.672618 systemd[1]: No hostname configured, using default hostname. Oct 13 05:25:29.672630 systemd[1]: Hostname set to . Oct 13 05:25:29.672639 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:25:29.672649 systemd[1]: Queued start job for default target initrd.target. Oct 13 05:25:29.672658 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:25:29.672668 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:25:29.672678 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:25:29.672688 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 05:25:29.672700 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:25:29.672710 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 05:25:29.672720 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 05:25:29.672729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:25:29.672739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:25:29.672749 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:25:29.672760 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:25:29.672770 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:25:29.672779 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:25:29.672789 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:25:29.672798 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:25:29.672808 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:25:29.672819 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 05:25:29.672829 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 05:25:29.672838 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:25:29.672848 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:25:29.672857 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:25:29.672867 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:25:29.672878 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 05:25:29.672893 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 05:25:29.672907 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Oct 13 05:25:29.672919 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 05:25:29.672931 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 05:25:29.672948 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 05:25:29.672973 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:25:29.672986 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:25:29.673002 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:29.673012 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 05:25:29.673022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:25:29.673033 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 05:25:29.673043 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:25:29.673093 systemd-journald[331]: Collecting audit messages is disabled. Oct 13 05:25:29.673127 systemd-journald[331]: Journal started Oct 13 05:25:29.673157 systemd-journald[331]: Runtime Journal (/run/log/journal/b6679da3d57142db8c82e502f4eeef9b) is 6M, max 48.2M, 42.2M free. Oct 13 05:25:29.675542 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:25:29.681736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:25:29.696785 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 05:25:29.700130 systemd-modules-load[332]: Inserted module 'br_netfilter' Oct 13 05:25:29.700534 kernel: Bridge firewalling registered Oct 13 05:25:29.703975 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:25:29.705862 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:25:29.715216 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:25:29.718886 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:25:29.722341 systemd-tmpfiles[347]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 05:25:29.723629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:29.741338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 05:25:29.750874 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:25:29.760271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:25:29.765160 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:25:29.770049 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:25:29.790427 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:25:29.795993 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Oct 13 05:25:29.839943 dracut-cmdline[374]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 05:25:29.861343 systemd-resolved[362]: Positive Trust Anchors: Oct 13 05:25:29.861367 systemd-resolved[362]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:25:29.861373 systemd-resolved[362]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:25:29.861419 systemd-resolved[362]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:25:29.890353 systemd-resolved[362]: Defaulting to hostname 'linux'. Oct 13 05:25:29.892075 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:25:29.918290 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:25:30.059601 kernel: Loading iSCSI transport class v2.0-870. Oct 13 05:25:30.078584 kernel: iscsi: registered transport (tcp) Oct 13 05:25:30.113678 kernel: iscsi: registered transport (qla4xxx) Oct 13 05:25:30.113786 kernel: QLogic iSCSI HBA Driver Oct 13 05:25:30.156367 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:25:30.196493 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:25:30.203628 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:25:30.284477 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 05:25:30.288699 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 05:25:30.292136 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 05:25:30.354356 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:25:30.361156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:25:30.455899 systemd-udevd[614]: Using default interface naming scheme 'v257'. Oct 13 05:25:30.470902 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:25:30.474292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:25:30.483322 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 05:25:30.485481 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Oct 13 05:25:30.548022 dracut-pre-trigger[712]: rd.md=0: removing MD RAID activation Oct 13 05:25:30.578230 systemd-networkd[713]: lo: Link UP Oct 13 05:25:30.578240 systemd-networkd[713]: lo: Gained carrier Oct 13 05:25:30.579172 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:25:30.581667 systemd[1]: Reached target network.target - Network. Oct 13 05:25:30.588291 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:25:30.592625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:25:30.737278 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:25:30.741196 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 05:25:30.826245 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 05:25:30.871721 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 13 05:25:30.897574 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 05:25:30.917560 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 05:25:30.928081 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:25:30.933735 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 05:25:30.955970 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 13 05:25:30.956704 kernel: AES CTR mode by8 optimization enabled Oct 13 05:25:30.961744 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:25:30.961761 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:25:30.968753 systemd-networkd[713]: eth0: Link UP Oct 13 05:25:30.969092 systemd-networkd[713]: eth0: Gained carrier Oct 13 05:25:30.969110 systemd-networkd[713]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:25:30.983976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:25:30.984078 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:30.995653 disk-uuid[825]: Primary Header is updated. Oct 13 05:25:30.995653 disk-uuid[825]: Secondary Entries is updated. Oct 13 05:25:30.995653 disk-uuid[825]: Secondary Header is updated. Oct 13 05:25:30.988689 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:30.991552 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:25:30.992914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:31.054958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:31.068306 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 05:25:31.083438 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:25:31.088257 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:25:31.090220 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:25:31.095719 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Oct 13 05:25:31.141108 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:25:32.065358 disk-uuid[835]: Warning: The kernel is still using the old partition table. Oct 13 05:25:32.065358 disk-uuid[835]: The new table will be used at the next reboot or after you Oct 13 05:25:32.065358 disk-uuid[835]: run partprobe(8) or kpartx(8) Oct 13 05:25:32.065358 disk-uuid[835]: The operation has completed successfully. Oct 13 05:25:32.083729 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 05:25:32.083927 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 05:25:32.089859 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 13 05:25:32.131830 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (878) Oct 13 05:25:32.131886 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:32.131909 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:25:32.137678 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:25:32.137711 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:25:32.146541 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:32.147974 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 05:25:32.153200 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 05:25:32.340277 ignition[897]: Ignition 2.22.0 Oct 13 05:25:32.340298 ignition[897]: Stage: fetch-offline Oct 13 05:25:32.340375 ignition[897]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:32.340392 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:32.340586 ignition[897]: parsed url from cmdline: "" Oct 13 05:25:32.340591 ignition[897]: no config URL provided Oct 13 05:25:32.340599 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 05:25:32.340612 ignition[897]: no config at "/usr/lib/ignition/user.ign" Oct 13 05:25:32.340679 ignition[897]: op(1): [started] loading QEMU firmware config module Oct 13 05:25:32.340714 ignition[897]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 13 05:25:32.362579 ignition[897]: op(1): [finished] loading QEMU firmware config module Oct 13 05:25:32.453000 ignition[897]: parsing config with SHA512: c8ef77c532cf84fff798cc9ffe47c53db28e594ec80f5963677e1344e5bcd97f8f7fdf070f58a38ac8b9188bbd98de4eb3967fb30e392ebae55d669a3d331b75 Oct 13 05:25:32.460183 unknown[897]: fetched base config from "system" Oct 13 05:25:32.460197 unknown[897]: fetched user config from "qemu" Oct 13 05:25:32.460604 ignition[897]: fetch-offline: fetch-offline passed Oct 13 05:25:32.460667 ignition[897]: Ignition finished successfully Oct 13 05:25:32.465699 systemd-networkd[713]: eth0: Gained IPv6LL Oct 13 05:25:32.466033 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:25:32.471702 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 13 05:25:32.475668 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 13 05:25:32.512687 ignition[907]: Ignition 2.22.0 Oct 13 05:25:32.512704 ignition[907]: Stage: kargs Oct 13 05:25:32.512853 ignition[907]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:32.512865 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:32.513718 ignition[907]: kargs: kargs passed Oct 13 05:25:32.513770 ignition[907]: Ignition finished successfully Oct 13 05:25:32.523780 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 05:25:32.527392 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 05:25:32.587904 ignition[916]: Ignition 2.22.0 Oct 13 05:25:32.587920 ignition[916]: Stage: disks Oct 13 05:25:32.588088 ignition[916]: no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:32.588099 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:32.594190 ignition[916]: disks: disks passed Oct 13 05:25:32.595408 ignition[916]: Ignition finished successfully Oct 13 05:25:32.598698 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 05:25:32.602089 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 05:25:32.602188 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 05:25:32.605643 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:25:32.609422 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:25:32.612737 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:25:32.618714 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 05:25:32.671092 systemd-fsck[926]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 13 05:25:32.678832 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 05:25:32.683585 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 05:25:32.816574 kernel: EXT4-fs (vda9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none. Oct 13 05:25:32.817375 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 05:25:32.818195 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 05:25:32.824378 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:25:32.825700 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 05:25:32.830284 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 05:25:32.830363 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 05:25:32.830413 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:25:32.854108 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 13 05:25:32.861254 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (935) Oct 13 05:25:32.861316 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:32.861405 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 13 05:25:32.866351 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:25:32.870149 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:25:32.870256 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:25:32.872085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:25:32.937303 initrd-setup-root[959]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 05:25:32.943644 initrd-setup-root[966]: cut: /sysroot/etc/group: No such file or directory Oct 13 05:25:32.947811 initrd-setup-root[973]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 05:25:32.953779 initrd-setup-root[980]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 05:25:33.077043 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 05:25:33.080647 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 05:25:33.081795 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 05:25:33.121658 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 05:25:33.124153 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:33.135638 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 05:25:33.165129 ignition[1049]: INFO : Ignition 2.22.0 Oct 13 05:25:33.165129 ignition[1049]: INFO : Stage: mount Oct 13 05:25:33.168237 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:33.168237 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:33.172381 ignition[1049]: INFO : mount: mount passed Oct 13 05:25:33.172381 ignition[1049]: INFO : Ignition finished successfully Oct 13 05:25:33.178658 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 05:25:33.183981 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 05:25:33.208275 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 05:25:33.245660 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1061) Oct 13 05:25:33.245707 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 05:25:33.245791 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:25:33.251357 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:25:33.251383 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:25:33.253474 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 13 05:25:33.295595 ignition[1078]: INFO : Ignition 2.22.0 Oct 13 05:25:33.295595 ignition[1078]: INFO : Stage: files Oct 13 05:25:33.298541 ignition[1078]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:33.298541 ignition[1078]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:33.298541 ignition[1078]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:25:33.304375 ignition[1078]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:25:33.304375 ignition[1078]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:25:33.309186 ignition[1078]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:25:33.309186 ignition[1078]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:25:33.309186 ignition[1078]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:25:33.309186 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:25:33.309186 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 05:25:33.305640 unknown[1078]: wrote ssh authorized keys file for user: core Oct 13 05:25:33.346400 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:25:33.420435 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:25:33.420435 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:25:33.426901 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 05:25:33.426901 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:25:33.426901 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:25:33.426901 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:25:33.426901 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:25:33.426901 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:25:33.426901 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:25:33.447417 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:25:33.447417 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:25:33.447417 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:25:33.447417 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:25:33.447417 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:25:33.447417 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 13 05:25:33.931126 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 05:25:34.228752 ignition[1078]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 05:25:34.228752 ignition[1078]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 05:25:34.235336 ignition[1078]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:25:34.238941 ignition[1078]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:25:34.238941 ignition[1078]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 05:25:34.238941 ignition[1078]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 13 05:25:34.238941 ignition[1078]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:25:34.238941 ignition[1078]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:25:34.238941 ignition[1078]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 13 05:25:34.238941 ignition[1078]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 05:25:34.393409 ignition[1078]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:25:34.432824 ignition[1078]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:25:34.436007 ignition[1078]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 05:25:34.436007 ignition[1078]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:25:34.436007 ignition[1078]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:25:34.436007 ignition[1078]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:25:34.436007 ignition[1078]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:25:34.436007 ignition[1078]: INFO : files: files passed Oct 13 05:25:34.436007 ignition[1078]: INFO : Ignition finished successfully Oct 13 05:25:34.464330 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:25:34.468952 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:25:34.480139 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 05:25:34.486862 systemd[1]: ignition-quench.service: Deactivated successfully. 
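Note: the files stage above downloads the helm tarball, writes several manifests and /etc/flatcar/update.conf, links /etc/extensions/kubernetes.raw to the downloaded sysext image, disables the preset for coreos-metadata.service and enables prepare-helm.service. The Ignition config itself is not printed in the log; a Butane-style sketch that would produce roughly these operations looks like this (abridged; URLs and versions taken from the log, everything else illustrative):

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          # the real config also carries the unit's contents and the core user's
          # ssh keys written in op(1)/op(2); omitted in this sketch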
Oct 13 05:25:34.486996 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 05:25:34.496640 initrd-setup-root-after-ignition[1108]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 05:25:34.502588 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:25:34.505652 initrd-setup-root-after-ignition[1110]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:25:34.508389 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:25:34.515119 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:25:34.519876 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:25:34.525266 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:25:34.664302 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:25:34.664446 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:25:34.666863 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:25:34.671996 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:25:34.676005 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:25:34.679781 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:25:34.720326 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:25:34.722930 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:25:34.758735 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:25:34.759063 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:25:34.763239 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:25:34.765371 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 05:25:34.770772 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:25:34.770954 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:25:34.777977 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:25:34.779992 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:25:34.783404 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:25:34.784962 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:25:34.785562 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:25:34.794382 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:25:34.798965 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 05:25:34.800490 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:25:34.806476 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:25:34.810669 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:25:34.815312 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:25:34.818429 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 13 05:25:34.818665 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:25:34.823402 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:25:34.825235 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:25:34.830780 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 05:25:34.830946 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:25:34.834685 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:25:34.834864 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:25:34.840397 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:25:34.840605 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:25:34.844144 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:25:34.847074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 05:25:34.847351 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:25:34.850876 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:25:34.854285 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:25:34.857312 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:25:34.857446 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:25:34.860737 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:25:34.860851 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:25:34.864962 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:25:34.865120 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:25:34.868620 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:25:34.868874 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:25:34.874724 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 05:25:34.878455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 05:25:34.880030 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:25:34.880264 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:25:34.885226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:25:34.885376 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:25:34.889377 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:25:34.889578 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:25:34.906897 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:25:34.908578 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 05:25:34.926990 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Oct 13 05:25:34.929186 ignition[1134]: INFO : Ignition 2.22.0 Oct 13 05:25:34.931193 ignition[1134]: INFO : Stage: umount Oct 13 05:25:34.931193 ignition[1134]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:25:34.931193 ignition[1134]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:25:34.931193 ignition[1134]: INFO : umount: umount passed Oct 13 05:25:34.931193 ignition[1134]: INFO : Ignition finished successfully Oct 13 05:25:34.933408 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:25:34.933666 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:25:34.938439 systemd[1]: Stopped target network.target - Network. Oct 13 05:25:34.942749 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:25:34.942852 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:25:34.946174 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:25:34.946250 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:25:34.949749 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 05:25:34.949820 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:25:34.951542 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 05:25:34.951612 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:25:34.956922 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:25:34.961184 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:25:34.975174 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:25:34.975328 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:25:34.983142 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:25:34.983323 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:25:34.989682 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:25:34.992475 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:25:34.992557 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:25:35.000239 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:25:35.001939 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 05:25:35.002005 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:25:35.007896 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:25:35.007956 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:25:35.012356 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 05:25:35.012438 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 05:25:35.012644 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:25:35.019068 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:25:35.019266 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 05:25:35.024274 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 05:25:35.024334 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:25:35.034732 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Oct 13 05:25:35.034991 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:25:35.038453 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:25:35.038531 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:25:35.041526 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:25:35.041577 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:25:35.045581 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:25:35.045657 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:25:35.051376 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 05:25:35.051451 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:25:35.056178 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:25:35.056244 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:25:35.065910 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:25:35.067428 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:25:35.067504 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:25:35.071153 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:25:35.071216 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:25:35.076809 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 13 05:25:35.076901 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:25:35.078717 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:25:35.078788 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:25:35.082439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:25:35.082504 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:35.095803 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:25:35.095969 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:25:35.119909 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:25:35.120058 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:25:35.125241 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 05:25:35.126471 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:25:35.154066 systemd[1]: Switching root. Oct 13 05:25:35.204990 systemd-journald[331]: Journal stopped Oct 13 05:25:37.304547 systemd-journald[331]: Received SIGTERM from PID 1 (systemd). 
Oct 13 05:25:37.304642 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:25:37.304669 kernel: SELinux: policy capability open_perms=1 Oct 13 05:25:37.304686 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:25:37.304721 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:25:37.304744 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:25:37.304764 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:25:37.304792 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:25:37.304808 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:25:37.304824 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:25:37.304840 kernel: audit: type=1403 audit(1760333136.212:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:25:37.304868 systemd[1]: Successfully loaded SELinux policy in 74.352ms. Oct 13 05:25:37.304898 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.279ms. Oct 13 05:25:37.304917 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:25:37.304934 systemd[1]: Detected virtualization kvm. Oct 13 05:25:37.304953 systemd[1]: Detected architecture x86-64. Oct 13 05:25:37.304970 systemd[1]: Detected first boot. Oct 13 05:25:37.304988 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 05:25:37.305018 zram_generator::config[1179]: No configuration found. Oct 13 05:25:37.305039 kernel: Guest personality initialized and is inactive Oct 13 05:25:37.305056 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 05:25:37.305074 kernel: Initialized host personality Oct 13 05:25:37.305091 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:25:37.305109 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:25:37.305141 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:25:37.305161 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 05:25:37.305181 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:25:37.305200 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:25:37.305220 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:25:37.305238 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:25:37.305255 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 05:25:37.305284 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:25:37.305311 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:25:37.305328 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:25:37.305346 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 05:25:37.305363 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:25:37.305381 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 13 05:25:37.305398 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:25:37.305425 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 05:25:37.305444 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:25:37.305462 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:25:37.305479 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 05:25:37.305496 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:25:37.305530 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:25:37.305559 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:25:37.305578 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:25:37.305595 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 05:25:37.305613 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:25:37.305630 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:25:37.305649 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:25:37.305666 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:25:37.305683 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:25:37.305718 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:25:37.305741 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:25:37.305759 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:25:37.305788 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:25:37.305806 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:25:37.305823 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:25:37.305839 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 05:25:37.305867 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:25:37.305889 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:25:37.305905 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:25:37.305924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:37.305940 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 05:25:37.305955 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 05:25:37.305970 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:25:37.305995 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:25:37.306012 systemd[1]: Reached target machines.target - Containers. Oct 13 05:25:37.306028 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 05:25:37.306043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 13 05:25:37.306061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:25:37.306079 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:25:37.306107 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:25:37.306125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:25:37.306146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:25:37.306163 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:25:37.306182 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:25:37.306201 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:25:37.306218 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:25:37.306247 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 05:25:37.306268 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:25:37.306286 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 05:25:37.306305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:25:37.306322 kernel: fuse: init (API version 7.41) Oct 13 05:25:37.306339 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:25:37.306357 kernel: ACPI: bus type drm_connector registered Oct 13 05:25:37.306383 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:25:37.306404 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:25:37.306422 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:25:37.306440 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 05:25:37.306483 systemd-journald[1243]: Collecting audit messages is disabled. Oct 13 05:25:37.306541 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:25:37.306574 systemd-journald[1243]: Journal started Oct 13 05:25:37.306605 systemd-journald[1243]: Runtime Journal (/run/log/journal/b6679da3d57142db8c82e502f4eeef9b) is 6M, max 48.2M, 42.2M free. Oct 13 05:25:36.818339 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:25:36.840009 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 05:25:36.840718 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 05:25:37.311568 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:37.315653 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:25:37.319394 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 05:25:37.321700 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 05:25:37.323676 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 05:25:37.325461 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Oct 13 05:25:37.327463 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:25:37.329407 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:25:37.331377 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:25:37.333740 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:25:37.333977 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:25:37.336204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:25:37.336419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:25:37.338596 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:25:37.338830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:25:37.340997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:25:37.341213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:25:37.370747 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:25:37.370980 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:25:37.373038 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:25:37.373277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:25:37.375532 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:25:37.377868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:25:37.381699 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:25:37.384343 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 05:25:37.408432 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:25:37.413191 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 05:25:37.417710 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:25:37.421747 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:25:37.423624 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:25:37.423685 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:25:37.426384 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:25:37.428596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:25:37.430547 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:25:37.433299 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:25:37.435207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:25:37.436232 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:25:37.438324 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:25:37.440631 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
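Note: the modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units finished above are instances of systemd's modprobe@.service template, which loads one kernel module per instance name, so starting modprobe@loop.service effectively runs modprobe for "loop". The stock template is roughly (abridged sketch, not copied from this system):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %I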
Oct 13 05:25:37.443448 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:25:37.447660 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:25:37.453969 systemd-journald[1243]: Time spent on flushing to /var/log/journal/b6679da3d57142db8c82e502f4eeef9b is 17.677ms for 1031 entries. Oct 13 05:25:37.453969 systemd-journald[1243]: System Journal (/var/log/journal/b6679da3d57142db8c82e502f4eeef9b) is 8M, max 163.5M, 155.5M free. Oct 13 05:25:37.624536 systemd-journald[1243]: Received client request to flush runtime journal. Oct 13 05:25:37.624598 kernel: loop1: detected capacity change from 0 to 128048 Oct 13 05:25:37.624627 kernel: loop2: detected capacity change from 0 to 229808 Oct 13 05:25:37.455964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:25:37.509558 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:25:37.517418 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:25:37.565178 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:25:37.569646 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Oct 13 05:25:37.569660 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Oct 13 05:25:37.575336 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:25:37.611354 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:25:37.613587 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:25:37.617938 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:25:37.631813 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:25:37.640745 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:25:37.643880 kernel: loop3: detected capacity change from 0 to 110984 Oct 13 05:25:37.647021 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:25:37.660701 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:25:37.680540 kernel: loop4: detected capacity change from 0 to 128048 Oct 13 05:25:37.711532 kernel: loop5: detected capacity change from 0 to 229808 Oct 13 05:25:37.719526 kernel: loop6: detected capacity change from 0 to 110984 Oct 13 05:25:37.729353 (sd-merge)[1320]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 13 05:25:37.733440 (sd-merge)[1320]: Merged extensions into '/usr'. Oct 13 05:25:37.738187 systemd[1]: Reload requested from client PID 1283 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:25:37.738204 systemd[1]: Reloading... Oct 13 05:25:37.785538 zram_generator::config[1348]: No configuration found. Oct 13 05:25:38.006193 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:25:38.006828 systemd[1]: Reloading finished in 268 ms. Oct 13 05:25:38.038579 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:25:38.041882 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:25:38.065188 systemd[1]: Starting ensure-sysext.service... Oct 13 05:25:38.069919 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
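Note: sd-merge above picks up containerd-flatcar.raw, docker-flatcar.raw and the kubernetes.raw image linked in by Ignition, and overlays them onto /usr. After boot the merge can be inspected and redone with the standard systemd-sysext CLI:

    systemd-sysext status    # shows which extension images are merged into /usr (and /opt)
    systemd-sysext refresh   # re-merges after images under /etc/extensions or /var/lib/extensions change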
Oct 13 05:25:38.073771 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 05:25:38.078879 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:25:38.099653 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:25:38.107368 systemd[1]: Reload requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:25:38.107499 systemd[1]: Reloading... Oct 13 05:25:38.114396 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Oct 13 05:25:38.114787 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Oct 13 05:25:38.127575 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:25:38.127633 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:25:38.127952 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:25:38.128414 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:25:38.129417 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:25:38.129695 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Oct 13 05:25:38.129787 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Oct 13 05:25:38.136408 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:25:38.136562 systemd-tmpfiles[1387]: Skipping /boot Oct 13 05:25:38.200557 zram_generator::config[1418]: No configuration found. Oct 13 05:25:38.207754 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:25:38.207940 systemd-tmpfiles[1387]: Skipping /boot Oct 13 05:25:38.320851 systemd-resolved[1385]: Positive Trust Anchors: Oct 13 05:25:38.320870 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:25:38.320876 systemd-resolved[1385]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:25:38.320915 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:25:38.418017 systemd-resolved[1385]: Defaulting to hostname 'linux'. Oct 13 05:25:38.423787 systemd[1]: Reloading finished in 315 ms. Oct 13 05:25:38.445488 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:25:38.447750 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:25:38.450076 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:25:38.481617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:25:38.484447 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
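Note: the "Duplicate line for path" messages above are systemd-tmpfiles pointing out that more than one tmpfiles.d fragment declares the same path; the first declaration wins and later ones are ignored, so these are warnings rather than errors. The situation looks roughly like this (file names from the log, line contents illustrative):

    # /usr/lib/tmpfiles.d/systemd.conf
    d /var/log/journal 2755 root systemd-journal - -

    # /usr/lib/tmpfiles.d/systemd-flatcar.conf   (line 6 re-declares the same path, hence the warning)
    d /var/log/journal 2755 root systemd-journal - -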
Oct 13 05:25:38.493849 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:25:38.497889 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:25:38.501242 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:25:38.515112 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 05:25:38.519229 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:25:38.524426 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:25:38.536336 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:25:38.545680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:25:38.549317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:25:38.554665 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:25:38.559299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:25:38.561425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:25:38.561707 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:25:38.571359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:25:38.571677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:25:38.571888 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:25:38.580449 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:25:38.581856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:25:38.588034 systemd-udevd[1470]: Using default interface naming scheme 'v257'. Oct 13 05:25:38.588635 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:25:38.592060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:25:38.592339 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:25:38.595301 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:25:38.598857 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:25:38.599138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:25:38.610244 systemd[1]: Finished ensure-sysext.service. Oct 13 05:25:38.611002 augenrules[1498]: No rules Oct 13 05:25:38.612667 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:25:38.613000 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:25:38.619486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:25:38.621749 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Oct 13 05:25:38.623823 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:25:38.623875 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:25:38.623931 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:25:38.624004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:25:38.627967 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:25:38.637651 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:25:38.640889 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:25:38.641120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:25:38.649176 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:25:38.668633 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:25:38.671430 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:25:38.743774 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:25:38.861261 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:25:38.870282 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:25:38.920831 systemd-networkd[1518]: lo: Link UP Oct 13 05:25:38.920842 systemd-networkd[1518]: lo: Gained carrier Oct 13 05:25:38.922373 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:25:38.924423 systemd[1]: Reached target network.target - Network. Oct 13 05:25:38.927983 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:25:38.970711 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:25:39.021392 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:25:39.021474 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Oct 13 05:25:39.024814 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:25:39.024829 systemd-networkd[1518]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:25:39.026418 systemd-networkd[1518]: eth0: Link UP Oct 13 05:25:39.027830 systemd-networkd[1518]: eth0: Gained carrier Oct 13 05:25:39.027863 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 05:25:39.030536 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:25:39.036702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:25:39.040485 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 13 05:25:39.041746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:25:39.044209 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:25:39.044668 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:25:39.087758 systemd-networkd[1518]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:25:39.089169 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Oct 13 05:25:40.616801 systemd-timesyncd[1506]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 05:25:40.616984 systemd-timesyncd[1506]: Initial clock synchronization to Mon 2025-10-13 05:25:40.616411 UTC. Oct 13 05:25:40.617454 systemd-resolved[1385]: Clock change detected. Flushing caches. Oct 13 05:25:40.641368 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 13 05:25:40.641781 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 13 05:25:40.646734 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 13 05:25:40.656159 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:25:40.779275 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:40.808321 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:25:40.809756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:40.823676 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:25:40.983867 kernel: kvm_amd: TSC scaling supported Oct 13 05:25:40.983992 kernel: kvm_amd: Nested Virtualization enabled Oct 13 05:25:40.984037 kernel: kvm_amd: Nested Paging enabled Oct 13 05:25:40.986002 kernel: kvm_amd: LBR virtualization supported Oct 13 05:25:40.986050 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 13 05:25:40.986953 kernel: kvm_amd: Virtual GIF supported Oct 13 05:25:41.025988 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:25:41.034642 kernel: EDAC MC: Ver: 3.0.0 Oct 13 05:25:41.063920 ldconfig[1466]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:25:41.072348 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:25:41.076426 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:25:41.116066 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:25:41.118780 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:25:41.120902 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:25:41.123154 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:25:41.125337 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:25:41.127609 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:25:41.129710 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:25:41.131980 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
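Note: systemd-timesyncd above reaches 10.0.0.1:123 (most likely handed out via DHCP along with the lease seen earlier) and steps the clock, which is why the timestamps jump from 05:25:39 to 05:25:40 and systemd-resolved flushes its caches. An explicit server can be pinned with a drop-in of this shape (path and values illustrative):

    # /etc/systemd/timesyncd.conf.d/10-ntp.conf
    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org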
Oct 13 05:25:41.134116 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:25:41.134159 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:25:41.135746 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:25:41.138495 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:25:41.142194 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:25:41.146338 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:25:41.148850 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:25:41.151086 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:25:41.160285 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:25:41.162480 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:25:41.165247 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:25:41.168142 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:25:41.169807 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:25:41.171424 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:25:41.171457 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:25:41.173128 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:25:41.176639 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:25:41.180081 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:25:41.184588 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:25:41.188512 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:25:41.190382 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:25:41.205014 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:25:41.208615 jq[1583]: false Oct 13 05:25:41.209232 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:25:41.214526 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:25:41.217619 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:25:41.220844 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:25:41.222253 extend-filesystems[1584]: Found /dev/vda6 Oct 13 05:25:41.226829 extend-filesystems[1584]: Found /dev/vda9 Oct 13 05:25:41.229011 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Refreshing passwd entry cache Oct 13 05:25:41.229023 oslogin_cache_refresh[1585]: Refreshing passwd entry cache Oct 13 05:25:41.231641 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:25:41.233544 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:25:41.234187 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Oct 13 05:25:41.234576 extend-filesystems[1584]: Checking size of /dev/vda9 Oct 13 05:25:41.238577 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:25:41.244509 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Failure getting users, quitting Oct 13 05:25:41.244509 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:25:41.244509 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Refreshing group entry cache Oct 13 05:25:41.243950 oslogin_cache_refresh[1585]: Failure getting users, quitting Oct 13 05:25:41.243972 oslogin_cache_refresh[1585]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:25:41.244039 oslogin_cache_refresh[1585]: Refreshing group entry cache Oct 13 05:25:41.246309 extend-filesystems[1584]: Resized partition /dev/vda9 Oct 13 05:25:41.246977 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:25:41.250303 extend-filesystems[1607]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:25:41.256028 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Failure getting groups, quitting Oct 13 05:25:41.256028 google_oslogin_nss_cache[1585]: oslogin_cache_refresh[1585]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:25:41.255302 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:25:41.254740 oslogin_cache_refresh[1585]: Failure getting groups, quitting Oct 13 05:25:41.254761 oslogin_cache_refresh[1585]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:25:41.258413 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:25:41.259046 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:25:41.260235 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:25:41.260528 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:25:41.264279 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:25:41.264395 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 13 05:25:41.265729 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:25:41.267297 update_engine[1602]: I20251013 05:25:41.267143 1602 main.cc:92] Flatcar Update Engine starting Oct 13 05:25:41.278085 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:25:41.278381 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:25:41.292387 jq[1606]: true Oct 13 05:25:41.312388 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 13 05:25:41.317791 (ntainerd)[1625]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:25:41.353565 systemd-logind[1600]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:25:41.353632 systemd-logind[1600]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:25:41.354009 extend-filesystems[1607]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 05:25:41.354009 extend-filesystems[1607]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 05:25:41.354009 extend-filesystems[1607]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. 
Oct 13 05:25:41.363815 extend-filesystems[1584]: Resized filesystem in /dev/vda9 Oct 13 05:25:41.354242 systemd-logind[1600]: New seat seat0. Oct 13 05:25:41.356623 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:25:41.367702 jq[1627]: true Oct 13 05:25:41.358045 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:25:41.366020 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:25:41.382080 tar[1614]: linux-amd64/LICENSE Oct 13 05:25:41.382565 tar[1614]: linux-amd64/helm Oct 13 05:25:41.430930 dbus-daemon[1581]: [system] SELinux support is enabled Oct 13 05:25:41.431340 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:25:41.439072 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:25:41.439121 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:25:41.443111 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:25:41.443139 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:25:41.450690 update_engine[1602]: I20251013 05:25:41.450619 1602 update_check_scheduler.cc:74] Next update check in 2m2s Oct 13 05:25:41.451694 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 05:25:41.451840 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:25:41.465232 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:25:41.655908 locksmithd[1653]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:25:41.675478 sshd_keygen[1616]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:25:41.750324 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:25:41.754484 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:25:41.781774 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:25:41.782072 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:25:41.786286 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:25:41.824616 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 05:25:41.837548 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:25:41.842062 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:25:41.844588 systemd[1]: Reached target getty.target - Login Prompts. 
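The extend-filesystems records above describe an online ext4 grow of /dev/vda9 from 456704 to 1784827 4k blocks; a minimal sketch of the same check-and-grow done by hand, assuming only the device name from the log:
  lsblk /dev/vda9              # partition size as the kernel sees it
  sudo resize2fs /dev/vda9     # online-grow the mounted ext4 filesystem to fill the partition
  df -h /                      # confirm the new size at the mountpoint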
Oct 13 05:25:41.879186 containerd[1625]: time="2025-10-13T05:25:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:25:41.880148 containerd[1625]: time="2025-10-13T05:25:41.880069303Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:25:41.892752 containerd[1625]: time="2025-10-13T05:25:41.892687786Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.132µs" Oct 13 05:25:41.892752 containerd[1625]: time="2025-10-13T05:25:41.892731308Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:25:41.892752 containerd[1625]: time="2025-10-13T05:25:41.892757197Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:25:41.893031 containerd[1625]: time="2025-10-13T05:25:41.892988801Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:25:41.893031 containerd[1625]: time="2025-10-13T05:25:41.893015702Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:25:41.893102 containerd[1625]: time="2025-10-13T05:25:41.893052601Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:25:41.893175 containerd[1625]: time="2025-10-13T05:25:41.893143972Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:25:41.893175 containerd[1625]: time="2025-10-13T05:25:41.893163409Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:25:41.893648 containerd[1625]: time="2025-10-13T05:25:41.893525729Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:25:41.893648 containerd[1625]: time="2025-10-13T05:25:41.893633000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:25:41.893705 containerd[1625]: time="2025-10-13T05:25:41.893661223Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:25:41.893705 containerd[1625]: time="2025-10-13T05:25:41.893674758Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:25:41.893860 containerd[1625]: time="2025-10-13T05:25:41.893820872Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:25:41.894182 containerd[1625]: time="2025-10-13T05:25:41.894131665Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:25:41.894182 containerd[1625]: time="2025-10-13T05:25:41.894178734Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:25:41.894264 containerd[1625]: time="2025-10-13T05:25:41.894193992Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:25:41.894264 containerd[1625]: time="2025-10-13T05:25:41.894238746Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:25:41.894584 containerd[1625]: time="2025-10-13T05:25:41.894530153Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:25:41.894673 containerd[1625]: time="2025-10-13T05:25:41.894648315Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:25:42.089465 tar[1614]: linux-amd64/README.md Oct 13 05:25:42.122234 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:25:42.182490 systemd-networkd[1518]: eth0: Gained IPv6LL Oct 13 05:25:42.185773 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:25:42.188208 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:25:42.191441 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 05:25:42.194595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:25:42.197560 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:25:42.248623 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:25:42.251722 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 05:25:42.252015 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 05:25:42.256110 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Oct 13 05:25:42.805767 containerd[1625]: time="2025-10-13T05:25:42.805678993Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805821240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805844794Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805860824Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805880351Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805895079Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805912331Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805926117Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805939121Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805950613Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805963297Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.805989045Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.806184632Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.806220028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:25:42.806677 containerd[1625]: time="2025-10-13T05:25:42.806243663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806256487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806269010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806280902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806294097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806313674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 
13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806325366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806337338Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806372023Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806551650Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806571948Z" level=info msg="Start snapshots syncer" Oct 13 05:25:42.807088 containerd[1625]: time="2025-10-13T05:25:42.806609649Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:25:42.807391 bash[1652]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:25:42.807597 containerd[1625]: time="2025-10-13T05:25:42.806960658Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:25:42.807597 containerd[1625]: time="2025-10-13T05:25:42.807036259Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807155804Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807274196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:25:42.807791 containerd[1625]: 
time="2025-10-13T05:25:42.807295726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807308079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807318940Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807333056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807344117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807379804Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807411493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807426191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807439616Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807506612Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807539944Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:25:42.807791 containerd[1625]: time="2025-10-13T05:25:42.807550725Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:25:42.808234 containerd[1625]: time="2025-10-13T05:25:42.807565362Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:25:42.808234 containerd[1625]: time="2025-10-13T05:25:42.807573798Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:25:42.808234 containerd[1625]: time="2025-10-13T05:25:42.807598264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:25:42.808234 containerd[1625]: time="2025-10-13T05:25:42.807612240Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:25:42.808234 containerd[1625]: time="2025-10-13T05:25:42.807661793Z" level=info msg="runtime interface created" Oct 13 05:25:42.808234 containerd[1625]: time="2025-10-13T05:25:42.807671932Z" level=info msg="created NRI interface" Oct 13 05:25:42.808234 containerd[1625]: time="2025-10-13T05:25:42.807688503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:25:42.808234 containerd[1625]: time="2025-10-13T05:25:42.807702850Z" level=info msg="Connect containerd service" Oct 13 05:25:42.808234 containerd[1625]: time="2025-10-13T05:25:42.807727667Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:25:42.809670 containerd[1625]: time="2025-10-13T05:25:42.808851605Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:25:42.810097 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:25:42.813777 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 13 05:25:42.941036 containerd[1625]: time="2025-10-13T05:25:42.940973075Z" level=info msg="Start subscribing containerd event" Oct 13 05:25:42.941036 containerd[1625]: time="2025-10-13T05:25:42.941047003Z" level=info msg="Start recovering state" Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941249754Z" level=info msg="Start event monitor" Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941274370Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941287294Z" level=info msg="Start streaming server" Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941312752Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941323171Z" level=info msg="runtime interface starting up..." Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941334813Z" level=info msg="starting plugins..." Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941386510Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941650996Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941719315Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:25:42.941808 containerd[1625]: time="2025-10-13T05:25:42.941800817Z" level=info msg="containerd successfully booted in 1.063666s" Oct 13 05:25:42.942533 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:25:43.091908 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:25:43.095787 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:41374.service - OpenSSH per-connection server daemon (10.0.0.1:41374). Oct 13 05:25:43.189882 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 41374 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:43.192190 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:43.201597 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:25:43.205293 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:25:43.215601 systemd-logind[1600]: New session 1 of user core. Oct 13 05:25:43.249782 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:25:43.266490 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:25:43.285547 (systemd)[1725]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:25:43.289407 systemd-logind[1600]: New session c1 of user core. 
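The "failed to load cni during init" error above is expected at this stage: /etc/cni/net.d is still empty and is only populated once a pod network add-on installs its config. A short sketch of confirming that (paths from the containerd config above; the .conflist file name is hypothetical):
  ls /etc/cni/net.d      # empty until a network add-on (flannel, calico, ...) drops a config here
  ls /opt/cni/bin        # CNI plugin binaries referenced by the CRI config above
  # after the add-on is installed, a file such as 10-example.conflist appears and the warning stops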
Oct 13 05:25:43.543987 systemd[1725]: Queued start job for default target default.target. Oct 13 05:25:43.564698 systemd[1725]: Created slice app.slice - User Application Slice. Oct 13 05:25:43.564743 systemd[1725]: Reached target paths.target - Paths. Oct 13 05:25:43.564809 systemd[1725]: Reached target timers.target - Timers. Oct 13 05:25:43.568109 systemd[1725]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:25:43.631086 systemd[1725]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:25:43.631277 systemd[1725]: Reached target sockets.target - Sockets. Oct 13 05:25:43.631344 systemd[1725]: Reached target basic.target - Basic System. Oct 13 05:25:43.631415 systemd[1725]: Reached target default.target - Main User Target. Oct 13 05:25:43.631460 systemd[1725]: Startup finished in 323ms. Oct 13 05:25:43.632246 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:25:43.679754 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:25:43.774539 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:41386.service - OpenSSH per-connection server daemon (10.0.0.1:41386). Oct 13 05:25:43.895622 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 41386 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:43.901798 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:43.921741 systemd-logind[1600]: New session 2 of user core. Oct 13 05:25:43.942438 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:25:43.959584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:25:43.970474 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:25:43.972025 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:25:43.981110 systemd[1]: Startup finished in 3.314s (kernel) + 7.053s (initrd) + 6.316s (userspace) = 16.684s. Oct 13 05:25:44.030102 sshd[1743]: Connection closed by 10.0.0.1 port 41386 Oct 13 05:25:44.032952 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:44.053979 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:41386.service: Deactivated successfully. Oct 13 05:25:44.065667 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:25:44.069316 systemd-logind[1600]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:25:44.078115 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:41394.service - OpenSSH per-connection server daemon (10.0.0.1:41394). Oct 13 05:25:44.079539 systemd-logind[1600]: Removed session 2. Oct 13 05:25:44.168608 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 41394 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:44.169320 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:44.190991 systemd-logind[1600]: New session 3 of user core. Oct 13 05:25:44.210607 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:25:44.297519 sshd[1754]: Connection closed by 10.0.0.1 port 41394 Oct 13 05:25:44.302735 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:44.334083 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:41394.service: Deactivated successfully. Oct 13 05:25:44.349175 systemd[1]: session-3.scope: Deactivated successfully. 
Oct 13 05:25:44.352825 systemd-logind[1600]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:25:44.374630 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:41406.service - OpenSSH per-connection server daemon (10.0.0.1:41406). Oct 13 05:25:44.377741 systemd-logind[1600]: Removed session 3. Oct 13 05:25:44.509093 sshd[1766]: Accepted publickey for core from 10.0.0.1 port 41406 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:44.511168 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:44.523303 systemd-logind[1600]: New session 4 of user core. Oct 13 05:25:44.533596 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:25:44.624419 sshd[1772]: Connection closed by 10.0.0.1 port 41406 Oct 13 05:25:44.624531 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:44.642571 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:41406.service: Deactivated successfully. Oct 13 05:25:44.647144 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:25:44.655939 systemd-logind[1600]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:25:44.659811 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:41412.service - OpenSSH per-connection server daemon (10.0.0.1:41412). Oct 13 05:25:44.663534 systemd-logind[1600]: Removed session 4. Oct 13 05:25:44.807688 sshd[1778]: Accepted publickey for core from 10.0.0.1 port 41412 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:44.814420 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:44.842875 systemd-logind[1600]: New session 5 of user core. Oct 13 05:25:44.852560 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 13 05:25:44.982893 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:25:44.985038 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:25:45.010178 sudo[1782]: pam_unix(sudo:session): session closed for user root Oct 13 05:25:45.014948 sshd[1781]: Connection closed by 10.0.0.1 port 41412 Oct 13 05:25:45.019776 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:45.040025 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:41412.service: Deactivated successfully. Oct 13 05:25:45.042739 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:25:45.049143 systemd-logind[1600]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:25:45.059841 systemd-logind[1600]: Removed session 5. Oct 13 05:25:45.064012 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:41420.service - OpenSSH per-connection server daemon (10.0.0.1:41420). Oct 13 05:25:45.204250 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 41420 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:45.206168 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:45.226951 systemd-logind[1600]: New session 6 of user core. Oct 13 05:25:45.235630 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 13 05:25:45.335653 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:25:45.335989 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:25:45.451779 sudo[1795]: pam_unix(sudo:session): session closed for user root Oct 13 05:25:45.474686 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:25:45.475119 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:25:45.504597 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:25:45.519092 kubelet[1744]: E1013 05:25:45.515601 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:25:45.530407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:25:45.530685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:25:45.531586 systemd[1]: kubelet.service: Consumed 1.766s CPU time, 268.7M memory peak. Oct 13 05:25:45.592619 augenrules[1818]: No rules Oct 13 05:25:45.594757 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:25:45.595178 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:25:45.600178 sudo[1794]: pam_unix(sudo:session): session closed for user root Oct 13 05:25:45.611511 sshd[1793]: Connection closed by 10.0.0.1 port 41420 Oct 13 05:25:45.608663 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Oct 13 05:25:45.634895 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:41420.service: Deactivated successfully. Oct 13 05:25:45.655624 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:25:45.675582 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:41432.service - OpenSSH per-connection server daemon (10.0.0.1:41432). Oct 13 05:25:45.676403 systemd-logind[1600]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:25:45.679594 systemd-logind[1600]: Removed session 6. Oct 13 05:25:45.793035 sshd[1827]: Accepted publickey for core from 10.0.0.1 port 41432 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:25:45.797841 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:25:45.813645 systemd-logind[1600]: New session 7 of user core. Oct 13 05:25:45.838785 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 05:25:45.926445 sudo[1831]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:25:45.926848 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:25:47.571757 systemd[1]: Starting docker.service - Docker Application Container Engine... 
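The kubelet exit above (run.go:72, status=1/FAILURE) comes from the missing /var/lib/kubelet/config.yaml; on a kubeadm-provisioned node that file is only written by kubeadm init/join, so the repeated restarts seen later in the log are expected until then. A minimal sketch of inspecting the state (commands assumed available, not taken from the log):
  systemctl status kubelet       # shows the exit-code failure and the restart counter
  ls /var/lib/kubelet/           # config.yaml is absent until kubeadm init/join writes it
  journalctl -u kubelet -n 20    # the same run.go:72 error recorded above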
Oct 13 05:25:47.599026 (dockerd)[1851]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:25:48.297574 dockerd[1851]: time="2025-10-13T05:25:48.297435761Z" level=info msg="Starting up" Oct 13 05:25:48.299345 dockerd[1851]: time="2025-10-13T05:25:48.299069917Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:25:48.336820 dockerd[1851]: time="2025-10-13T05:25:48.336746681Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:25:48.942796 dockerd[1851]: time="2025-10-13T05:25:48.942701145Z" level=info msg="Loading containers: start." Oct 13 05:25:48.975399 kernel: Initializing XFRM netlink socket Oct 13 05:25:49.438964 systemd-networkd[1518]: docker0: Link UP Oct 13 05:25:49.454498 dockerd[1851]: time="2025-10-13T05:25:49.454412221Z" level=info msg="Loading containers: done." Oct 13 05:25:49.488668 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck83881174-merged.mount: Deactivated successfully. Oct 13 05:25:49.492416 dockerd[1851]: time="2025-10-13T05:25:49.492330158Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:25:49.492556 dockerd[1851]: time="2025-10-13T05:25:49.492463418Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:25:49.492628 dockerd[1851]: time="2025-10-13T05:25:49.492557815Z" level=info msg="Initializing buildkit" Oct 13 05:25:49.529822 dockerd[1851]: time="2025-10-13T05:25:49.529743218Z" level=info msg="Completed buildkit initialization" Oct 13 05:25:49.540338 dockerd[1851]: time="2025-10-13T05:25:49.540242475Z" level=info msg="Daemon has completed initialization" Oct 13 05:25:49.540538 dockerd[1851]: time="2025-10-13T05:25:49.540428865Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:25:49.540675 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:25:50.914331 containerd[1625]: time="2025-10-13T05:25:50.914275679Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 13 05:25:52.433767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998350826.mount: Deactivated successfully. 
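Once the "API listen on /run/docker.sock" message above appears the daemon is usable; a brief sketch of verifying it, assuming the docker CLI is installed alongside the daemon:
  systemctl is-active docker   # should report 'active' after the startup above
  docker version               # client plus the 28.0.4 daemon version reported in the log
  docker network ls            # includes the default bridge network backed by the docker0 link brought up above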
Oct 13 05:25:53.873398 containerd[1625]: time="2025-10-13T05:25:53.873301693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:25:53.874001 containerd[1625]: time="2025-10-13T05:25:53.873925945Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Oct 13 05:25:53.875103 containerd[1625]: time="2025-10-13T05:25:53.875068248Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:25:53.877720 containerd[1625]: time="2025-10-13T05:25:53.877689796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:25:53.878614 containerd[1625]: time="2025-10-13T05:25:53.878582501Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.964259893s" Oct 13 05:25:53.878692 containerd[1625]: time="2025-10-13T05:25:53.878621564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 13 05:25:53.879596 containerd[1625]: time="2025-10-13T05:25:53.879328761Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 13 05:25:55.604871 containerd[1625]: time="2025-10-13T05:25:55.604807983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:25:55.605696 containerd[1625]: time="2025-10-13T05:25:55.605639192Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Oct 13 05:25:55.606979 containerd[1625]: time="2025-10-13T05:25:55.606944772Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:25:55.609220 containerd[1625]: time="2025-10-13T05:25:55.609186838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:25:55.610164 containerd[1625]: time="2025-10-13T05:25:55.610121772Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.730730444s" Oct 13 05:25:55.610164 containerd[1625]: time="2025-10-13T05:25:55.610165003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 13 05:25:55.610853 
containerd[1625]: time="2025-10-13T05:25:55.610650143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 13 05:25:55.756026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:25:55.758196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:25:56.038232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:25:56.043084 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:25:56.205305 kubelet[2142]: E1013 05:25:56.205224 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:25:56.212168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:25:56.212742 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:25:56.213142 systemd[1]: kubelet.service: Consumed 320ms CPU time, 111.2M memory peak. Oct 13 05:25:58.523795 containerd[1625]: time="2025-10-13T05:25:58.523706785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:25:58.524582 containerd[1625]: time="2025-10-13T05:25:58.524527315Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Oct 13 05:25:58.525952 containerd[1625]: time="2025-10-13T05:25:58.525900331Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:25:58.528686 containerd[1625]: time="2025-10-13T05:25:58.528637405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:25:58.529572 containerd[1625]: time="2025-10-13T05:25:58.529527325Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.91884418s" Oct 13 05:25:58.529636 containerd[1625]: time="2025-10-13T05:25:58.529584753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 13 05:25:58.530293 containerd[1625]: time="2025-10-13T05:25:58.530257375Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 13 05:25:59.988646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734274101.mount: Deactivated successfully. 
Oct 13 05:26:01.029415 containerd[1625]: time="2025-10-13T05:26:01.029310777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:01.030172 containerd[1625]: time="2025-10-13T05:26:01.030129313Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Oct 13 05:26:01.033127 containerd[1625]: time="2025-10-13T05:26:01.033090388Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:01.035337 containerd[1625]: time="2025-10-13T05:26:01.035276239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:01.035834 containerd[1625]: time="2025-10-13T05:26:01.035799681Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.505503814s" Oct 13 05:26:01.035878 containerd[1625]: time="2025-10-13T05:26:01.035836450Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 13 05:26:01.036444 containerd[1625]: time="2025-10-13T05:26:01.036411539Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 13 05:26:01.629423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956173321.mount: Deactivated successfully. 
Oct 13 05:26:03.082550 containerd[1625]: time="2025-10-13T05:26:03.082437870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:03.083794 containerd[1625]: time="2025-10-13T05:26:03.083768046Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Oct 13 05:26:03.085526 containerd[1625]: time="2025-10-13T05:26:03.085491229Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:03.089602 containerd[1625]: time="2025-10-13T05:26:03.089509217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:03.090847 containerd[1625]: time="2025-10-13T05:26:03.090812021Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.05436219s" Oct 13 05:26:03.090919 containerd[1625]: time="2025-10-13T05:26:03.090853418Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 13 05:26:03.091669 containerd[1625]: time="2025-10-13T05:26:03.091613795Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 05:26:04.007055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372973487.mount: Deactivated successfully. 
Oct 13 05:26:04.017198 containerd[1625]: time="2025-10-13T05:26:04.017102077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:26:04.018043 containerd[1625]: time="2025-10-13T05:26:04.017995203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 13 05:26:04.019365 containerd[1625]: time="2025-10-13T05:26:04.019300662Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:26:04.021914 containerd[1625]: time="2025-10-13T05:26:04.021858460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:26:04.022634 containerd[1625]: time="2025-10-13T05:26:04.022593349Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 930.935712ms" Oct 13 05:26:04.022679 containerd[1625]: time="2025-10-13T05:26:04.022631430Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 13 05:26:04.023345 containerd[1625]: time="2025-10-13T05:26:04.023132531Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 13 05:26:05.282065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033213113.mount: Deactivated successfully. Oct 13 05:26:06.256111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 05:26:06.258103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:06.508602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:06.514018 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:26:06.552989 kubelet[2275]: E1013 05:26:06.552895 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:26:06.557788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:26:06.558029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:26:06.558462 systemd[1]: kubelet.service: Consumed 234ms CPU time, 112.6M memory peak. 
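The pulls recorded above stage the control-plane images in containerd's k8s.io namespace; a short sketch of listing them on the node (the crictl and ctr CLIs are assumed to be installed and pointed at the containerd socket from the log):
  crictl images                                     # registry.k8s.io images pulled above
  ctr -n k8s.io images ls | grep registry.k8s.io    # same view through the containerd CLI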
Oct 13 05:26:08.322704 containerd[1625]: time="2025-10-13T05:26:08.322648329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:08.323689 containerd[1625]: time="2025-10-13T05:26:08.323640300Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 13 05:26:08.325171 containerd[1625]: time="2025-10-13T05:26:08.325125346Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:08.329318 containerd[1625]: time="2025-10-13T05:26:08.329271024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:08.331300 containerd[1625]: time="2025-10-13T05:26:08.331247111Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.308081157s" Oct 13 05:26:08.331300 containerd[1625]: time="2025-10-13T05:26:08.331283920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 13 05:26:12.345379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:12.345608 systemd[1]: kubelet.service: Consumed 234ms CPU time, 112.6M memory peak. Oct 13 05:26:12.348108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:12.375626 systemd[1]: Reload requested from client PID 2322 ('systemctl') (unit session-7.scope)... Oct 13 05:26:12.375641 systemd[1]: Reloading... Oct 13 05:26:12.513407 zram_generator::config[2366]: No configuration found. Oct 13 05:26:12.820136 systemd[1]: Reloading finished in 444 ms. Oct 13 05:26:12.905452 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:26:12.905581 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:26:12.905994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:12.906072 systemd[1]: kubelet.service: Consumed 177ms CPU time, 98.4M memory peak. Oct 13 05:26:12.908175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:13.103929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:13.122843 (kubelet)[2413]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:26:13.173784 kubelet[2413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:26:13.173784 kubelet[2413]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:26:13.173784 kubelet[2413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:26:13.174327 kubelet[2413]: I1013 05:26:13.173850 2413 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:26:14.154931 kubelet[2413]: I1013 05:26:14.154878 2413 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:26:14.154931 kubelet[2413]: I1013 05:26:14.154913 2413 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:26:14.155219 kubelet[2413]: I1013 05:26:14.155200 2413 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:26:14.214764 kubelet[2413]: E1013 05:26:14.214703 2413 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:26:14.281344 kubelet[2413]: I1013 05:26:14.281287 2413 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:26:14.385423 kubelet[2413]: I1013 05:26:14.385382 2413 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:26:14.394686 kubelet[2413]: I1013 05:26:14.394341 2413 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 05:26:14.394686 kubelet[2413]: I1013 05:26:14.394645 2413 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:26:14.395237 kubelet[2413]: I1013 05:26:14.394672 2413 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:26:14.395237 kubelet[2413]: I1013 05:26:14.395239 2413 topology_manager.go:138] "Creating 
topology manager with none policy" Oct 13 05:26:14.395237 kubelet[2413]: I1013 05:26:14.395253 2413 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:26:14.395538 kubelet[2413]: I1013 05:26:14.395438 2413 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:26:14.397900 kubelet[2413]: I1013 05:26:14.397871 2413 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:26:14.397900 kubelet[2413]: I1013 05:26:14.397894 2413 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:26:14.400155 kubelet[2413]: I1013 05:26:14.400121 2413 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:26:14.400155 kubelet[2413]: I1013 05:26:14.400146 2413 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:26:14.416867 kubelet[2413]: E1013 05:26:14.416749 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:26:14.416867 kubelet[2413]: E1013 05:26:14.416842 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:26:14.447732 kubelet[2413]: I1013 05:26:14.447703 2413 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:26:14.448188 kubelet[2413]: I1013 05:26:14.448157 2413 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:26:14.448731 kubelet[2413]: W1013 05:26:14.448702 2413 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
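[Editor's note] The HardEvictionThresholds block in the container-manager dump above encodes the kubelet's default hard-eviction signals: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A rough Python sketch of how such thresholds are evaluated; the thresholds are copied from the dump, the node statistics are invented purely for illustration.

```python
# Sketch: evaluate the HardEvictionThresholds from the container_manager_linux
# dump above. Thresholds come from the log; the observed stats are made up.
thresholds = {
    "memory.available": ("quantity", 100 * 1024 * 1024),   # 100Mi in bytes
    "nodefs.available": ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

# Hypothetical observations: (available, capacity) per signal.
observed = {
    "memory.available": (80 * 1024 * 1024, 4 * 1024**3),
    "nodefs.available": (12 * 1024**3, 40 * 1024**3),
    "nodefs.inodesFree": (900_000, 2_621_440),
    "imagefs.available": (5 * 1024**3, 40 * 1024**3),
    "imagefs.inodesFree": (150_000, 2_621_440),
}

for signal, (kind, value) in thresholds.items():
    available, capacity = observed[signal]
    limit = value if kind == "quantity" else value * capacity
    status = "below threshold -> eviction signal fires" if available < limit else "ok"
    print(f"{signal}: {status}")
```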
Oct 13 05:26:14.451049 kubelet[2413]: I1013 05:26:14.451034 2413 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:26:14.451116 kubelet[2413]: I1013 05:26:14.451084 2413 server.go:1289] "Started kubelet" Oct 13 05:26:14.451767 kubelet[2413]: I1013 05:26:14.451590 2413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:26:14.465044 kubelet[2413]: I1013 05:26:14.465004 2413 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:26:14.465979 kubelet[2413]: I1013 05:26:14.465948 2413 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:26:14.466588 kubelet[2413]: I1013 05:26:14.466570 2413 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:26:14.466794 kubelet[2413]: I1013 05:26:14.466680 2413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:26:14.468610 kubelet[2413]: I1013 05:26:14.468557 2413 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:26:14.471011 kubelet[2413]: E1013 05:26:14.470754 2413 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:26:14.471011 kubelet[2413]: I1013 05:26:14.470813 2413 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:26:14.471011 kubelet[2413]: I1013 05:26:14.470984 2413 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:26:14.471140 kubelet[2413]: I1013 05:26:14.471057 2413 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:26:14.471554 kubelet[2413]: E1013 05:26:14.471525 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:26:14.471823 kubelet[2413]: I1013 05:26:14.471789 2413 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:26:14.471898 kubelet[2413]: I1013 05:26:14.471871 2413 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:26:14.472167 kubelet[2413]: E1013 05:26:14.470829 2413 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df5b20d7da371 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:26:14.451053425 +0000 UTC m=+1.323478247,LastTimestamp:2025-10-13 05:26:14.451053425 +0000 UTC m=+1.323478247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:26:14.472553 kubelet[2413]: E1013 05:26:14.472505 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Oct 13 05:26:14.472973 kubelet[2413]: E1013 05:26:14.472932 2413 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:26:14.473107 kubelet[2413]: I1013 05:26:14.473059 2413 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:26:14.494155 kubelet[2413]: I1013 05:26:14.494107 2413 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:26:14.494155 kubelet[2413]: I1013 05:26:14.494131 2413 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:26:14.494155 kubelet[2413]: I1013 05:26:14.494159 2413 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:26:14.554994 kubelet[2413]: I1013 05:26:14.554944 2413 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:26:14.556675 kubelet[2413]: I1013 05:26:14.556642 2413 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 05:26:14.556675 kubelet[2413]: I1013 05:26:14.556677 2413 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:26:14.556842 kubelet[2413]: I1013 05:26:14.556712 2413 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:26:14.556842 kubelet[2413]: I1013 05:26:14.556724 2413 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:26:14.556842 kubelet[2413]: E1013 05:26:14.556778 2413 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:26:14.557394 kubelet[2413]: E1013 05:26:14.557335 2413 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:26:14.571623 kubelet[2413]: E1013 05:26:14.571592 2413 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:26:14.585577 kubelet[2413]: I1013 05:26:14.585540 2413 policy_none.go:49] "None policy: Start" Oct 13 05:26:14.585658 kubelet[2413]: I1013 05:26:14.585598 2413 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:26:14.585658 kubelet[2413]: I1013 05:26:14.585624 2413 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:26:14.615712 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:26:14.627478 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:26:14.631200 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 13 05:26:14.642375 kubelet[2413]: E1013 05:26:14.642309 2413 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:26:14.642604 kubelet[2413]: I1013 05:26:14.642568 2413 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:26:14.642798 kubelet[2413]: I1013 05:26:14.642594 2413 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:26:14.642906 kubelet[2413]: I1013 05:26:14.642858 2413 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:26:14.644167 kubelet[2413]: E1013 05:26:14.644142 2413 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:26:14.644243 kubelet[2413]: E1013 05:26:14.644201 2413 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:26:14.672739 kubelet[2413]: I1013 05:26:14.672618 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cacfbac397e4e97a9f14b01a9140f946-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cacfbac397e4e97a9f14b01a9140f946\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:14.672739 kubelet[2413]: I1013 05:26:14.672656 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cacfbac397e4e97a9f14b01a9140f946-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cacfbac397e4e97a9f14b01a9140f946\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:14.672739 kubelet[2413]: I1013 05:26:14.672677 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cacfbac397e4e97a9f14b01a9140f946-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cacfbac397e4e97a9f14b01a9140f946\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:14.672996 kubelet[2413]: E1013 05:26:14.672946 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Oct 13 05:26:14.739705 systemd[1]: Created slice kubepods-burstable-podcacfbac397e4e97a9f14b01a9140f946.slice - libcontainer container kubepods-burstable-podcacfbac397e4e97a9f14b01a9140f946.slice. 
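[Editor's note] The ca-certs, k8s-certs and usr-share-ca-certificates volumes being verified above are hostPath mounts declared in the kube-apiserver static pod manifest under /etc/kubernetes/manifests. A minimal sketch of such a manifest, assembled in Python; the log records only the volume names, so the host paths and image tag below are assumptions based on typical kubeadm defaults, not values read from this boot.

```python
# Sketch: a minimal static pod manifest carrying the hostPath volumes named in
# the VerifyControllerAttachedVolume entries above. Paths and image tag are
# assumed kubeadm-style defaults, not taken from this log.
import yaml  # PyYAML

manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "kube-apiserver", "namespace": "kube-system"},
    "spec": {
        "containers": [{
            "name": "kube-apiserver",
            "image": "registry.k8s.io/kube-apiserver:v1.33.0",  # assumed tag
            "volumeMounts": [
                {"name": "k8s-certs", "mountPath": "/etc/kubernetes/pki", "readOnly": True},
                {"name": "ca-certs", "mountPath": "/etc/ssl/certs", "readOnly": True},
            ],
        }],
        "volumes": [
            {"name": "k8s-certs", "hostPath": {"path": "/etc/kubernetes/pki", "type": "DirectoryOrCreate"}},
            {"name": "ca-certs", "hostPath": {"path": "/etc/ssl/certs", "type": "DirectoryOrCreate"}},
        ],
    },
}

print(yaml.safe_dump(manifest, sort_keys=False))
```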
Oct 13 05:26:14.743952 kubelet[2413]: I1013 05:26:14.743928 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:14.744364 kubelet[2413]: E1013 05:26:14.744312 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Oct 13 05:26:14.759255 kubelet[2413]: E1013 05:26:14.759234 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:14.773806 kubelet[2413]: I1013 05:26:14.773721 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:14.773806 kubelet[2413]: I1013 05:26:14.773821 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:14.774014 kubelet[2413]: I1013 05:26:14.773843 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:14.774014 kubelet[2413]: I1013 05:26:14.773859 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:14.774014 kubelet[2413]: I1013 05:26:14.773880 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:14.789062 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 13 05:26:14.790966 kubelet[2413]: E1013 05:26:14.790924 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:14.841623 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
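[Editor's note] The "Failed to ensure lease exists, will retry" entries show the retry interval doubling while the API server at 10.0.0.15:6443 is still refusing connections: 200 ms earlier, 400 ms above, and 800 ms in the entries shortly below. A tiny sketch of that doubling; the starting value matches the log, while the cap is only a parameter here because this boot never runs long enough to reveal one.

```python
# Sketch: the doubling retry interval visible in the lease-controller entries
# (200ms -> 400ms -> 800ms). Initial value matches the log; the cap is an
# arbitrary illustrative parameter, not observed in this excerpt.
from itertools import count

def lease_retry_intervals(initial_ms: int = 200, cap_ms: int = 7000):
    interval = initial_ms
    for _ in count():
        yield interval
        interval = min(interval * 2, cap_ms)

gen = lease_retry_intervals()
print([next(gen) for _ in range(5)])  # [200, 400, 800, 1600, 3200]
```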
Oct 13 05:26:14.843439 kubelet[2413]: E1013 05:26:14.843410 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:14.874994 kubelet[2413]: I1013 05:26:14.874911 2413 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:14.946532 kubelet[2413]: I1013 05:26:14.946408 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:14.946907 kubelet[2413]: E1013 05:26:14.946867 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Oct 13 05:26:15.060021 kubelet[2413]: E1013 05:26:15.059942 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:15.060854 containerd[1625]: time="2025-10-13T05:26:15.060783787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cacfbac397e4e97a9f14b01a9140f946,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:15.073639 kubelet[2413]: E1013 05:26:15.073574 2413 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Oct 13 05:26:15.091809 containerd[1625]: time="2025-10-13T05:26:15.091715202Z" level=info msg="connecting to shim ff58b0cb387f9824c0253dfd953dc6d9ffc3c9e46a68d93bd32773059a50b526" address="unix:///run/containerd/s/5a727e7f07d7fcbbd09ae1dd031b51a7e180e5411946d0cdf505b95f5a19dc0e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:15.092024 kubelet[2413]: E1013 05:26:15.091827 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:15.092551 containerd[1625]: time="2025-10-13T05:26:15.092506871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:15.150756 kubelet[2413]: E1013 05:26:15.150704 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:15.151416 containerd[1625]: time="2025-10-13T05:26:15.151377312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:15.156377 containerd[1625]: time="2025-10-13T05:26:15.155707436Z" level=info msg="connecting to shim 0c1fb0ab737704e3be3a2ceb87e76052488d78b63e717c7f32b8583849dfda3b" address="unix:///run/containerd/s/3dc948814ebcf81e8b27c9c823a7d2e5df9b4f4f26016a01f095b3c171d51ae3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:15.157592 systemd[1]: Started cri-containerd-ff58b0cb387f9824c0253dfd953dc6d9ffc3c9e46a68d93bd32773059a50b526.scope - libcontainer container 
ff58b0cb387f9824c0253dfd953dc6d9ffc3c9e46a68d93bd32773059a50b526. Oct 13 05:26:15.185260 containerd[1625]: time="2025-10-13T05:26:15.185158041Z" level=info msg="connecting to shim cf5c6957619cb8a7cebe395c074819c1453fbbabeb982dd854ad5158bcb621fa" address="unix:///run/containerd/s/d33078b5f2699b8371e461a5e6ed43264968f895dbab0c47554f3a7fe9f25e89" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:15.198752 systemd[1]: Started cri-containerd-0c1fb0ab737704e3be3a2ceb87e76052488d78b63e717c7f32b8583849dfda3b.scope - libcontainer container 0c1fb0ab737704e3be3a2ceb87e76052488d78b63e717c7f32b8583849dfda3b. Oct 13 05:26:15.228643 systemd[1]: Started cri-containerd-cf5c6957619cb8a7cebe395c074819c1453fbbabeb982dd854ad5158bcb621fa.scope - libcontainer container cf5c6957619cb8a7cebe395c074819c1453fbbabeb982dd854ad5158bcb621fa. Oct 13 05:26:15.236915 containerd[1625]: time="2025-10-13T05:26:15.236837530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cacfbac397e4e97a9f14b01a9140f946,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff58b0cb387f9824c0253dfd953dc6d9ffc3c9e46a68d93bd32773059a50b526\"" Oct 13 05:26:15.238508 kubelet[2413]: E1013 05:26:15.238480 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:15.250339 containerd[1625]: time="2025-10-13T05:26:15.249179111Z" level=info msg="CreateContainer within sandbox \"ff58b0cb387f9824c0253dfd953dc6d9ffc3c9e46a68d93bd32773059a50b526\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:26:15.264079 containerd[1625]: time="2025-10-13T05:26:15.263994796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c1fb0ab737704e3be3a2ceb87e76052488d78b63e717c7f32b8583849dfda3b\"" Oct 13 05:26:15.265674 containerd[1625]: time="2025-10-13T05:26:15.265626977Z" level=info msg="Container d29d098dee7faaac1696473890b033a70788e397ed8a608068e0c64917320bca: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:15.265785 kubelet[2413]: E1013 05:26:15.265757 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:15.271959 containerd[1625]: time="2025-10-13T05:26:15.271915559Z" level=info msg="CreateContainer within sandbox \"0c1fb0ab737704e3be3a2ceb87e76052488d78b63e717c7f32b8583849dfda3b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:26:15.279697 containerd[1625]: time="2025-10-13T05:26:15.279622510Z" level=info msg="CreateContainer within sandbox \"ff58b0cb387f9824c0253dfd953dc6d9ffc3c9e46a68d93bd32773059a50b526\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d29d098dee7faaac1696473890b033a70788e397ed8a608068e0c64917320bca\"" Oct 13 05:26:15.281088 containerd[1625]: time="2025-10-13T05:26:15.280562482Z" level=info msg="StartContainer for \"d29d098dee7faaac1696473890b033a70788e397ed8a608068e0c64917320bca\"" Oct 13 05:26:15.284337 containerd[1625]: time="2025-10-13T05:26:15.283583577Z" level=info msg="Container 481356b5cb4e916c81a316ed85918f4ec19d0cb3c1701da08427e355904bc21c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:15.284337 containerd[1625]: time="2025-10-13T05:26:15.283768091Z" level=info msg="connecting to 
shim d29d098dee7faaac1696473890b033a70788e397ed8a608068e0c64917320bca" address="unix:///run/containerd/s/5a727e7f07d7fcbbd09ae1dd031b51a7e180e5411946d0cdf505b95f5a19dc0e" protocol=ttrpc version=3 Oct 13 05:26:15.292517 containerd[1625]: time="2025-10-13T05:26:15.292430404Z" level=info msg="CreateContainer within sandbox \"0c1fb0ab737704e3be3a2ceb87e76052488d78b63e717c7f32b8583849dfda3b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"481356b5cb4e916c81a316ed85918f4ec19d0cb3c1701da08427e355904bc21c\"" Oct 13 05:26:15.293165 containerd[1625]: time="2025-10-13T05:26:15.293122862Z" level=info msg="StartContainer for \"481356b5cb4e916c81a316ed85918f4ec19d0cb3c1701da08427e355904bc21c\"" Oct 13 05:26:15.294524 containerd[1625]: time="2025-10-13T05:26:15.294451691Z" level=info msg="connecting to shim 481356b5cb4e916c81a316ed85918f4ec19d0cb3c1701da08427e355904bc21c" address="unix:///run/containerd/s/3dc948814ebcf81e8b27c9c823a7d2e5df9b4f4f26016a01f095b3c171d51ae3" protocol=ttrpc version=3 Oct 13 05:26:15.315603 systemd[1]: Started cri-containerd-d29d098dee7faaac1696473890b033a70788e397ed8a608068e0c64917320bca.scope - libcontainer container d29d098dee7faaac1696473890b033a70788e397ed8a608068e0c64917320bca. Oct 13 05:26:15.329939 containerd[1625]: time="2025-10-13T05:26:15.329892625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf5c6957619cb8a7cebe395c074819c1453fbbabeb982dd854ad5158bcb621fa\"" Oct 13 05:26:15.332159 kubelet[2413]: E1013 05:26:15.332126 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:15.339625 containerd[1625]: time="2025-10-13T05:26:15.339575547Z" level=info msg="CreateContainer within sandbox \"cf5c6957619cb8a7cebe395c074819c1453fbbabeb982dd854ad5158bcb621fa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:26:15.341569 systemd[1]: Started cri-containerd-481356b5cb4e916c81a316ed85918f4ec19d0cb3c1701da08427e355904bc21c.scope - libcontainer container 481356b5cb4e916c81a316ed85918f4ec19d0cb3c1701da08427e355904bc21c. 
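[Editor's note] The containerd entries around this point trace the CRI lifecycle for each control-plane pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, a shim connection is opened over a unix socket using ttrpc, and StartContainer is reported as returning successfully in the entries that follow. A small sketch that pulls those ids out of lines like the ones here, purely to illustrate the sequence; the sample strings are abbreviated copies of this log's msg fields with the `\"` escaping dropped for readability.

```python
# Sketch: extract the sandbox/container lifecycle from containerd-style log
# lines like those around this point. Samples are abbreviated from this log.
import re

samples = [
    'msg="RunPodSandbox for kube-apiserver-localhost returns sandbox id "ff58b0cb387f9824""',
    'msg="CreateContainer within sandbox "ff58b0cb387f9824" returns container id "d29d098dee7faaac""',
    'msg="StartContainer for "d29d098dee7faaac" returns successfully"',
]

events = []
for line in samples:
    if m := re.search(r'returns sandbox id "(\w+)"', line):
        events.append(("sandbox", m.group(1)))
    elif m := re.search(r'returns container id "(\w+)"', line):
        events.append(("container", m.group(1)))
    elif "returns successfully" in line:
        events.append(("started", re.search(r'for "(\w+)"', line).group(1)))

print(events)
# [('sandbox', 'ff58b0cb387f9824'), ('container', 'd29d098dee7faaac'), ('started', 'd29d098dee7faaac')]
```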
Oct 13 05:26:15.349251 kubelet[2413]: I1013 05:26:15.349189 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:15.349937 kubelet[2413]: E1013 05:26:15.349902 2413 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Oct 13 05:26:15.352295 containerd[1625]: time="2025-10-13T05:26:15.352240628Z" level=info msg="Container e61ac2c4bb1de58f9c408a276e2fb60ce41d4ed3bd2aff539742a0ca0431f051: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:15.362931 containerd[1625]: time="2025-10-13T05:26:15.362884262Z" level=info msg="CreateContainer within sandbox \"cf5c6957619cb8a7cebe395c074819c1453fbbabeb982dd854ad5158bcb621fa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e61ac2c4bb1de58f9c408a276e2fb60ce41d4ed3bd2aff539742a0ca0431f051\"" Oct 13 05:26:15.363746 containerd[1625]: time="2025-10-13T05:26:15.363721217Z" level=info msg="StartContainer for \"e61ac2c4bb1de58f9c408a276e2fb60ce41d4ed3bd2aff539742a0ca0431f051\"" Oct 13 05:26:15.365004 containerd[1625]: time="2025-10-13T05:26:15.364979461Z" level=info msg="connecting to shim e61ac2c4bb1de58f9c408a276e2fb60ce41d4ed3bd2aff539742a0ca0431f051" address="unix:///run/containerd/s/d33078b5f2699b8371e461a5e6ed43264968f895dbab0c47554f3a7fe9f25e89" protocol=ttrpc version=3 Oct 13 05:26:15.387503 systemd[1]: Started cri-containerd-e61ac2c4bb1de58f9c408a276e2fb60ce41d4ed3bd2aff539742a0ca0431f051.scope - libcontainer container e61ac2c4bb1de58f9c408a276e2fb60ce41d4ed3bd2aff539742a0ca0431f051. Oct 13 05:26:15.396619 containerd[1625]: time="2025-10-13T05:26:15.396579998Z" level=info msg="StartContainer for \"d29d098dee7faaac1696473890b033a70788e397ed8a608068e0c64917320bca\" returns successfully" Oct 13 05:26:15.421155 containerd[1625]: time="2025-10-13T05:26:15.421093074Z" level=info msg="StartContainer for \"481356b5cb4e916c81a316ed85918f4ec19d0cb3c1701da08427e355904bc21c\" returns successfully" Oct 13 05:26:15.488221 containerd[1625]: time="2025-10-13T05:26:15.487981022Z" level=info msg="StartContainer for \"e61ac2c4bb1de58f9c408a276e2fb60ce41d4ed3bd2aff539742a0ca0431f051\" returns successfully" Oct 13 05:26:15.565032 kubelet[2413]: E1013 05:26:15.564990 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:15.565205 kubelet[2413]: E1013 05:26:15.565118 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:15.569076 kubelet[2413]: E1013 05:26:15.569054 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:15.569317 kubelet[2413]: E1013 05:26:15.569295 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:15.571023 kubelet[2413]: E1013 05:26:15.570962 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:15.571428 kubelet[2413]: E1013 05:26:15.571384 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:16.152445 kubelet[2413]: I1013 05:26:16.152280 2413 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:16.660712 kubelet[2413]: E1013 05:26:16.660678 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:16.661125 kubelet[2413]: E1013 05:26:16.660810 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:16.661125 kubelet[2413]: E1013 05:26:16.661017 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:16.661125 kubelet[2413]: E1013 05:26:16.661105 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:16.661401 kubelet[2413]: E1013 05:26:16.661335 2413 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:26:16.661605 kubelet[2413]: E1013 05:26:16.661590 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:16.781794 kubelet[2413]: E1013 05:26:16.781726 2413 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:26:16.868331 kubelet[2413]: I1013 05:26:16.868264 2413 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:26:16.873222 kubelet[2413]: I1013 05:26:16.873166 2413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:16.888734 kubelet[2413]: E1013 05:26:16.888678 2413 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:16.888734 kubelet[2413]: I1013 05:26:16.888717 2413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:16.892383 kubelet[2413]: E1013 05:26:16.892310 2413 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:16.892383 kubelet[2413]: I1013 05:26:16.892374 2413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:16.895375 kubelet[2413]: E1013 05:26:16.894676 2413 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:17.417162 kubelet[2413]: I1013 05:26:17.417063 2413 apiserver.go:52] "Watching apiserver" Oct 13 05:26:17.471611 kubelet[2413]: I1013 05:26:17.471522 2413 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:26:18.099283 kubelet[2413]: I1013 
05:26:18.099209 2413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:18.196992 kubelet[2413]: E1013 05:26:18.196948 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:18.662667 kubelet[2413]: E1013 05:26:18.662626 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:19.207493 kubelet[2413]: I1013 05:26:19.207456 2413 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:19.212811 kubelet[2413]: E1013 05:26:19.212779 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:19.557927 systemd[1]: Reload requested from client PID 2697 ('systemctl') (unit session-7.scope)... Oct 13 05:26:19.557948 systemd[1]: Reloading... Oct 13 05:26:19.635384 zram_generator::config[2744]: No configuration found. Oct 13 05:26:19.664263 kubelet[2413]: E1013 05:26:19.664218 2413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:19.876538 systemd[1]: Reloading finished in 318 ms. Oct 13 05:26:19.908505 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:19.932902 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:26:19.933281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:19.933340 systemd[1]: kubelet.service: Consumed 1.101s CPU time, 131M memory peak. Oct 13 05:26:19.935344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:26:20.185953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:26:20.203944 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:26:20.252232 kubelet[2786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:26:20.252232 kubelet[2786]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:26:20.252232 kubelet[2786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
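[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" warning means the host resolv.conf lists more nameservers than the kubelet (like glibc) will propagate into pod resolv.conf files, so only the first three are applied: 1.1.1.1, 1.0.0.1 and 8.8.8.8 here, with the rest dropped. A one-function sketch of that truncation; the limit of three matches the applied line in the log, and the fourth server below is invented for illustration.

```python
# Sketch: mirror the nameserver truncation reported by dns.go:153. The three
# applied servers come from the log; "192.168.1.1" is a made-up extra entry
# standing in for whatever was actually omitted.
MAX_NAMESERVERS = 3  # resolv.conf nameserver limit enforced by the kubelet

def applied_nameservers(configured: list[str]) -> list[str]:
    return configured[:MAX_NAMESERVERS]

print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"]))
# ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```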
Oct 13 05:26:20.252775 kubelet[2786]: I1013 05:26:20.252241 2786 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:26:20.261488 kubelet[2786]: I1013 05:26:20.261450 2786 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 05:26:20.261488 kubelet[2786]: I1013 05:26:20.261472 2786 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:26:20.261652 kubelet[2786]: I1013 05:26:20.261643 2786 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:26:20.262794 kubelet[2786]: I1013 05:26:20.262762 2786 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:26:20.265048 kubelet[2786]: I1013 05:26:20.265007 2786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:26:20.272409 kubelet[2786]: I1013 05:26:20.271447 2786 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:26:20.278471 kubelet[2786]: I1013 05:26:20.278434 2786 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 05:26:20.278789 kubelet[2786]: I1013 05:26:20.278750 2786 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:26:20.278969 kubelet[2786]: I1013 05:26:20.278787 2786 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:26:20.279067 kubelet[2786]: I1013 05:26:20.278978 2786 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:26:20.279067 kubelet[2786]: I1013 05:26:20.278995 2786 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 05:26:20.279112 kubelet[2786]: I1013 05:26:20.279068 2786 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:26:20.279278 kubelet[2786]: I1013 
05:26:20.279252 2786 kubelet.go:480] "Attempting to sync node with API server" Oct 13 05:26:20.279303 kubelet[2786]: I1013 05:26:20.279278 2786 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:26:20.279341 kubelet[2786]: I1013 05:26:20.279306 2786 kubelet.go:386] "Adding apiserver pod source" Oct 13 05:26:20.279341 kubelet[2786]: I1013 05:26:20.279324 2786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:26:20.280455 kubelet[2786]: I1013 05:26:20.280420 2786 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:26:20.281376 kubelet[2786]: I1013 05:26:20.281039 2786 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:26:20.288395 kubelet[2786]: I1013 05:26:20.287774 2786 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 05:26:20.288395 kubelet[2786]: I1013 05:26:20.287831 2786 server.go:1289] "Started kubelet" Oct 13 05:26:20.288395 kubelet[2786]: I1013 05:26:20.288064 2786 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:26:20.288395 kubelet[2786]: I1013 05:26:20.288128 2786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:26:20.289019 kubelet[2786]: I1013 05:26:20.288993 2786 server.go:317] "Adding debug handlers to kubelet server" Oct 13 05:26:20.289547 kubelet[2786]: I1013 05:26:20.288993 2786 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:26:20.291075 kubelet[2786]: I1013 05:26:20.291009 2786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:26:20.291518 kubelet[2786]: I1013 05:26:20.291487 2786 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:26:20.294549 kubelet[2786]: I1013 05:26:20.294492 2786 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 05:26:20.294878 kubelet[2786]: I1013 05:26:20.294811 2786 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 05:26:20.295072 kubelet[2786]: I1013 05:26:20.295059 2786 reconciler.go:26] "Reconciler: start to sync state" Oct 13 05:26:20.295155 kubelet[2786]: I1013 05:26:20.295082 2786 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:26:20.295376 kubelet[2786]: I1013 05:26:20.295326 2786 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:26:20.296717 kubelet[2786]: E1013 05:26:20.296676 2786 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:26:20.297222 kubelet[2786]: I1013 05:26:20.297193 2786 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:26:20.316788 kubelet[2786]: I1013 05:26:20.316736 2786 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 05:26:20.319092 kubelet[2786]: I1013 05:26:20.318719 2786 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:26:20.319092 kubelet[2786]: I1013 05:26:20.318741 2786 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 05:26:20.319092 kubelet[2786]: I1013 05:26:20.318765 2786 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:26:20.319092 kubelet[2786]: I1013 05:26:20.318775 2786 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 05:26:20.319092 kubelet[2786]: E1013 05:26:20.318825 2786 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:26:20.337974 kubelet[2786]: I1013 05:26:20.337932 2786 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:26:20.337974 kubelet[2786]: I1013 05:26:20.337958 2786 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:26:20.337974 kubelet[2786]: I1013 05:26:20.337984 2786 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:26:20.338189 kubelet[2786]: I1013 05:26:20.338140 2786 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:26:20.338189 kubelet[2786]: I1013 05:26:20.338152 2786 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:26:20.338189 kubelet[2786]: I1013 05:26:20.338169 2786 policy_none.go:49] "None policy: Start" Oct 13 05:26:20.338189 kubelet[2786]: I1013 05:26:20.338179 2786 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 05:26:20.338189 kubelet[2786]: I1013 05:26:20.338190 2786 state_mem.go:35] "Initializing new in-memory state store" Oct 13 05:26:20.338319 kubelet[2786]: I1013 05:26:20.338276 2786 state_mem.go:75] "Updated machine memory state" Oct 13 05:26:20.342584 kubelet[2786]: E1013 05:26:20.342557 2786 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:26:20.342755 kubelet[2786]: I1013 05:26:20.342731 2786 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:26:20.342808 kubelet[2786]: I1013 05:26:20.342761 2786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:26:20.342919 kubelet[2786]: I1013 05:26:20.342896 2786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:26:20.344372 kubelet[2786]: E1013 05:26:20.343956 2786 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 05:26:20.422450 kubelet[2786]: I1013 05:26:20.422403 2786 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:20.422712 kubelet[2786]: I1013 05:26:20.422490 2786 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:20.422852 kubelet[2786]: I1013 05:26:20.422490 2786 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:20.449695 kubelet[2786]: I1013 05:26:20.449610 2786 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:26:20.495580 kubelet[2786]: I1013 05:26:20.495519 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cacfbac397e4e97a9f14b01a9140f946-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cacfbac397e4e97a9f14b01a9140f946\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:20.495580 kubelet[2786]: I1013 05:26:20.495583 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:20.495742 kubelet[2786]: I1013 05:26:20.495609 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:20.495791 kubelet[2786]: I1013 05:26:20.495737 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cacfbac397e4e97a9f14b01a9140f946-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cacfbac397e4e97a9f14b01a9140f946\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:20.495838 kubelet[2786]: I1013 05:26:20.495809 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:20.495885 kubelet[2786]: I1013 05:26:20.495845 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:20.495910 kubelet[2786]: I1013 05:26:20.495883 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:26:20.495937 kubelet[2786]: 
I1013 05:26:20.495909 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:20.495963 kubelet[2786]: I1013 05:26:20.495935 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cacfbac397e4e97a9f14b01a9140f946-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cacfbac397e4e97a9f14b01a9140f946\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:20.610043 kubelet[2786]: E1013 05:26:20.609995 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:20.696932 kubelet[2786]: E1013 05:26:20.696879 2786 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:20.697135 kubelet[2786]: E1013 05:26:20.697103 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:20.697454 kubelet[2786]: E1013 05:26:20.696699 2786 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:26:20.697624 kubelet[2786]: E1013 05:26:20.697564 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:20.703869 kubelet[2786]: I1013 05:26:20.703723 2786 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:26:20.704473 kubelet[2786]: I1013 05:26:20.704433 2786 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:26:21.281144 kubelet[2786]: I1013 05:26:21.281104 2786 apiserver.go:52] "Watching apiserver" Oct 13 05:26:21.295802 kubelet[2786]: I1013 05:26:21.295767 2786 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 05:26:21.331764 kubelet[2786]: I1013 05:26:21.331505 2786 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:21.332013 kubelet[2786]: E1013 05:26:21.331957 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:21.332207 kubelet[2786]: E1013 05:26:21.332034 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:21.337913 kubelet[2786]: E1013 05:26:21.337834 2786 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:26:21.338117 kubelet[2786]: E1013 05:26:21.338069 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:21.363312 kubelet[2786]: I1013 05:26:21.363222 2786 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.363193473 podStartE2EDuration="1.363193473s" podCreationTimestamp="2025-10-13 05:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:26:21.363138448 +0000 UTC m=+1.154154737" watchObservedRunningTime="2025-10-13 05:26:21.363193473 +0000 UTC m=+1.154209762" Oct 13 05:26:21.363546 kubelet[2786]: I1013 05:26:21.363391 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.363384156 podStartE2EDuration="3.363384156s" podCreationTimestamp="2025-10-13 05:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:26:21.354729784 +0000 UTC m=+1.145746063" watchObservedRunningTime="2025-10-13 05:26:21.363384156 +0000 UTC m=+1.154400445" Oct 13 05:26:21.374178 kubelet[2786]: I1013 05:26:21.374099 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.374050781 podStartE2EDuration="2.374050781s" podCreationTimestamp="2025-10-13 05:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:26:21.374005335 +0000 UTC m=+1.165021644" watchObservedRunningTime="2025-10-13 05:26:21.374050781 +0000 UTC m=+1.165067070" Oct 13 05:26:22.333948 kubelet[2786]: E1013 05:26:22.333897 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:22.333948 kubelet[2786]: E1013 05:26:22.333896 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:23.335610 kubelet[2786]: E1013 05:26:23.335565 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:23.467772 kubelet[2786]: E1013 05:26:23.467717 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:25.787606 kubelet[2786]: I1013 05:26:25.787567 2786 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:26:25.788132 containerd[1625]: time="2025-10-13T05:26:25.788045017Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:26:25.788466 kubelet[2786]: I1013 05:26:25.788227 2786 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:26:26.118108 kubelet[2786]: E1013 05:26:26.117002 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:26.148078 systemd[1]: Created slice kubepods-besteffort-pod50833440_82c8_4f28_b7b8_be4a30b1f582.slice - libcontainer container kubepods-besteffort-pod50833440_82c8_4f28_b7b8_be4a30b1f582.slice. 
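[Editor's note] The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between a static pod's creation timestamp and the time its running state was observed; for kube-controller-manager-localhost that is 05:26:21.363193473 minus 05:26:20, i.e. the 1.363193473s printed in the entry. A short check of that arithmetic using the values from the log (timestamps truncated to microseconds for Python's datetime).

```python
# Sketch: recompute podStartSLOduration for kube-controller-manager-localhost
# from the two timestamps printed in the tracker entry above.
from datetime import datetime, timezone

created = datetime(2025, 10, 13, 5, 26, 20, tzinfo=timezone.utc)           # podCreationTimestamp
observed = datetime(2025, 10, 13, 5, 26, 21, 363193, tzinfo=timezone.utc)  # watchObservedRunningTime, truncated to µs

print((observed - created).total_seconds())  # ~1.363193, matching podStartSLOduration=1.363193473s
```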
Oct 13 05:26:26.231116 kubelet[2786]: I1013 05:26:26.231066 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50833440-82c8-4f28-b7b8-be4a30b1f582-xtables-lock\") pod \"kube-proxy-gvk5m\" (UID: \"50833440-82c8-4f28-b7b8-be4a30b1f582\") " pod="kube-system/kube-proxy-gvk5m" Oct 13 05:26:26.231116 kubelet[2786]: I1013 05:26:26.231114 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50833440-82c8-4f28-b7b8-be4a30b1f582-lib-modules\") pod \"kube-proxy-gvk5m\" (UID: \"50833440-82c8-4f28-b7b8-be4a30b1f582\") " pod="kube-system/kube-proxy-gvk5m" Oct 13 05:26:26.231305 kubelet[2786]: I1013 05:26:26.231237 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50833440-82c8-4f28-b7b8-be4a30b1f582-kube-proxy\") pod \"kube-proxy-gvk5m\" (UID: \"50833440-82c8-4f28-b7b8-be4a30b1f582\") " pod="kube-system/kube-proxy-gvk5m" Oct 13 05:26:26.231398 kubelet[2786]: I1013 05:26:26.231305 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8n4\" (UniqueName: \"kubernetes.io/projected/50833440-82c8-4f28-b7b8-be4a30b1f582-kube-api-access-ng8n4\") pod \"kube-proxy-gvk5m\" (UID: \"50833440-82c8-4f28-b7b8-be4a30b1f582\") " pod="kube-system/kube-proxy-gvk5m" Oct 13 05:26:26.302155 update_engine[1602]: I20251013 05:26:26.301999 1602 update_attempter.cc:509] Updating boot flags... Oct 13 05:26:26.340461 kubelet[2786]: E1013 05:26:26.340423 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:26.613506 kubelet[2786]: E1013 05:26:26.613455 2786 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 13 05:26:26.613506 kubelet[2786]: E1013 05:26:26.613500 2786 projected.go:194] Error preparing data for projected volume kube-api-access-ng8n4 for pod kube-system/kube-proxy-gvk5m: configmap "kube-root-ca.crt" not found Oct 13 05:26:26.613803 kubelet[2786]: E1013 05:26:26.613576 2786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50833440-82c8-4f28-b7b8-be4a30b1f582-kube-api-access-ng8n4 podName:50833440-82c8-4f28-b7b8-be4a30b1f582 nodeName:}" failed. No retries permitted until 2025-10-13 05:26:27.113553854 +0000 UTC m=+6.904570133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ng8n4" (UniqueName: "kubernetes.io/projected/50833440-82c8-4f28-b7b8-be4a30b1f582-kube-api-access-ng8n4") pod "kube-proxy-gvk5m" (UID: "50833440-82c8-4f28-b7b8-be4a30b1f582") : configmap "kube-root-ca.crt" not found Oct 13 05:26:26.786202 systemd[1]: Created slice kubepods-besteffort-pod412042bf_0a81_4880_a5cd_8e632560550c.slice - libcontainer container kubepods-besteffort-pod412042bf_0a81_4880_a5cd_8e632560550c.slice. 
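
The MountVolume.SetUp failure above is expected at this point in bring-up: the projected kube-api-access volume includes the kube-root-ca.crt ConfigMap, which has not been published yet, so the kubelet parks the operation and schedules a retry ("durationBeforeRetry 500ms"). A sketch of the kind of exponential backoff this implies; the 500 ms starting point comes from the entry above, while the doubling factor and the ~2 minute cap are assumptions made for illustration:

    package main

    import (
    	"fmt"
    	"time"
    )

    // nextRetryDelay sketches a backoff like the one implied by
    // "durationBeforeRetry 500ms": start at 500 ms and (assumed here)
    // double per failed attempt up to an assumed cap of two minutes.
    func nextRetryDelay(attempt int) time.Duration {
    	const (
    		initial  = 500 * time.Millisecond // matches the log entry above
    		maxDelay = 2 * time.Minute        // assumed upper bound
    	)
    	d := initial << attempt // doubling per attempt is an assumption
    	if d <= 0 || d > maxDelay {
    		return maxDelay
    	}
    	return d
    }

    func main() {
    	for attempt := 0; attempt < 6; attempt++ {
    		fmt.Printf("attempt %d: wait %v before retrying MountVolume.SetUp\n",
    			attempt, nextRetryDelay(attempt))
    	}
    }
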
Oct 13 05:26:26.836710 kubelet[2786]: I1013 05:26:26.836629 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/412042bf-0a81-4880-a5cd-8e632560550c-var-lib-calico\") pod \"tigera-operator-755d956888-z8qxc\" (UID: \"412042bf-0a81-4880-a5cd-8e632560550c\") " pod="tigera-operator/tigera-operator-755d956888-z8qxc" Oct 13 05:26:26.836710 kubelet[2786]: I1013 05:26:26.836724 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-494v8\" (UniqueName: \"kubernetes.io/projected/412042bf-0a81-4880-a5cd-8e632560550c-kube-api-access-494v8\") pod \"tigera-operator-755d956888-z8qxc\" (UID: \"412042bf-0a81-4880-a5cd-8e632560550c\") " pod="tigera-operator/tigera-operator-755d956888-z8qxc" Oct 13 05:26:27.095404 containerd[1625]: time="2025-10-13T05:26:27.094906256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-z8qxc,Uid:412042bf-0a81-4880-a5cd-8e632560550c,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:26:27.155387 containerd[1625]: time="2025-10-13T05:26:27.155306637Z" level=info msg="connecting to shim af1e368f1f1cf445979617f42235a62eb18b2b2f315273cd7f2588397d13661e" address="unix:///run/containerd/s/e417719795686a2c8386881018e3aa630d411b6fc8ff17032d1caa87dbe3e4db" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:27.234731 systemd[1]: Started cri-containerd-af1e368f1f1cf445979617f42235a62eb18b2b2f315273cd7f2588397d13661e.scope - libcontainer container af1e368f1f1cf445979617f42235a62eb18b2b2f315273cd7f2588397d13661e. Oct 13 05:26:27.343161 kubelet[2786]: E1013 05:26:27.343115 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:27.350436 containerd[1625]: time="2025-10-13T05:26:27.350282120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-z8qxc,Uid:412042bf-0a81-4880-a5cd-8e632560550c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"af1e368f1f1cf445979617f42235a62eb18b2b2f315273cd7f2588397d13661e\"" Oct 13 05:26:27.352144 containerd[1625]: time="2025-10-13T05:26:27.352097140Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:26:27.363841 kubelet[2786]: E1013 05:26:27.363784 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:27.364562 containerd[1625]: time="2025-10-13T05:26:27.364509136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvk5m,Uid:50833440-82c8-4f28-b7b8-be4a30b1f582,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:27.391319 containerd[1625]: time="2025-10-13T05:26:27.391179827Z" level=info msg="connecting to shim 83a0170a5f84b4e8642bb746f7f813c01a515adc5b75921d42883e8629b7dbab" address="unix:///run/containerd/s/c7ccabe7fa445f4052cc16fcc5dc17744dada6931ce2737f2f3bb4ecd1975894" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:27.419614 systemd[1]: Started cri-containerd-83a0170a5f84b4e8642bb746f7f813c01a515adc5b75921d42883e8629b7dbab.scope - libcontainer container 83a0170a5f84b4e8642bb746f7f813c01a515adc5b75921d42883e8629b7dbab. 
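
The recurring "Nameserver limits exceeded" entries come from the kubelet capping the resolver configuration it hands to pods at three nameservers: the applied line shows the three that were kept (1.1.1.1, 1.0.0.1, 8.8.8.8), and any further entries in the host's resolver configuration are dropped. A sketch of that trimming, with a hypothetical fourth nameserver (8.8.4.4) standing in for whatever was omitted on this host; this is the idea only, not the kubelet's actual dns.go code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // applyNameserverLimit keeps only the first three nameserver entries from a
    // resolv.conf-style configuration and reports the rest as omitted.
    func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
    	const maxNameservers = 3 // the limit reflected in the log entries above
    	var servers []string
    	for _, line := range strings.Split(resolvConf, "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) <= maxNameservers {
    		return servers, nil
    	}
    	return servers[:maxNameservers], servers[maxNameservers:]
    }

    func main() {
    	// Hypothetical host resolv.conf with one nameserver too many.
    	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    	applied, omitted := applyNameserverLimit(conf)
    	fmt.Println("applied:", strings.Join(applied, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
    	fmt.Println("omitted:", strings.Join(omitted, " ")) // 8.8.4.4
    }
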
Oct 13 05:26:27.450816 containerd[1625]: time="2025-10-13T05:26:27.450750445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvk5m,Uid:50833440-82c8-4f28-b7b8-be4a30b1f582,Namespace:kube-system,Attempt:0,} returns sandbox id \"83a0170a5f84b4e8642bb746f7f813c01a515adc5b75921d42883e8629b7dbab\"" Oct 13 05:26:27.451620 kubelet[2786]: E1013 05:26:27.451574 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:27.458263 containerd[1625]: time="2025-10-13T05:26:27.458011698Z" level=info msg="CreateContainer within sandbox \"83a0170a5f84b4e8642bb746f7f813c01a515adc5b75921d42883e8629b7dbab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:26:27.473085 containerd[1625]: time="2025-10-13T05:26:27.473031396Z" level=info msg="Container fced1b54345e7d296df1957dbbc49b42734b0ce5a085e0517dfca593a605f21e: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:27.484299 containerd[1625]: time="2025-10-13T05:26:27.484232005Z" level=info msg="CreateContainer within sandbox \"83a0170a5f84b4e8642bb746f7f813c01a515adc5b75921d42883e8629b7dbab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fced1b54345e7d296df1957dbbc49b42734b0ce5a085e0517dfca593a605f21e\"" Oct 13 05:26:27.484727 containerd[1625]: time="2025-10-13T05:26:27.484701054Z" level=info msg="StartContainer for \"fced1b54345e7d296df1957dbbc49b42734b0ce5a085e0517dfca593a605f21e\"" Oct 13 05:26:27.486202 containerd[1625]: time="2025-10-13T05:26:27.486169878Z" level=info msg="connecting to shim fced1b54345e7d296df1957dbbc49b42734b0ce5a085e0517dfca593a605f21e" address="unix:///run/containerd/s/c7ccabe7fa445f4052cc16fcc5dc17744dada6931ce2737f2f3bb4ecd1975894" protocol=ttrpc version=3 Oct 13 05:26:27.510629 systemd[1]: Started cri-containerd-fced1b54345e7d296df1957dbbc49b42734b0ce5a085e0517dfca593a605f21e.scope - libcontainer container fced1b54345e7d296df1957dbbc49b42734b0ce5a085e0517dfca593a605f21e. Oct 13 05:26:27.558094 containerd[1625]: time="2025-10-13T05:26:27.558032645Z" level=info msg="StartContainer for \"fced1b54345e7d296df1957dbbc49b42734b0ce5a085e0517dfca593a605f21e\" returns successfully" Oct 13 05:26:28.348205 kubelet[2786]: E1013 05:26:28.348150 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:28.363733 kubelet[2786]: I1013 05:26:28.363597 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gvk5m" podStartSLOduration=2.363569154 podStartE2EDuration="2.363569154s" podCreationTimestamp="2025-10-13 05:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:26:28.363436292 +0000 UTC m=+8.154452601" watchObservedRunningTime="2025-10-13 05:26:28.363569154 +0000 UTC m=+8.154585443" Oct 13 05:26:28.695798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522583914.mount: Deactivated successfully. 
Oct 13 05:26:30.435070 containerd[1625]: time="2025-10-13T05:26:30.434987050Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:30.435739 containerd[1625]: time="2025-10-13T05:26:30.435677857Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:26:30.436837 containerd[1625]: time="2025-10-13T05:26:30.436792817Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:30.439151 containerd[1625]: time="2025-10-13T05:26:30.439105823Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:30.439892 containerd[1625]: time="2025-10-13T05:26:30.439842848Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.087687388s" Oct 13 05:26:30.439892 containerd[1625]: time="2025-10-13T05:26:30.439882532Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:26:30.444967 containerd[1625]: time="2025-10-13T05:26:30.444907189Z" level=info msg="CreateContainer within sandbox \"af1e368f1f1cf445979617f42235a62eb18b2b2f315273cd7f2588397d13661e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:26:30.454198 containerd[1625]: time="2025-10-13T05:26:30.454141461Z" level=info msg="Container b90880bd198f456601724b981665fa1e8aafd7078fbd8756f0d415e77e43c3af: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:30.461259 containerd[1625]: time="2025-10-13T05:26:30.461198953Z" level=info msg="CreateContainer within sandbox \"af1e368f1f1cf445979617f42235a62eb18b2b2f315273cd7f2588397d13661e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b90880bd198f456601724b981665fa1e8aafd7078fbd8756f0d415e77e43c3af\"" Oct 13 05:26:30.461712 containerd[1625]: time="2025-10-13T05:26:30.461683350Z" level=info msg="StartContainer for \"b90880bd198f456601724b981665fa1e8aafd7078fbd8756f0d415e77e43c3af\"" Oct 13 05:26:30.462814 containerd[1625]: time="2025-10-13T05:26:30.462766809Z" level=info msg="connecting to shim b90880bd198f456601724b981665fa1e8aafd7078fbd8756f0d415e77e43c3af" address="unix:///run/containerd/s/e417719795686a2c8386881018e3aa630d411b6fc8ff17032d1caa87dbe3e4db" protocol=ttrpc version=3 Oct 13 05:26:30.492919 systemd[1]: Started cri-containerd-b90880bd198f456601724b981665fa1e8aafd7078fbd8756f0d415e77e43c3af.scope - libcontainer container b90880bd198f456601724b981665fa1e8aafd7078fbd8756f0d415e77e43c3af. 
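
The pull of quay.io/tigera/operator:v1.38.6 above reports roughly 25 MB read in about 3.09 s. A quick back-of-the-envelope rate from those two figures, assuming the bytes-read counter covers the whole transfer:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Figures taken from the containerd entries above.
    	bytesRead := 25062609.0                          // "bytes read=25062609"
    	elapsed, _ := time.ParseDuration("3.087687388s") // "in 3.087687388s"
    	mbPerSec := bytesRead / elapsed.Seconds() / 1e6
    	fmt.Printf("rough average pull rate: %.1f MB/s\n", mbPerSec) // ~8.1 MB/s
    }
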
Oct 13 05:26:30.595768 containerd[1625]: time="2025-10-13T05:26:30.595703863Z" level=info msg="StartContainer for \"b90880bd198f456601724b981665fa1e8aafd7078fbd8756f0d415e77e43c3af\" returns successfully" Oct 13 05:26:32.309404 kubelet[2786]: E1013 05:26:32.309172 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:32.335249 kubelet[2786]: I1013 05:26:32.335136 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-z8qxc" podStartSLOduration=3.246069839 podStartE2EDuration="6.335114395s" podCreationTimestamp="2025-10-13 05:26:26 +0000 UTC" firstStartedPulling="2025-10-13 05:26:27.35174882 +0000 UTC m=+7.142765109" lastFinishedPulling="2025-10-13 05:26:30.440793376 +0000 UTC m=+10.231809665" observedRunningTime="2025-10-13 05:26:31.369415334 +0000 UTC m=+11.160431723" watchObservedRunningTime="2025-10-13 05:26:32.335114395 +0000 UTC m=+12.126130684" Oct 13 05:26:32.361636 kubelet[2786]: E1013 05:26:32.361579 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:33.473692 kubelet[2786]: E1013 05:26:33.472996 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:38.628443 sudo[1831]: pam_unix(sudo:session): session closed for user root Oct 13 05:26:38.630496 sshd[1830]: Connection closed by 10.0.0.1 port 41432 Oct 13 05:26:38.644139 sshd-session[1827]: pam_unix(sshd:session): session closed for user core Oct 13 05:26:38.648961 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:41432.service: Deactivated successfully. Oct 13 05:26:38.651425 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:26:38.651674 systemd[1]: session-7.scope: Consumed 6.660s CPU time, 216.4M memory peak. Oct 13 05:26:38.653320 systemd-logind[1600]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:26:38.654810 systemd-logind[1600]: Removed session 7. Oct 13 05:26:42.545196 systemd[1]: Created slice kubepods-besteffort-podbf68dec9_904e_43dd_80d7_3e3784a3217b.slice - libcontainer container kubepods-besteffort-podbf68dec9_904e_43dd_80d7_3e3784a3217b.slice. 
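
The startup-latency entries are internally consistent: for tigera-operator-755d956888-z8qxc above, podStartE2EDuration (observedRunningTime minus podCreationTimestamp) is 6.335114395s, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling) gives exactly the reported podStartSLOduration of 3.246069839s. A short sketch reproducing that arithmetic from the logged timestamps; this is an inference from the values in the log, not the kubelet's actual tracker code:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the tigera-operator entry above.
    	// Parse errors are ignored for brevity in this sketch.
    	layout := "2006-01-02 15:04:05 -0700 MST"
    	created, _ := time.Parse(layout, "2025-10-13 05:26:26 +0000 UTC")
    	firstPull, _ := time.Parse(layout, "2025-10-13 05:26:27.35174882 +0000 UTC")
    	lastPull, _ := time.Parse(layout, "2025-10-13 05:26:30.440793376 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-10-13 05:26:32.335114395 +0000 UTC")

    	e2e := running.Sub(created)        // podStartE2EDuration
    	pulling := lastPull.Sub(firstPull) // time spent pulling the image
    	slo := e2e - pulling               // podStartSLOduration excludes the pull

    	fmt.Println(e2e, pulling, slo) // 6.335114395s 3.089044556s 3.246069839s
    }
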
Oct 13 05:26:42.639943 kubelet[2786]: I1013 05:26:42.639851 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzc4\" (UniqueName: \"kubernetes.io/projected/bf68dec9-904e-43dd-80d7-3e3784a3217b-kube-api-access-4bzc4\") pod \"calico-typha-7886f66545-r6fq2\" (UID: \"bf68dec9-904e-43dd-80d7-3e3784a3217b\") " pod="calico-system/calico-typha-7886f66545-r6fq2" Oct 13 05:26:42.639943 kubelet[2786]: I1013 05:26:42.639919 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf68dec9-904e-43dd-80d7-3e3784a3217b-tigera-ca-bundle\") pod \"calico-typha-7886f66545-r6fq2\" (UID: \"bf68dec9-904e-43dd-80d7-3e3784a3217b\") " pod="calico-system/calico-typha-7886f66545-r6fq2" Oct 13 05:26:42.639943 kubelet[2786]: I1013 05:26:42.639943 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bf68dec9-904e-43dd-80d7-3e3784a3217b-typha-certs\") pod \"calico-typha-7886f66545-r6fq2\" (UID: \"bf68dec9-904e-43dd-80d7-3e3784a3217b\") " pod="calico-system/calico-typha-7886f66545-r6fq2" Oct 13 05:26:42.850737 kubelet[2786]: E1013 05:26:42.850549 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:42.852821 containerd[1625]: time="2025-10-13T05:26:42.852769380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7886f66545-r6fq2,Uid:bf68dec9-904e-43dd-80d7-3e3784a3217b,Namespace:calico-system,Attempt:0,}" Oct 13 05:26:42.911663 systemd[1]: Created slice kubepods-besteffort-pod41ef617a_b2fb_45b3_9027_9956f269c215.slice - libcontainer container kubepods-besteffort-pod41ef617a_b2fb_45b3_9027_9956f269c215.slice. 
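
The calico-node pod being prepared here relies on a FlexVolume shim, and the long run of driver-call failures that follows ("Failed to unmarshal output for command: init" together with "executable file not found in $PATH") is the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, presumably before calico-node's flexvol-driver setup has installed that binary, so each probe returns empty output instead of JSON. For orientation, a minimal sketch of the kind of JSON a FlexVolume driver's init call is expected to print; illustrative only, not Calico's actual uds binary:

    // Minimal FlexVolume-style driver sketch: when invoked as "<driver> init"
    // it prints a JSON status object on stdout. An empty reply is exactly what
    // produces "unexpected end of JSON input" in the kubelet entries below.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		out, _ := json.Marshal(driverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false},
    		})
    		fmt.Println(string(out))
    		return
    	}
    	// Anything this sketch does not implement is reported as unsupported.
    	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
    	fmt.Println(string(out))
    	os.Exit(1)
    }
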
Oct 13 05:26:42.918896 containerd[1625]: time="2025-10-13T05:26:42.918826612Z" level=info msg="connecting to shim eed7c9e213088f037c70b9e55ce14b4c2bc9c1ef6fc508fcd45576eff08bca6d" address="unix:///run/containerd/s/6c044c7a28865d09bc0b5fd5750676f33fc7b491bb66865860cf0726ac975051" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:42.942327 kubelet[2786]: I1013 05:26:42.942204 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41ef617a-b2fb-45b3-9027-9956f269c215-tigera-ca-bundle\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942327 kubelet[2786]: I1013 05:26:42.942280 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/41ef617a-b2fb-45b3-9027-9956f269c215-cni-log-dir\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942578 kubelet[2786]: I1013 05:26:42.942388 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/41ef617a-b2fb-45b3-9027-9956f269c215-policysync\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942578 kubelet[2786]: I1013 05:26:42.942432 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41ef617a-b2fb-45b3-9027-9956f269c215-xtables-lock\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942578 kubelet[2786]: I1013 05:26:42.942478 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/41ef617a-b2fb-45b3-9027-9956f269c215-flexvol-driver-host\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942578 kubelet[2786]: I1013 05:26:42.942518 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/41ef617a-b2fb-45b3-9027-9956f269c215-var-lib-calico\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942578 kubelet[2786]: I1013 05:26:42.942544 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/41ef617a-b2fb-45b3-9027-9956f269c215-cni-net-dir\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942743 kubelet[2786]: I1013 05:26:42.942572 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/41ef617a-b2fb-45b3-9027-9956f269c215-cni-bin-dir\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942743 kubelet[2786]: I1013 05:26:42.942596 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/41ef617a-b2fb-45b3-9027-9956f269c215-node-certs\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942743 kubelet[2786]: I1013 05:26:42.942622 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41ef617a-b2fb-45b3-9027-9956f269c215-lib-modules\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942743 kubelet[2786]: I1013 05:26:42.942642 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgfsm\" (UniqueName: \"kubernetes.io/projected/41ef617a-b2fb-45b3-9027-9956f269c215-kube-api-access-cgfsm\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.942743 kubelet[2786]: I1013 05:26:42.942681 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/41ef617a-b2fb-45b3-9027-9956f269c215-var-run-calico\") pod \"calico-node-rtrjl\" (UID: \"41ef617a-b2fb-45b3-9027-9956f269c215\") " pod="calico-system/calico-node-rtrjl" Oct 13 05:26:42.958576 systemd[1]: Started cri-containerd-eed7c9e213088f037c70b9e55ce14b4c2bc9c1ef6fc508fcd45576eff08bca6d.scope - libcontainer container eed7c9e213088f037c70b9e55ce14b4c2bc9c1ef6fc508fcd45576eff08bca6d. Oct 13 05:26:43.029678 kubelet[2786]: E1013 05:26:43.029041 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:26:43.047396 containerd[1625]: time="2025-10-13T05:26:43.045924267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7886f66545-r6fq2,Uid:bf68dec9-904e-43dd-80d7-3e3784a3217b,Namespace:calico-system,Attempt:0,} returns sandbox id \"eed7c9e213088f037c70b9e55ce14b4c2bc9c1ef6fc508fcd45576eff08bca6d\"" Oct 13 05:26:43.048100 kubelet[2786]: E1013 05:26:43.048036 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:43.051376 kubelet[2786]: E1013 05:26:43.050719 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.053615 containerd[1625]: time="2025-10-13T05:26:43.053520741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:26:43.059658 kubelet[2786]: W1013 05:26:43.059604 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.062268 kubelet[2786]: E1013 05:26:43.062216 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.062731 kubelet[2786]: E1013 05:26:43.062704 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.062781 kubelet[2786]: W1013 05:26:43.062724 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.062781 kubelet[2786]: E1013 05:26:43.062749 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.064554 kubelet[2786]: E1013 05:26:43.064517 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.064554 kubelet[2786]: W1013 05:26:43.064541 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.064697 kubelet[2786]: E1013 05:26:43.064557 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.126958 kubelet[2786]: E1013 05:26:43.126812 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.126958 kubelet[2786]: W1013 05:26:43.126839 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.126958 kubelet[2786]: E1013 05:26:43.126862 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.127143 kubelet[2786]: E1013 05:26:43.127074 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.127143 kubelet[2786]: W1013 05:26:43.127086 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.127143 kubelet[2786]: E1013 05:26:43.127097 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.127450 kubelet[2786]: E1013 05:26:43.127395 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.127450 kubelet[2786]: W1013 05:26:43.127425 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.127450 kubelet[2786]: E1013 05:26:43.127455 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.127918 kubelet[2786]: E1013 05:26:43.127900 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.127918 kubelet[2786]: W1013 05:26:43.127910 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.127918 kubelet[2786]: E1013 05:26:43.127920 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.128205 kubelet[2786]: E1013 05:26:43.128185 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.128205 kubelet[2786]: W1013 05:26:43.128201 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.128283 kubelet[2786]: E1013 05:26:43.128211 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.128470 kubelet[2786]: E1013 05:26:43.128450 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.128470 kubelet[2786]: W1013 05:26:43.128462 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.128470 kubelet[2786]: E1013 05:26:43.128473 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.128761 kubelet[2786]: E1013 05:26:43.128731 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.128761 kubelet[2786]: W1013 05:26:43.128743 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.128761 kubelet[2786]: E1013 05:26:43.128751 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.128926 kubelet[2786]: E1013 05:26:43.128909 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.128926 kubelet[2786]: W1013 05:26:43.128919 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.128926 kubelet[2786]: E1013 05:26:43.128926 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.129111 kubelet[2786]: E1013 05:26:43.129093 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.129111 kubelet[2786]: W1013 05:26:43.129104 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.129111 kubelet[2786]: E1013 05:26:43.129112 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.129280 kubelet[2786]: E1013 05:26:43.129263 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.129280 kubelet[2786]: W1013 05:26:43.129274 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.129280 kubelet[2786]: E1013 05:26:43.129282 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.129468 kubelet[2786]: E1013 05:26:43.129451 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.129468 kubelet[2786]: W1013 05:26:43.129461 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.129468 kubelet[2786]: E1013 05:26:43.129469 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.129660 kubelet[2786]: E1013 05:26:43.129628 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.129660 kubelet[2786]: W1013 05:26:43.129640 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.129660 kubelet[2786]: E1013 05:26:43.129658 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.129845 kubelet[2786]: E1013 05:26:43.129827 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.129845 kubelet[2786]: W1013 05:26:43.129839 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.129845 kubelet[2786]: E1013 05:26:43.129848 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.130018 kubelet[2786]: E1013 05:26:43.130002 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.130018 kubelet[2786]: W1013 05:26:43.130013 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.130018 kubelet[2786]: E1013 05:26:43.130021 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.130195 kubelet[2786]: E1013 05:26:43.130177 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.130195 kubelet[2786]: W1013 05:26:43.130188 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.130195 kubelet[2786]: E1013 05:26:43.130197 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.130384 kubelet[2786]: E1013 05:26:43.130349 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.130384 kubelet[2786]: W1013 05:26:43.130373 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.130384 kubelet[2786]: E1013 05:26:43.130381 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.130589 kubelet[2786]: E1013 05:26:43.130572 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.130589 kubelet[2786]: W1013 05:26:43.130583 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.130589 kubelet[2786]: E1013 05:26:43.130590 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.130770 kubelet[2786]: E1013 05:26:43.130753 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.130770 kubelet[2786]: W1013 05:26:43.130763 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.130770 kubelet[2786]: E1013 05:26:43.130771 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.130946 kubelet[2786]: E1013 05:26:43.130929 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.130946 kubelet[2786]: W1013 05:26:43.130939 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.130946 kubelet[2786]: E1013 05:26:43.130947 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.131113 kubelet[2786]: E1013 05:26:43.131097 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.131113 kubelet[2786]: W1013 05:26:43.131107 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.131177 kubelet[2786]: E1013 05:26:43.131117 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.144581 kubelet[2786]: E1013 05:26:43.144540 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.144581 kubelet[2786]: W1013 05:26:43.144560 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.144707 kubelet[2786]: E1013 05:26:43.144578 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.144707 kubelet[2786]: I1013 05:26:43.144622 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7698d12a-0689-4361-88b4-77840a78376a-varrun\") pod \"csi-node-driver-ckpkj\" (UID: \"7698d12a-0689-4361-88b4-77840a78376a\") " pod="calico-system/csi-node-driver-ckpkj" Oct 13 05:26:43.144884 kubelet[2786]: E1013 05:26:43.144849 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.144884 kubelet[2786]: W1013 05:26:43.144863 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.144884 kubelet[2786]: E1013 05:26:43.144872 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.145015 kubelet[2786]: I1013 05:26:43.144891 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7698d12a-0689-4361-88b4-77840a78376a-kubelet-dir\") pod \"csi-node-driver-ckpkj\" (UID: \"7698d12a-0689-4361-88b4-77840a78376a\") " pod="calico-system/csi-node-driver-ckpkj" Oct 13 05:26:43.145167 kubelet[2786]: E1013 05:26:43.145135 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.145167 kubelet[2786]: W1013 05:26:43.145153 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.145167 kubelet[2786]: E1013 05:26:43.145165 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.145387 kubelet[2786]: E1013 05:26:43.145338 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.145387 kubelet[2786]: W1013 05:26:43.145375 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.145387 kubelet[2786]: E1013 05:26:43.145384 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.145586 kubelet[2786]: E1013 05:26:43.145566 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.145586 kubelet[2786]: W1013 05:26:43.145576 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.145586 kubelet[2786]: E1013 05:26:43.145584 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.145730 kubelet[2786]: I1013 05:26:43.145613 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw9wm\" (UniqueName: \"kubernetes.io/projected/7698d12a-0689-4361-88b4-77840a78376a-kube-api-access-gw9wm\") pod \"csi-node-driver-ckpkj\" (UID: \"7698d12a-0689-4361-88b4-77840a78376a\") " pod="calico-system/csi-node-driver-ckpkj" Oct 13 05:26:43.145901 kubelet[2786]: E1013 05:26:43.145856 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.145901 kubelet[2786]: W1013 05:26:43.145871 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.145901 kubelet[2786]: E1013 05:26:43.145884 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.146096 kubelet[2786]: E1013 05:26:43.146078 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.146096 kubelet[2786]: W1013 05:26:43.146089 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.146096 kubelet[2786]: E1013 05:26:43.146097 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.146327 kubelet[2786]: E1013 05:26:43.146305 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.146327 kubelet[2786]: W1013 05:26:43.146316 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.146327 kubelet[2786]: E1013 05:26:43.146324 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.146542 kubelet[2786]: E1013 05:26:43.146521 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.146542 kubelet[2786]: W1013 05:26:43.146532 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.146542 kubelet[2786]: E1013 05:26:43.146540 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.146744 kubelet[2786]: E1013 05:26:43.146725 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.146744 kubelet[2786]: W1013 05:26:43.146736 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.146744 kubelet[2786]: E1013 05:26:43.146744 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.146850 kubelet[2786]: I1013 05:26:43.146765 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7698d12a-0689-4361-88b4-77840a78376a-registration-dir\") pod \"csi-node-driver-ckpkj\" (UID: \"7698d12a-0689-4361-88b4-77840a78376a\") " pod="calico-system/csi-node-driver-ckpkj" Oct 13 05:26:43.147082 kubelet[2786]: E1013 05:26:43.147029 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.147082 kubelet[2786]: W1013 05:26:43.147075 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.147226 kubelet[2786]: E1013 05:26:43.147116 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.147226 kubelet[2786]: I1013 05:26:43.147181 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7698d12a-0689-4361-88b4-77840a78376a-socket-dir\") pod \"csi-node-driver-ckpkj\" (UID: \"7698d12a-0689-4361-88b4-77840a78376a\") " pod="calico-system/csi-node-driver-ckpkj" Oct 13 05:26:43.147654 kubelet[2786]: E1013 05:26:43.147598 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.147776 kubelet[2786]: W1013 05:26:43.147656 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.147776 kubelet[2786]: E1013 05:26:43.147687 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.147954 kubelet[2786]: E1013 05:26:43.147927 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.147954 kubelet[2786]: W1013 05:26:43.147942 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.147998 kubelet[2786]: E1013 05:26:43.147954 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.148234 kubelet[2786]: E1013 05:26:43.148207 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.148234 kubelet[2786]: W1013 05:26:43.148225 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.148285 kubelet[2786]: E1013 05:26:43.148237 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.148478 kubelet[2786]: E1013 05:26:43.148460 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.148478 kubelet[2786]: W1013 05:26:43.148475 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.148522 kubelet[2786]: E1013 05:26:43.148487 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.218569 containerd[1625]: time="2025-10-13T05:26:43.218510738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rtrjl,Uid:41ef617a-b2fb-45b3-9027-9956f269c215,Namespace:calico-system,Attempt:0,}" Oct 13 05:26:43.248031 kubelet[2786]: E1013 05:26:43.247978 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.248031 kubelet[2786]: W1013 05:26:43.248010 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.248168 kubelet[2786]: E1013 05:26:43.248038 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.248325 kubelet[2786]: E1013 05:26:43.248299 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.248325 kubelet[2786]: W1013 05:26:43.248310 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.248325 kubelet[2786]: E1013 05:26:43.248319 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.248568 kubelet[2786]: E1013 05:26:43.248549 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.248568 kubelet[2786]: W1013 05:26:43.248561 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.248568 kubelet[2786]: E1013 05:26:43.248569 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.248798 kubelet[2786]: E1013 05:26:43.248781 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.248798 kubelet[2786]: W1013 05:26:43.248793 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.248876 kubelet[2786]: E1013 05:26:43.248802 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.249094 kubelet[2786]: E1013 05:26:43.249069 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.249094 kubelet[2786]: W1013 05:26:43.249082 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.249094 kubelet[2786]: E1013 05:26:43.249090 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.249454 kubelet[2786]: E1013 05:26:43.249426 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.249481 kubelet[2786]: W1013 05:26:43.249454 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.249514 kubelet[2786]: E1013 05:26:43.249480 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.249714 kubelet[2786]: E1013 05:26:43.249697 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.249714 kubelet[2786]: W1013 05:26:43.249708 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.249770 kubelet[2786]: E1013 05:26:43.249717 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.249930 kubelet[2786]: E1013 05:26:43.249912 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.249930 kubelet[2786]: W1013 05:26:43.249927 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.249991 kubelet[2786]: E1013 05:26:43.249935 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.250139 kubelet[2786]: E1013 05:26:43.250124 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.250139 kubelet[2786]: W1013 05:26:43.250134 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.250189 kubelet[2786]: E1013 05:26:43.250143 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.250342 kubelet[2786]: E1013 05:26:43.250327 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.250342 kubelet[2786]: W1013 05:26:43.250338 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.250409 kubelet[2786]: E1013 05:26:43.250347 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.250569 kubelet[2786]: E1013 05:26:43.250553 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.250569 kubelet[2786]: W1013 05:26:43.250564 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.250610 kubelet[2786]: E1013 05:26:43.250572 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.250787 kubelet[2786]: E1013 05:26:43.250770 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.250787 kubelet[2786]: W1013 05:26:43.250781 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.250839 kubelet[2786]: E1013 05:26:43.250789 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.251071 kubelet[2786]: E1013 05:26:43.251055 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.251071 kubelet[2786]: W1013 05:26:43.251067 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.251123 kubelet[2786]: E1013 05:26:43.251076 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.251280 kubelet[2786]: E1013 05:26:43.251265 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.251280 kubelet[2786]: W1013 05:26:43.251275 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.251328 kubelet[2786]: E1013 05:26:43.251283 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.251498 kubelet[2786]: E1013 05:26:43.251483 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.251498 kubelet[2786]: W1013 05:26:43.251494 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.251546 kubelet[2786]: E1013 05:26:43.251502 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.251694 kubelet[2786]: E1013 05:26:43.251680 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.251694 kubelet[2786]: W1013 05:26:43.251690 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.251748 kubelet[2786]: E1013 05:26:43.251702 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.251897 kubelet[2786]: E1013 05:26:43.251883 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.251897 kubelet[2786]: W1013 05:26:43.251893 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.251942 kubelet[2786]: E1013 05:26:43.251900 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.252089 kubelet[2786]: E1013 05:26:43.252075 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.252089 kubelet[2786]: W1013 05:26:43.252085 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.252135 kubelet[2786]: E1013 05:26:43.252093 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.252298 kubelet[2786]: E1013 05:26:43.252283 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.252298 kubelet[2786]: W1013 05:26:43.252294 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.252342 kubelet[2786]: E1013 05:26:43.252302 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.252520 kubelet[2786]: E1013 05:26:43.252505 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.252520 kubelet[2786]: W1013 05:26:43.252516 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.252571 kubelet[2786]: E1013 05:26:43.252524 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.252743 kubelet[2786]: E1013 05:26:43.252727 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.252743 kubelet[2786]: W1013 05:26:43.252738 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.252792 kubelet[2786]: E1013 05:26:43.252746 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.253065 kubelet[2786]: E1013 05:26:43.253025 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.253065 kubelet[2786]: W1013 05:26:43.253042 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.253065 kubelet[2786]: E1013 05:26:43.253053 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.253406 kubelet[2786]: E1013 05:26:43.253344 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.253406 kubelet[2786]: W1013 05:26:43.253390 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.253406 kubelet[2786]: E1013 05:26:43.253416 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:43.253854 kubelet[2786]: E1013 05:26:43.253823 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.253854 kubelet[2786]: W1013 05:26:43.253852 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.253854 kubelet[2786]: E1013 05:26:43.253879 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.254233 kubelet[2786]: E1013 05:26:43.254199 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.254233 kubelet[2786]: W1013 05:26:43.254212 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.254233 kubelet[2786]: E1013 05:26:43.254222 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.306942 kubelet[2786]: E1013 05:26:43.306887 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:43.306942 kubelet[2786]: W1013 05:26:43.306913 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:43.306942 kubelet[2786]: E1013 05:26:43.306936 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:43.558982 containerd[1625]: time="2025-10-13T05:26:43.558911815Z" level=info msg="connecting to shim 123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868" address="unix:///run/containerd/s/4dd1db28fa185958cc0af6fe9a9acc2cf1c1bc046c73ef58b3f24c9b6560cfc5" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:26:43.588780 systemd[1]: Started cri-containerd-123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868.scope - libcontainer container 123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868. Oct 13 05:26:43.645584 containerd[1625]: time="2025-10-13T05:26:43.645536053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rtrjl,Uid:41ef617a-b2fb-45b3-9027-9956f269c215,Namespace:calico-system,Attempt:0,} returns sandbox id \"123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868\"" Oct 13 05:26:44.320201 kubelet[2786]: E1013 05:26:44.320122 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:26:46.222741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2118401801.mount: Deactivated successfully. 
Oct 13 05:26:46.320052 kubelet[2786]: E1013 05:26:46.319918 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:26:46.889441 containerd[1625]: time="2025-10-13T05:26:46.889325436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:46.903589 containerd[1625]: time="2025-10-13T05:26:46.903472257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:26:46.941545 containerd[1625]: time="2025-10-13T05:26:46.941466963Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:46.956410 containerd[1625]: time="2025-10-13T05:26:46.956279085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:46.957571 containerd[1625]: time="2025-10-13T05:26:46.957299636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.903695768s" Oct 13 05:26:46.957571 containerd[1625]: time="2025-10-13T05:26:46.957341625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:26:46.959000 containerd[1625]: time="2025-10-13T05:26:46.958971963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:26:47.048802 containerd[1625]: time="2025-10-13T05:26:47.048723204Z" level=info msg="CreateContainer within sandbox \"eed7c9e213088f037c70b9e55ce14b4c2bc9c1ef6fc508fcd45576eff08bca6d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:26:47.278708 containerd[1625]: time="2025-10-13T05:26:47.278628443Z" level=info msg="Container 042a866d0905a19696409d344dc51bdf268ccabca50c44f198053bea84d564c7: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:47.292989 containerd[1625]: time="2025-10-13T05:26:47.292915292Z" level=info msg="CreateContainer within sandbox \"eed7c9e213088f037c70b9e55ce14b4c2bc9c1ef6fc508fcd45576eff08bca6d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"042a866d0905a19696409d344dc51bdf268ccabca50c44f198053bea84d564c7\"" Oct 13 05:26:47.293769 containerd[1625]: time="2025-10-13T05:26:47.293659232Z" level=info msg="StartContainer for \"042a866d0905a19696409d344dc51bdf268ccabca50c44f198053bea84d564c7\"" Oct 13 05:26:47.295001 containerd[1625]: time="2025-10-13T05:26:47.294943398Z" level=info msg="connecting to shim 042a866d0905a19696409d344dc51bdf268ccabca50c44f198053bea84d564c7" address="unix:///run/containerd/s/6c044c7a28865d09bc0b5fd5750676f33fc7b491bb66865860cf0726ac975051" protocol=ttrpc version=3 Oct 13 05:26:47.319528 systemd[1]: Started 
cri-containerd-042a866d0905a19696409d344dc51bdf268ccabca50c44f198053bea84d564c7.scope - libcontainer container 042a866d0905a19696409d344dc51bdf268ccabca50c44f198053bea84d564c7. Oct 13 05:26:47.489375 containerd[1625]: time="2025-10-13T05:26:47.489305618Z" level=info msg="StartContainer for \"042a866d0905a19696409d344dc51bdf268ccabca50c44f198053bea84d564c7\" returns successfully" Oct 13 05:26:48.319424 kubelet[2786]: E1013 05:26:48.319347 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:26:48.492219 kubelet[2786]: E1013 05:26:48.492157 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:48.566646 kubelet[2786]: E1013 05:26:48.566588 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.566646 kubelet[2786]: W1013 05:26:48.566617 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.566850 kubelet[2786]: E1013 05:26:48.566643 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.566916 kubelet[2786]: E1013 05:26:48.566897 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.566916 kubelet[2786]: W1013 05:26:48.566910 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.567009 kubelet[2786]: E1013 05:26:48.566923 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.567205 kubelet[2786]: E1013 05:26:48.567171 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.567205 kubelet[2786]: W1013 05:26:48.567186 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.567205 kubelet[2786]: E1013 05:26:48.567197 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:48.567477 kubelet[2786]: E1013 05:26:48.567460 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.567477 kubelet[2786]: W1013 05:26:48.567472 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.567606 kubelet[2786]: E1013 05:26:48.567483 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.567740 kubelet[2786]: E1013 05:26:48.567718 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.567740 kubelet[2786]: W1013 05:26:48.567734 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.567828 kubelet[2786]: E1013 05:26:48.567748 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.567987 kubelet[2786]: E1013 05:26:48.567956 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.567987 kubelet[2786]: W1013 05:26:48.567968 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.567987 kubelet[2786]: E1013 05:26:48.567982 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.568254 kubelet[2786]: E1013 05:26:48.568239 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.568254 kubelet[2786]: W1013 05:26:48.568250 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.568375 kubelet[2786]: E1013 05:26:48.568261 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.568592 kubelet[2786]: E1013 05:26:48.568560 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.568592 kubelet[2786]: W1013 05:26:48.568575 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.568592 kubelet[2786]: E1013 05:26:48.568586 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:48.568806 kubelet[2786]: E1013 05:26:48.568790 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.568806 kubelet[2786]: W1013 05:26:48.568800 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.568870 kubelet[2786]: E1013 05:26:48.568810 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.569017 kubelet[2786]: E1013 05:26:48.568999 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.569017 kubelet[2786]: W1013 05:26:48.569012 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.569072 kubelet[2786]: E1013 05:26:48.569023 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.569230 kubelet[2786]: E1013 05:26:48.569211 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.569230 kubelet[2786]: W1013 05:26:48.569226 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.569279 kubelet[2786]: E1013 05:26:48.569236 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.569522 kubelet[2786]: E1013 05:26:48.569442 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.569522 kubelet[2786]: W1013 05:26:48.569457 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.569522 kubelet[2786]: E1013 05:26:48.569469 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.569716 kubelet[2786]: E1013 05:26:48.569693 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.569716 kubelet[2786]: W1013 05:26:48.569710 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.569834 kubelet[2786]: E1013 05:26:48.569722 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:48.569925 kubelet[2786]: E1013 05:26:48.569906 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.569925 kubelet[2786]: W1013 05:26:48.569921 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.570027 kubelet[2786]: E1013 05:26:48.569932 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.570409 kubelet[2786]: E1013 05:26:48.570325 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.570409 kubelet[2786]: W1013 05:26:48.570342 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.570409 kubelet[2786]: E1013 05:26:48.570372 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.585702 kubelet[2786]: E1013 05:26:48.585657 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.585702 kubelet[2786]: W1013 05:26:48.585681 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.585702 kubelet[2786]: E1013 05:26:48.585705 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.585962 kubelet[2786]: E1013 05:26:48.585942 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.585962 kubelet[2786]: W1013 05:26:48.585954 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.585962 kubelet[2786]: E1013 05:26:48.585963 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.586215 kubelet[2786]: E1013 05:26:48.586186 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.586215 kubelet[2786]: W1013 05:26:48.586198 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.586215 kubelet[2786]: E1013 05:26:48.586207 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:48.586456 kubelet[2786]: E1013 05:26:48.586437 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.586456 kubelet[2786]: W1013 05:26:48.586449 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.586456 kubelet[2786]: E1013 05:26:48.586457 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.586682 kubelet[2786]: E1013 05:26:48.586662 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.586682 kubelet[2786]: W1013 05:26:48.586674 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.586682 kubelet[2786]: E1013 05:26:48.586682 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.586893 kubelet[2786]: E1013 05:26:48.586868 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.586893 kubelet[2786]: W1013 05:26:48.586879 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.586893 kubelet[2786]: E1013 05:26:48.586887 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.587117 kubelet[2786]: E1013 05:26:48.587099 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.587117 kubelet[2786]: W1013 05:26:48.587110 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.587117 kubelet[2786]: E1013 05:26:48.587120 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.587320 kubelet[2786]: E1013 05:26:48.587301 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.587320 kubelet[2786]: W1013 05:26:48.587314 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.587320 kubelet[2786]: E1013 05:26:48.587322 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:48.587555 kubelet[2786]: E1013 05:26:48.587525 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.587555 kubelet[2786]: W1013 05:26:48.587536 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.587555 kubelet[2786]: E1013 05:26:48.587555 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.587771 kubelet[2786]: E1013 05:26:48.587752 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.587771 kubelet[2786]: W1013 05:26:48.587763 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.587771 kubelet[2786]: E1013 05:26:48.587770 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.587970 kubelet[2786]: E1013 05:26:48.587951 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.587970 kubelet[2786]: W1013 05:26:48.587962 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.587970 kubelet[2786]: E1013 05:26:48.587970 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.588201 kubelet[2786]: E1013 05:26:48.588183 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.588201 kubelet[2786]: W1013 05:26:48.588196 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.588267 kubelet[2786]: E1013 05:26:48.588204 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.588619 kubelet[2786]: E1013 05:26:48.588589 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.588673 kubelet[2786]: W1013 05:26:48.588618 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.588673 kubelet[2786]: E1013 05:26:48.588641 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:48.588878 kubelet[2786]: E1013 05:26:48.588863 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.588878 kubelet[2786]: W1013 05:26:48.588875 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.588943 kubelet[2786]: E1013 05:26:48.588886 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.589107 kubelet[2786]: E1013 05:26:48.589091 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.589107 kubelet[2786]: W1013 05:26:48.589103 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.589174 kubelet[2786]: E1013 05:26:48.589113 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.589320 kubelet[2786]: E1013 05:26:48.589304 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.589320 kubelet[2786]: W1013 05:26:48.589317 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.589394 kubelet[2786]: E1013 05:26:48.589329 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.589619 kubelet[2786]: E1013 05:26:48.589602 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.589619 kubelet[2786]: W1013 05:26:48.589616 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.589690 kubelet[2786]: E1013 05:26:48.589627 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:48.590181 kubelet[2786]: E1013 05:26:48.590154 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:48.590181 kubelet[2786]: W1013 05:26:48.590169 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:48.590181 kubelet[2786]: E1013 05:26:48.590181 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:48.888398 kubelet[2786]: I1013 05:26:48.888137 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7886f66545-r6fq2" podStartSLOduration=2.980632109 podStartE2EDuration="6.888118203s" podCreationTimestamp="2025-10-13 05:26:42 +0000 UTC" firstStartedPulling="2025-10-13 05:26:43.050983474 +0000 UTC m=+22.841999763" lastFinishedPulling="2025-10-13 05:26:46.958469558 +0000 UTC m=+26.749485857" observedRunningTime="2025-10-13 05:26:48.887180309 +0000 UTC m=+28.678196598" watchObservedRunningTime="2025-10-13 05:26:48.888118203 +0000 UTC m=+28.679134492" Oct 13 05:26:49.493443 kubelet[2786]: I1013 05:26:49.493394 2786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:26:49.494051 kubelet[2786]: E1013 05:26:49.493862 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:49.578149 kubelet[2786]: E1013 05:26:49.578092 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.578149 kubelet[2786]: W1013 05:26:49.578127 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.578149 kubelet[2786]: E1013 05:26:49.578160 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.578537 kubelet[2786]: E1013 05:26:49.578504 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.578537 kubelet[2786]: W1013 05:26:49.578528 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.578608 kubelet[2786]: E1013 05:26:49.578540 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.578836 kubelet[2786]: E1013 05:26:49.578811 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.578836 kubelet[2786]: W1013 05:26:49.578826 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.578836 kubelet[2786]: E1013 05:26:49.578837 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:49.579216 kubelet[2786]: E1013 05:26:49.579158 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.579216 kubelet[2786]: W1013 05:26:49.579195 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.579216 kubelet[2786]: E1013 05:26:49.579230 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.579694 kubelet[2786]: E1013 05:26:49.579648 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.579694 kubelet[2786]: W1013 05:26:49.579687 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.579766 kubelet[2786]: E1013 05:26:49.579705 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.579940 kubelet[2786]: E1013 05:26:49.579923 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.579940 kubelet[2786]: W1013 05:26:49.579936 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.579987 kubelet[2786]: E1013 05:26:49.579947 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.580407 kubelet[2786]: E1013 05:26:49.580326 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.580496 kubelet[2786]: W1013 05:26:49.580403 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.580496 kubelet[2786]: E1013 05:26:49.580461 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.580890 kubelet[2786]: E1013 05:26:49.580858 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.580890 kubelet[2786]: W1013 05:26:49.580873 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.580890 kubelet[2786]: E1013 05:26:49.580884 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:49.581153 kubelet[2786]: E1013 05:26:49.581119 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.581153 kubelet[2786]: W1013 05:26:49.581137 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.581153 kubelet[2786]: E1013 05:26:49.581148 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.581675 kubelet[2786]: E1013 05:26:49.581651 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.581675 kubelet[2786]: W1013 05:26:49.581669 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.581761 kubelet[2786]: E1013 05:26:49.581691 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.582157 kubelet[2786]: E1013 05:26:49.582124 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.582157 kubelet[2786]: W1013 05:26:49.582138 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.582157 kubelet[2786]: E1013 05:26:49.582152 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.582457 kubelet[2786]: E1013 05:26:49.582435 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.582457 kubelet[2786]: W1013 05:26:49.582454 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.582533 kubelet[2786]: E1013 05:26:49.582466 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.582732 kubelet[2786]: E1013 05:26:49.582705 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.582732 kubelet[2786]: W1013 05:26:49.582721 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.582732 kubelet[2786]: E1013 05:26:49.582730 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:49.583005 kubelet[2786]: E1013 05:26:49.582986 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.583816 kubelet[2786]: W1013 05:26:49.583125 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.583816 kubelet[2786]: E1013 05:26:49.583142 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.583816 kubelet[2786]: E1013 05:26:49.583394 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.583816 kubelet[2786]: W1013 05:26:49.583442 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.583816 kubelet[2786]: E1013 05:26:49.583458 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.593259 kubelet[2786]: E1013 05:26:49.593213 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.593259 kubelet[2786]: W1013 05:26:49.593238 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.593259 kubelet[2786]: E1013 05:26:49.593262 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.593562 kubelet[2786]: E1013 05:26:49.593484 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.593562 kubelet[2786]: W1013 05:26:49.593494 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.593562 kubelet[2786]: E1013 05:26:49.593504 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.593765 kubelet[2786]: E1013 05:26:49.593732 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.593765 kubelet[2786]: W1013 05:26:49.593750 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.593765 kubelet[2786]: E1013 05:26:49.593761 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:49.593935 kubelet[2786]: E1013 05:26:49.593918 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.593935 kubelet[2786]: W1013 05:26:49.593928 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.593935 kubelet[2786]: E1013 05:26:49.593937 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.594124 kubelet[2786]: E1013 05:26:49.594106 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.594124 kubelet[2786]: W1013 05:26:49.594117 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.594124 kubelet[2786]: E1013 05:26:49.594124 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.594331 kubelet[2786]: E1013 05:26:49.594313 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.594331 kubelet[2786]: W1013 05:26:49.594325 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.594331 kubelet[2786]: E1013 05:26:49.594332 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.594705 kubelet[2786]: E1013 05:26:49.594684 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.594705 kubelet[2786]: W1013 05:26:49.594699 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.594772 kubelet[2786]: E1013 05:26:49.594712 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.594933 kubelet[2786]: E1013 05:26:49.594904 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.594933 kubelet[2786]: W1013 05:26:49.594918 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.594933 kubelet[2786]: E1013 05:26:49.594927 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:49.595108 kubelet[2786]: E1013 05:26:49.595090 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.595108 kubelet[2786]: W1013 05:26:49.595101 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.595108 kubelet[2786]: E1013 05:26:49.595109 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.595275 kubelet[2786]: E1013 05:26:49.595257 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.595275 kubelet[2786]: W1013 05:26:49.595268 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.595341 kubelet[2786]: E1013 05:26:49.595277 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.595477 kubelet[2786]: E1013 05:26:49.595459 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.595477 kubelet[2786]: W1013 05:26:49.595471 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.595477 kubelet[2786]: E1013 05:26:49.595480 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.595756 kubelet[2786]: E1013 05:26:49.595735 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.595756 kubelet[2786]: W1013 05:26:49.595751 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.595819 kubelet[2786]: E1013 05:26:49.595762 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.595977 kubelet[2786]: E1013 05:26:49.595958 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.595977 kubelet[2786]: W1013 05:26:49.595972 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.596038 kubelet[2786]: E1013 05:26:49.595983 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:49.596180 kubelet[2786]: E1013 05:26:49.596163 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.596180 kubelet[2786]: W1013 05:26:49.596175 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.596235 kubelet[2786]: E1013 05:26:49.596187 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.596406 kubelet[2786]: E1013 05:26:49.596387 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.596406 kubelet[2786]: W1013 05:26:49.596401 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.596484 kubelet[2786]: E1013 05:26:49.596411 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.596650 kubelet[2786]: E1013 05:26:49.596631 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.596650 kubelet[2786]: W1013 05:26:49.596645 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.596712 kubelet[2786]: E1013 05:26:49.596656 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.596949 kubelet[2786]: E1013 05:26:49.596928 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.596999 kubelet[2786]: W1013 05:26:49.596951 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.596999 kubelet[2786]: E1013 05:26:49.596965 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:26:49.597149 kubelet[2786]: E1013 05:26:49.597131 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:26:49.597149 kubelet[2786]: W1013 05:26:49.597142 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:26:49.597149 kubelet[2786]: E1013 05:26:49.597150 2786 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:26:49.718251 containerd[1625]: time="2025-10-13T05:26:49.718181190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:49.719332 containerd[1625]: time="2025-10-13T05:26:49.719289774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:26:49.722596 containerd[1625]: time="2025-10-13T05:26:49.722558493Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:49.725034 containerd[1625]: time="2025-10-13T05:26:49.724978905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:49.725645 containerd[1625]: time="2025-10-13T05:26:49.725601646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.766517653s" Oct 13 05:26:49.725645 containerd[1625]: time="2025-10-13T05:26:49.725641621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:26:49.729458 containerd[1625]: time="2025-10-13T05:26:49.729418535Z" level=info msg="CreateContainer within sandbox \"123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:26:49.740265 containerd[1625]: time="2025-10-13T05:26:49.739733173Z" level=info msg="Container 495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:49.759639 containerd[1625]: time="2025-10-13T05:26:49.759573138Z" level=info msg="CreateContainer within sandbox \"123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a\"" Oct 13 05:26:49.760581 containerd[1625]: time="2025-10-13T05:26:49.760302449Z" level=info msg="StartContainer for \"495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a\"" Oct 13 05:26:49.762111 containerd[1625]: time="2025-10-13T05:26:49.762074110Z" level=info msg="connecting to shim 495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a" address="unix:///run/containerd/s/4dd1db28fa185958cc0af6fe9a9acc2cf1c1bc046c73ef58b3f24c9b6560cfc5" protocol=ttrpc version=3 Oct 13 05:26:49.789666 systemd[1]: Started cri-containerd-495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a.scope - libcontainer container 495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a. 
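
Note on the repeated FlexVolume probe failures above: the kubelet is invoking the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, and because the executable is not present yet it gets no output back, hence the "unexpected end of JSON input" unmarshal errors. Purely as an illustration of the contract the kubelet's driver-call.go expects (this is a hypothetical stub, not the actual nodeagent~uds driver shipped by Calico), a minimal FlexVolume driver answers init with a small JSON status like this:

    // flexvol_init_sketch.go - hypothetical minimal FlexVolume driver stub.
    // It only answers the "init" call with the JSON status the kubelet tries
    // to unmarshal; a real driver would also implement mount/unmount.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		out, _ := json.Marshal(driverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false},
    		})
    		fmt.Println(string(out))
    		return
    	}
    	// Anything else is unsupported in this sketch.
    	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
    	fmt.Println(string(out))
    	os.Exit(1)
    }

Once a binary that prints such a status exists at the probed path, the "executable file not found in $PATH" warnings stop; until then the kubelet simply keeps re-probing the plugin directory and logging the empty-output errors seen above.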
Oct 13 05:26:49.841971 containerd[1625]: time="2025-10-13T05:26:49.841927805Z" level=info msg="StartContainer for \"495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a\" returns successfully" Oct 13 05:26:49.854527 systemd[1]: cri-containerd-495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a.scope: Deactivated successfully. Oct 13 05:26:49.856557 containerd[1625]: time="2025-10-13T05:26:49.856500011Z" level=info msg="received exit event container_id:\"495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a\" id:\"495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a\" pid:3524 exited_at:{seconds:1760333209 nanos:856003939}" Oct 13 05:26:49.856614 containerd[1625]: time="2025-10-13T05:26:49.856593477Z" level=info msg="TaskExit event in podsandbox handler container_id:\"495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a\" id:\"495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a\" pid:3524 exited_at:{seconds:1760333209 nanos:856003939}" Oct 13 05:26:49.882180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-495d862144d0dd63cfa7bff1607384979b3f9ce7fec9b9e2a2e4e907e5ce4c8a-rootfs.mount: Deactivated successfully. Oct 13 05:26:50.319205 kubelet[2786]: E1013 05:26:50.319149 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:26:51.502224 containerd[1625]: time="2025-10-13T05:26:51.502176322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:26:52.319682 kubelet[2786]: E1013 05:26:52.319639 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:26:54.320029 kubelet[2786]: E1013 05:26:54.319622 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:26:56.320147 kubelet[2786]: E1013 05:26:56.320097 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:26:56.503670 containerd[1625]: time="2025-10-13T05:26:56.503610188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:56.504480 containerd[1625]: time="2025-10-13T05:26:56.504424288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:26:56.505962 containerd[1625]: time="2025-10-13T05:26:56.505906753Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 
05:26:56.508176 containerd[1625]: time="2025-10-13T05:26:56.508143686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:26:56.508801 containerd[1625]: time="2025-10-13T05:26:56.508772829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.006557774s" Oct 13 05:26:56.508861 containerd[1625]: time="2025-10-13T05:26:56.508806392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:26:56.514290 containerd[1625]: time="2025-10-13T05:26:56.514235413Z" level=info msg="CreateContainer within sandbox \"123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:26:56.526212 containerd[1625]: time="2025-10-13T05:26:56.526166182Z" level=info msg="Container f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:26:56.536245 containerd[1625]: time="2025-10-13T05:26:56.536191730Z" level=info msg="CreateContainer within sandbox \"123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03\"" Oct 13 05:26:56.536864 containerd[1625]: time="2025-10-13T05:26:56.536816935Z" level=info msg="StartContainer for \"f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03\"" Oct 13 05:26:56.538418 containerd[1625]: time="2025-10-13T05:26:56.538338724Z" level=info msg="connecting to shim f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03" address="unix:///run/containerd/s/4dd1db28fa185958cc0af6fe9a9acc2cf1c1bc046c73ef58b3f24c9b6560cfc5" protocol=ttrpc version=3 Oct 13 05:26:56.560535 systemd[1]: Started cri-containerd-f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03.scope - libcontainer container f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03. Oct 13 05:26:56.610041 containerd[1625]: time="2025-10-13T05:26:56.609918045Z" level=info msg="StartContainer for \"f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03\" returns successfully" Oct 13 05:26:58.293460 systemd[1]: cri-containerd-f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03.scope: Deactivated successfully. Oct 13 05:26:58.294171 systemd[1]: cri-containerd-f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03.scope: Consumed 658ms CPU time, 173.7M memory peak, 4M read from disk, 171.3M written to disk. 
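
The containerd entries above record the calico/cni:v3.30.3 image pull completing in about five seconds and the install-cni container starting, running, and exiting. For reference, an equivalent pull can be driven against the node's containerd directly with the Go client; this is a hedged sketch, assuming the default socket path and the k8s.io namespace that the CRI plugin uses for Kubernetes-managed images:

    // pull_sketch.go - minimal containerd client pull, assuming the default
    // socket at /run/containerd/containerd.sock and the "k8s.io" namespace.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Images pulled for pods live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.3", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled", img.Name())
    }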
Oct 13 05:26:58.294616 containerd[1625]: time="2025-10-13T05:26:58.294106485Z" level=info msg="received exit event container_id:\"f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03\" id:\"f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03\" pid:3584 exited_at:{seconds:1760333218 nanos:293809217}" Oct 13 05:26:58.295833 containerd[1625]: time="2025-10-13T05:26:58.295794416Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03\" id:\"f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03\" pid:3584 exited_at:{seconds:1760333218 nanos:293809217}" Oct 13 05:26:58.320240 kubelet[2786]: E1013 05:26:58.319861 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:26:58.327058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f45fcbd22c9cfa6cab886e27261e2d1503fb81021ceb337fd3ca8cffc8ab4f03-rootfs.mount: Deactivated successfully. Oct 13 05:26:58.367705 kubelet[2786]: I1013 05:26:58.367626 2786 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 05:26:58.740546 systemd[1]: Created slice kubepods-burstable-pod518ce9cd_e9e5_4f27_890c_d2fae6ff98e3.slice - libcontainer container kubepods-burstable-pod518ce9cd_e9e5_4f27_890c_d2fae6ff98e3.slice. Oct 13 05:26:58.748427 kubelet[2786]: I1013 05:26:58.748386 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/518ce9cd-e9e5-4f27-890c-d2fae6ff98e3-config-volume\") pod \"coredns-674b8bbfcf-h9w82\" (UID: \"518ce9cd-e9e5-4f27-890c-d2fae6ff98e3\") " pod="kube-system/coredns-674b8bbfcf-h9w82" Oct 13 05:26:58.748634 kubelet[2786]: I1013 05:26:58.748448 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27j2t\" (UniqueName: \"kubernetes.io/projected/518ce9cd-e9e5-4f27-890c-d2fae6ff98e3-kube-api-access-27j2t\") pod \"coredns-674b8bbfcf-h9w82\" (UID: \"518ce9cd-e9e5-4f27-890c-d2fae6ff98e3\") " pod="kube-system/coredns-674b8bbfcf-h9w82" Oct 13 05:26:58.758116 systemd[1]: Created slice kubepods-besteffort-pod581c5944_bb01_4f41_be01_d38804d92d32.slice - libcontainer container kubepods-besteffort-pod581c5944_bb01_4f41_be01_d38804d92d32.slice. Oct 13 05:26:58.777420 systemd[1]: Created slice kubepods-besteffort-pod7a825ede_be6e_47cb_aeed_0d14204409ce.slice - libcontainer container kubepods-besteffort-pod7a825ede_be6e_47cb_aeed_0d14204409ce.slice. Oct 13 05:26:58.786451 systemd[1]: Created slice kubepods-besteffort-pod0243a395_1b48_4301_9350_f25044be2770.slice - libcontainer container kubepods-besteffort-pod0243a395_1b48_4301_9350_f25044be2770.slice. Oct 13 05:26:58.795414 systemd[1]: Created slice kubepods-burstable-poda0feb9a5_18ab_4324_b59c_f36e6436fc38.slice - libcontainer container kubepods-burstable-poda0feb9a5_18ab_4324_b59c_f36e6436fc38.slice. Oct 13 05:26:58.801962 systemd[1]: Created slice kubepods-besteffort-pod496214e4_ed29_4a93_b78d_1c399cc61c6b.slice - libcontainer container kubepods-besteffort-pod496214e4_ed29_4a93_b78d_1c399cc61c6b.slice. 
Oct 13 05:26:58.808443 systemd[1]: Created slice kubepods-besteffort-podca8ebd82_7860_48a6_ad85_24113129edfb.slice - libcontainer container kubepods-besteffort-podca8ebd82_7860_48a6_ad85_24113129edfb.slice. Oct 13 05:26:58.849821 kubelet[2786]: I1013 05:26:58.849655 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnn6g\" (UniqueName: \"kubernetes.io/projected/581c5944-bb01-4f41-be01-d38804d92d32-kube-api-access-nnn6g\") pod \"calico-apiserver-5db6d8fb56-58p6w\" (UID: \"581c5944-bb01-4f41-be01-d38804d92d32\") " pod="calico-apiserver/calico-apiserver-5db6d8fb56-58p6w" Oct 13 05:26:58.849821 kubelet[2786]: I1013 05:26:58.849820 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0243a395-1b48-4301-9350-f25044be2770-calico-apiserver-certs\") pod \"calico-apiserver-5db6d8fb56-fz9kt\" (UID: \"0243a395-1b48-4301-9350-f25044be2770\") " pod="calico-apiserver/calico-apiserver-5db6d8fb56-fz9kt" Oct 13 05:26:58.850075 kubelet[2786]: I1013 05:26:58.849847 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fpr5\" (UniqueName: \"kubernetes.io/projected/0243a395-1b48-4301-9350-f25044be2770-kube-api-access-4fpr5\") pod \"calico-apiserver-5db6d8fb56-fz9kt\" (UID: \"0243a395-1b48-4301-9350-f25044be2770\") " pod="calico-apiserver/calico-apiserver-5db6d8fb56-fz9kt" Oct 13 05:26:58.850075 kubelet[2786]: I1013 05:26:58.849867 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a825ede-be6e-47cb-aeed-0d14204409ce-whisker-ca-bundle\") pod \"whisker-976bd986c-hsx55\" (UID: \"7a825ede-be6e-47cb-aeed-0d14204409ce\") " pod="calico-system/whisker-976bd986c-hsx55" Oct 13 05:26:58.850075 kubelet[2786]: I1013 05:26:58.849890 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0feb9a5-18ab-4324-b59c-f36e6436fc38-config-volume\") pod \"coredns-674b8bbfcf-n8c44\" (UID: \"a0feb9a5-18ab-4324-b59c-f36e6436fc38\") " pod="kube-system/coredns-674b8bbfcf-n8c44" Oct 13 05:26:58.850075 kubelet[2786]: I1013 05:26:58.849928 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/581c5944-bb01-4f41-be01-d38804d92d32-calico-apiserver-certs\") pod \"calico-apiserver-5db6d8fb56-58p6w\" (UID: \"581c5944-bb01-4f41-be01-d38804d92d32\") " pod="calico-apiserver/calico-apiserver-5db6d8fb56-58p6w" Oct 13 05:26:58.850075 kubelet[2786]: I1013 05:26:58.849975 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a825ede-be6e-47cb-aeed-0d14204409ce-whisker-backend-key-pair\") pod \"whisker-976bd986c-hsx55\" (UID: \"7a825ede-be6e-47cb-aeed-0d14204409ce\") " pod="calico-system/whisker-976bd986c-hsx55" Oct 13 05:26:58.850346 kubelet[2786]: I1013 05:26:58.849998 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/496214e4-ed29-4a93-b78d-1c399cc61c6b-tigera-ca-bundle\") pod \"calico-kube-controllers-75f997d669-22b4j\" (UID: \"496214e4-ed29-4a93-b78d-1c399cc61c6b\") " 
pod="calico-system/calico-kube-controllers-75f997d669-22b4j" Oct 13 05:26:58.850346 kubelet[2786]: I1013 05:26:58.850019 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca8ebd82-7860-48a6-ad85-24113129edfb-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-57bv6\" (UID: \"ca8ebd82-7860-48a6-ad85-24113129edfb\") " pod="calico-system/goldmane-54d579b49d-57bv6" Oct 13 05:26:58.850346 kubelet[2786]: I1013 05:26:58.850036 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btjxd\" (UniqueName: \"kubernetes.io/projected/7a825ede-be6e-47cb-aeed-0d14204409ce-kube-api-access-btjxd\") pod \"whisker-976bd986c-hsx55\" (UID: \"7a825ede-be6e-47cb-aeed-0d14204409ce\") " pod="calico-system/whisker-976bd986c-hsx55" Oct 13 05:26:58.850346 kubelet[2786]: I1013 05:26:58.850059 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j4t7\" (UniqueName: \"kubernetes.io/projected/496214e4-ed29-4a93-b78d-1c399cc61c6b-kube-api-access-8j4t7\") pod \"calico-kube-controllers-75f997d669-22b4j\" (UID: \"496214e4-ed29-4a93-b78d-1c399cc61c6b\") " pod="calico-system/calico-kube-controllers-75f997d669-22b4j" Oct 13 05:26:58.850346 kubelet[2786]: I1013 05:26:58.850084 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2d55\" (UniqueName: \"kubernetes.io/projected/ca8ebd82-7860-48a6-ad85-24113129edfb-kube-api-access-n2d55\") pod \"goldmane-54d579b49d-57bv6\" (UID: \"ca8ebd82-7860-48a6-ad85-24113129edfb\") " pod="calico-system/goldmane-54d579b49d-57bv6" Oct 13 05:26:58.850914 kubelet[2786]: I1013 05:26:58.850208 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca8ebd82-7860-48a6-ad85-24113129edfb-config\") pod \"goldmane-54d579b49d-57bv6\" (UID: \"ca8ebd82-7860-48a6-ad85-24113129edfb\") " pod="calico-system/goldmane-54d579b49d-57bv6" Oct 13 05:26:58.850914 kubelet[2786]: I1013 05:26:58.850238 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ca8ebd82-7860-48a6-ad85-24113129edfb-goldmane-key-pair\") pod \"goldmane-54d579b49d-57bv6\" (UID: \"ca8ebd82-7860-48a6-ad85-24113129edfb\") " pod="calico-system/goldmane-54d579b49d-57bv6" Oct 13 05:26:58.850914 kubelet[2786]: I1013 05:26:58.850276 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6qsn\" (UniqueName: \"kubernetes.io/projected/a0feb9a5-18ab-4324-b59c-f36e6436fc38-kube-api-access-z6qsn\") pod \"coredns-674b8bbfcf-n8c44\" (UID: \"a0feb9a5-18ab-4324-b59c-f36e6436fc38\") " pod="kube-system/coredns-674b8bbfcf-n8c44" Oct 13 05:26:59.046769 kubelet[2786]: E1013 05:26:59.046690 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:59.047799 containerd[1625]: time="2025-10-13T05:26:59.047734805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9w82,Uid:518ce9cd-e9e5-4f27-890c-d2fae6ff98e3,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:59.062643 containerd[1625]: time="2025-10-13T05:26:59.062576896Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-58p6w,Uid:581c5944-bb01-4f41-be01-d38804d92d32,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:26:59.084883 containerd[1625]: time="2025-10-13T05:26:59.084811214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-976bd986c-hsx55,Uid:7a825ede-be6e-47cb-aeed-0d14204409ce,Namespace:calico-system,Attempt:0,}" Oct 13 05:26:59.098964 kubelet[2786]: E1013 05:26:59.098615 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:26:59.103205 containerd[1625]: time="2025-10-13T05:26:59.103151387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n8c44,Uid:a0feb9a5-18ab-4324-b59c-f36e6436fc38,Namespace:kube-system,Attempt:0,}" Oct 13 05:26:59.103507 containerd[1625]: time="2025-10-13T05:26:59.103428658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-fz9kt,Uid:0243a395-1b48-4301-9350-f25044be2770,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:26:59.111189 containerd[1625]: time="2025-10-13T05:26:59.111093706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f997d669-22b4j,Uid:496214e4-ed29-4a93-b78d-1c399cc61c6b,Namespace:calico-system,Attempt:0,}" Oct 13 05:26:59.114667 containerd[1625]: time="2025-10-13T05:26:59.114580306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-57bv6,Uid:ca8ebd82-7860-48a6-ad85-24113129edfb,Namespace:calico-system,Attempt:0,}" Oct 13 05:26:59.297557 containerd[1625]: time="2025-10-13T05:26:59.297378195Z" level=error msg="Failed to destroy network for sandbox \"f3a5658348d26e5666eac915d57c4aef16d0e386c71c202fe9c986a372898b26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.298123 containerd[1625]: time="2025-10-13T05:26:59.297622655Z" level=error msg="Failed to destroy network for sandbox \"f6217197cbe5a0883f2248b189585fe7aed2642aa90a72849602bd3c19d74d5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.299022 containerd[1625]: time="2025-10-13T05:26:59.297335134Z" level=error msg="Failed to destroy network for sandbox \"e90556b1705dea157f0707b8b995c8cd77d4850aceb0c4d80d881530b30c71c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.299122 containerd[1625]: time="2025-10-13T05:26:59.298641097Z" level=error msg="Failed to destroy network for sandbox \"d2177f9e39ad061da8f50a662781e6076450e9eed206b241dd58fd4378a2f8fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.299327 containerd[1625]: time="2025-10-13T05:26:59.298973292Z" level=error msg="Failed to destroy network for sandbox \"25730027792bbf7fb5679b6ddf9ce332fe6a5d88c2a263a6b5696e082d680e87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 13 05:26:59.300409 containerd[1625]: time="2025-10-13T05:26:59.300344778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9w82,Uid:518ce9cd-e9e5-4f27-890c-d2fae6ff98e3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a5658348d26e5666eac915d57c4aef16d0e386c71c202fe9c986a372898b26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.301061 kubelet[2786]: E1013 05:26:59.300982 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a5658348d26e5666eac915d57c4aef16d0e386c71c202fe9c986a372898b26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.301878 kubelet[2786]: E1013 05:26:59.301779 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a5658348d26e5666eac915d57c4aef16d0e386c71c202fe9c986a372898b26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h9w82" Oct 13 05:26:59.301878 kubelet[2786]: E1013 05:26:59.301839 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a5658348d26e5666eac915d57c4aef16d0e386c71c202fe9c986a372898b26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h9w82" Oct 13 05:26:59.301971 kubelet[2786]: E1013 05:26:59.301920 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h9w82_kube-system(518ce9cd-e9e5-4f27-890c-d2fae6ff98e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h9w82_kube-system(518ce9cd-e9e5-4f27-890c-d2fae6ff98e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3a5658348d26e5666eac915d57c4aef16d0e386c71c202fe9c986a372898b26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h9w82" podUID="518ce9cd-e9e5-4f27-890c-d2fae6ff98e3" Oct 13 05:26:59.307007 containerd[1625]: time="2025-10-13T05:26:59.306811967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-fz9kt,Uid:0243a395-1b48-4301-9350-f25044be2770,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6217197cbe5a0883f2248b189585fe7aed2642aa90a72849602bd3c19d74d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.307248 kubelet[2786]: E1013 05:26:59.307215 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"f6217197cbe5a0883f2248b189585fe7aed2642aa90a72849602bd3c19d74d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.307298 kubelet[2786]: E1013 05:26:59.307267 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6217197cbe5a0883f2248b189585fe7aed2642aa90a72849602bd3c19d74d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5db6d8fb56-fz9kt" Oct 13 05:26:59.307298 kubelet[2786]: E1013 05:26:59.307287 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6217197cbe5a0883f2248b189585fe7aed2642aa90a72849602bd3c19d74d5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5db6d8fb56-fz9kt" Oct 13 05:26:59.307546 kubelet[2786]: E1013 05:26:59.307345 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5db6d8fb56-fz9kt_calico-apiserver(0243a395-1b48-4301-9350-f25044be2770)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5db6d8fb56-fz9kt_calico-apiserver(0243a395-1b48-4301-9350-f25044be2770)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6217197cbe5a0883f2248b189585fe7aed2642aa90a72849602bd3c19d74d5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5db6d8fb56-fz9kt" podUID="0243a395-1b48-4301-9350-f25044be2770" Oct 13 05:26:59.309597 containerd[1625]: time="2025-10-13T05:26:59.308866596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-976bd986c-hsx55,Uid:7a825ede-be6e-47cb-aeed-0d14204409ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2177f9e39ad061da8f50a662781e6076450e9eed206b241dd58fd4378a2f8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.310837 kubelet[2786]: E1013 05:26:59.309852 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2177f9e39ad061da8f50a662781e6076450e9eed206b241dd58fd4378a2f8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.310837 kubelet[2786]: E1013 05:26:59.309951 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2177f9e39ad061da8f50a662781e6076450e9eed206b241dd58fd4378a2f8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-976bd986c-hsx55" Oct 13 05:26:59.310837 kubelet[2786]: E1013 05:26:59.309984 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2177f9e39ad061da8f50a662781e6076450e9eed206b241dd58fd4378a2f8fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-976bd986c-hsx55" Oct 13 05:26:59.310978 kubelet[2786]: E1013 05:26:59.310053 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-976bd986c-hsx55_calico-system(7a825ede-be6e-47cb-aeed-0d14204409ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-976bd986c-hsx55_calico-system(7a825ede-be6e-47cb-aeed-0d14204409ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2177f9e39ad061da8f50a662781e6076450e9eed206b241dd58fd4378a2f8fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-976bd986c-hsx55" podUID="7a825ede-be6e-47cb-aeed-0d14204409ce" Oct 13 05:26:59.311291 containerd[1625]: time="2025-10-13T05:26:59.311126853Z" level=error msg="Failed to destroy network for sandbox \"00dafe002f3d88db54f24d1b6ec37608b89f0cc1c702a229b00c99cf4bc2b984\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.317795 containerd[1625]: time="2025-10-13T05:26:59.317731970Z" level=error msg="Failed to destroy network for sandbox \"1c7424ff5733632032b5e4c3b7373071f23298696a5dac7b73556a744f39fb6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.321808 containerd[1625]: time="2025-10-13T05:26:59.321698482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-58p6w,Uid:581c5944-bb01-4f41-be01-d38804d92d32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25730027792bbf7fb5679b6ddf9ce332fe6a5d88c2a263a6b5696e082d680e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.322191 kubelet[2786]: E1013 05:26:59.322074 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25730027792bbf7fb5679b6ddf9ce332fe6a5d88c2a263a6b5696e082d680e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.322191 kubelet[2786]: E1013 05:26:59.322157 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25730027792bbf7fb5679b6ddf9ce332fe6a5d88c2a263a6b5696e082d680e87\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5db6d8fb56-58p6w" Oct 13 05:26:59.322191 kubelet[2786]: E1013 05:26:59.322184 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25730027792bbf7fb5679b6ddf9ce332fe6a5d88c2a263a6b5696e082d680e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5db6d8fb56-58p6w" Oct 13 05:26:59.323468 kubelet[2786]: E1013 05:26:59.322262 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5db6d8fb56-58p6w_calico-apiserver(581c5944-bb01-4f41-be01-d38804d92d32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5db6d8fb56-58p6w_calico-apiserver(581c5944-bb01-4f41-be01-d38804d92d32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25730027792bbf7fb5679b6ddf9ce332fe6a5d88c2a263a6b5696e082d680e87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5db6d8fb56-58p6w" podUID="581c5944-bb01-4f41-be01-d38804d92d32" Oct 13 05:26:59.329376 containerd[1625]: time="2025-10-13T05:26:59.327034265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n8c44,Uid:a0feb9a5-18ab-4324-b59c-f36e6436fc38,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e90556b1705dea157f0707b8b995c8cd77d4850aceb0c4d80d881530b30c71c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.329376 containerd[1625]: time="2025-10-13T05:26:59.328318457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-57bv6,Uid:ca8ebd82-7860-48a6-ad85-24113129edfb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00dafe002f3d88db54f24d1b6ec37608b89f0cc1c702a229b00c99cf4bc2b984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.330071 kubelet[2786]: E1013 05:26:59.329992 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e90556b1705dea157f0707b8b995c8cd77d4850aceb0c4d80d881530b30c71c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.330295 kubelet[2786]: E1013 05:26:59.330211 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e90556b1705dea157f0707b8b995c8cd77d4850aceb0c4d80d881530b30c71c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n8c44" Oct 13 05:26:59.330393 kubelet[2786]: E1013 05:26:59.330322 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e90556b1705dea157f0707b8b995c8cd77d4850aceb0c4d80d881530b30c71c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n8c44" Oct 13 05:26:59.330459 kubelet[2786]: E1013 05:26:59.330402 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n8c44_kube-system(a0feb9a5-18ab-4324-b59c-f36e6436fc38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n8c44_kube-system(a0feb9a5-18ab-4324-b59c-f36e6436fc38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e90556b1705dea157f0707b8b995c8cd77d4850aceb0c4d80d881530b30c71c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n8c44" podUID="a0feb9a5-18ab-4324-b59c-f36e6436fc38" Oct 13 05:26:59.330593 kubelet[2786]: E1013 05:26:59.330254 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00dafe002f3d88db54f24d1b6ec37608b89f0cc1c702a229b00c99cf4bc2b984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.330593 kubelet[2786]: E1013 05:26:59.330488 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00dafe002f3d88db54f24d1b6ec37608b89f0cc1c702a229b00c99cf4bc2b984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-57bv6" Oct 13 05:26:59.330593 kubelet[2786]: E1013 05:26:59.330514 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00dafe002f3d88db54f24d1b6ec37608b89f0cc1c702a229b00c99cf4bc2b984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-57bv6" Oct 13 05:26:59.330698 kubelet[2786]: E1013 05:26:59.330550 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-57bv6_calico-system(ca8ebd82-7860-48a6-ad85-24113129edfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-57bv6_calico-system(ca8ebd82-7860-48a6-ad85-24113129edfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00dafe002f3d88db54f24d1b6ec37608b89f0cc1c702a229b00c99cf4bc2b984\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-57bv6" 
podUID="ca8ebd82-7860-48a6-ad85-24113129edfb" Oct 13 05:26:59.332099 containerd[1625]: time="2025-10-13T05:26:59.332011254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f997d669-22b4j,Uid:496214e4-ed29-4a93-b78d-1c399cc61c6b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7424ff5733632032b5e4c3b7373071f23298696a5dac7b73556a744f39fb6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.334683 kubelet[2786]: E1013 05:26:59.334597 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7424ff5733632032b5e4c3b7373071f23298696a5dac7b73556a744f39fb6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:26:59.334683 kubelet[2786]: E1013 05:26:59.334673 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7424ff5733632032b5e4c3b7373071f23298696a5dac7b73556a744f39fb6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f997d669-22b4j" Oct 13 05:26:59.334683 kubelet[2786]: E1013 05:26:59.334703 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c7424ff5733632032b5e4c3b7373071f23298696a5dac7b73556a744f39fb6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f997d669-22b4j" Oct 13 05:26:59.335044 kubelet[2786]: E1013 05:26:59.334767 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75f997d669-22b4j_calico-system(496214e4-ed29-4a93-b78d-1c399cc61c6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75f997d669-22b4j_calico-system(496214e4-ed29-4a93-b78d-1c399cc61c6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c7424ff5733632032b5e4c3b7373071f23298696a5dac7b73556a744f39fb6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75f997d669-22b4j" podUID="496214e4-ed29-4a93-b78d-1c399cc61c6b" Oct 13 05:26:59.531778 containerd[1625]: time="2025-10-13T05:26:59.530512002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:27:00.329218 systemd[1]: Created slice kubepods-besteffort-pod7698d12a_0689_4361_88b4_77840a78376a.slice - libcontainer container kubepods-besteffort-pod7698d12a_0689_4361_88b4_77840a78376a.slice. 
Oct 13 05:27:00.332675 containerd[1625]: time="2025-10-13T05:27:00.332620166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckpkj,Uid:7698d12a-0689-4361-88b4-77840a78376a,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:00.397254 containerd[1625]: time="2025-10-13T05:27:00.397180355Z" level=error msg="Failed to destroy network for sandbox \"1e111ef6136ff4d2d69fbbb333fad5746fc7496ac75e335ec0de013d96c57644\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:00.399750 systemd[1]: run-netns-cni\x2d22ddb913\x2dd2fe\x2d6c79\x2de2e8\x2d76090f4596e1.mount: Deactivated successfully. Oct 13 05:27:00.401273 containerd[1625]: time="2025-10-13T05:27:00.401230632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckpkj,Uid:7698d12a-0689-4361-88b4-77840a78376a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e111ef6136ff4d2d69fbbb333fad5746fc7496ac75e335ec0de013d96c57644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:00.401648 kubelet[2786]: E1013 05:27:00.401590 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e111ef6136ff4d2d69fbbb333fad5746fc7496ac75e335ec0de013d96c57644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:00.402045 kubelet[2786]: E1013 05:27:00.401684 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e111ef6136ff4d2d69fbbb333fad5746fc7496ac75e335ec0de013d96c57644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckpkj" Oct 13 05:27:00.402045 kubelet[2786]: E1013 05:27:00.401713 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e111ef6136ff4d2d69fbbb333fad5746fc7496ac75e335ec0de013d96c57644\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckpkj" Oct 13 05:27:00.402045 kubelet[2786]: E1013 05:27:00.401791 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ckpkj_calico-system(7698d12a-0689-4361-88b4-77840a78376a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ckpkj_calico-system(7698d12a-0689-4361-88b4-77840a78376a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e111ef6136ff4d2d69fbbb333fad5746fc7496ac75e335ec0de013d96c57644\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" 
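
Every sandbox failure above ends in the same underlying condition: the Calico CNI plugin stats /var/lib/calico/nodename before it will set up (or tear down) pod networking, and that file does not exist until the calico/node container is running on the host and has written it. As a purely illustrative check mirroring the precondition named in these errors (an assumption-labelled sketch, not Calico's own code):

    // nodename_check_sketch.go - illustrative check for the precondition the
    // CNI errors above keep reporting: calico/node must have written its node
    // name to /var/lib/calico/nodename on the host before pod networking works.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/calico/nodename")
    	if err != nil {
    		// This is the condition containerd keeps logging:
    		// "stat /var/lib/calico/nodename: no such file or directory".
    		fmt.Fprintln(os.Stderr, "calico/node has not initialized this host yet:", err)
    		os.Exit(1)
    	}
    	fmt.Println("calico nodename:", strings.TrimSpace(string(data)))
    }

Consistent with that, the failures in the log continue only until the calico-node pod's containers (flexvol-driver, install-cni, and then calico/node itself, whose image pull begins above) are all running.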
Oct 13 05:27:02.091941 kubelet[2786]: I1013 05:27:02.091849 2786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:27:02.094135 kubelet[2786]: E1013 05:27:02.094094 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:02.535425 kubelet[2786]: E1013 05:27:02.535371 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:07.911724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1262451319.mount: Deactivated successfully. Oct 13 05:27:10.366620 containerd[1625]: time="2025-10-13T05:27:10.366487246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-976bd986c-hsx55,Uid:7a825ede-be6e-47cb-aeed-0d14204409ce,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:11.320338 containerd[1625]: time="2025-10-13T05:27:11.320280014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f997d669-22b4j,Uid:496214e4-ed29-4a93-b78d-1c399cc61c6b,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:11.320531 containerd[1625]: time="2025-10-13T05:27:11.320280245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-58p6w,Uid:581c5944-bb01-4f41-be01-d38804d92d32,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:11.320531 containerd[1625]: time="2025-10-13T05:27:11.320285494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-57bv6,Uid:ca8ebd82-7860-48a6-ad85-24113129edfb,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:11.444339 containerd[1625]: time="2025-10-13T05:27:11.444262023Z" level=error msg="Failed to destroy network for sandbox \"0a64742cbe47fe65ef34e80f020843f53798ef62ea75bd7d2bf86d6749375eb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:11.446570 systemd[1]: run-netns-cni\x2d0233c698\x2d4ac8\x2dfe95\x2dd97b\x2de1c49ec55b1d.mount: Deactivated successfully. 
Oct 13 05:27:11.797463 containerd[1625]: time="2025-10-13T05:27:11.797384985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-976bd986c-hsx55,Uid:7a825ede-be6e-47cb-aeed-0d14204409ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a64742cbe47fe65ef34e80f020843f53798ef62ea75bd7d2bf86d6749375eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:11.797704 kubelet[2786]: E1013 05:27:11.797670 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a64742cbe47fe65ef34e80f020843f53798ef62ea75bd7d2bf86d6749375eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:11.798167 kubelet[2786]: E1013 05:27:11.797736 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a64742cbe47fe65ef34e80f020843f53798ef62ea75bd7d2bf86d6749375eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-976bd986c-hsx55" Oct 13 05:27:11.798167 kubelet[2786]: E1013 05:27:11.797771 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a64742cbe47fe65ef34e80f020843f53798ef62ea75bd7d2bf86d6749375eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-976bd986c-hsx55" Oct 13 05:27:11.798167 kubelet[2786]: E1013 05:27:11.797864 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-976bd986c-hsx55_calico-system(7a825ede-be6e-47cb-aeed-0d14204409ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-976bd986c-hsx55_calico-system(7a825ede-be6e-47cb-aeed-0d14204409ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a64742cbe47fe65ef34e80f020843f53798ef62ea75bd7d2bf86d6749375eb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-976bd986c-hsx55" podUID="7a825ede-be6e-47cb-aeed-0d14204409ce" Oct 13 05:27:11.976563 containerd[1625]: time="2025-10-13T05:27:11.976479592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:12.102533 containerd[1625]: time="2025-10-13T05:27:12.102340952Z" level=error msg="Failed to destroy network for sandbox \"55d372e7d05ae84b97a28defc64329537ed94ddf5d21d3c69e9c5af7d954cdea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.105637 systemd[1]: run-netns-cni\x2db93f5254\x2d0581\x2dd309\x2d0c5a\x2deb3d83beb7ce.mount: Deactivated successfully. 
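The run-netns-cni\x2d… units appearing after each failure are systemd mount units for CNI network namespaces. Systemd escapes "-" inside a path component as \x2d and turns "/" into "-", so a unit name such as run-netns-cni\x2db93f5254\x2d0581\x2dd309\x2d0c5a\x2deb3d83beb7ce.mount corresponds to the namespace mount at /run/netns/cni-b93f5254-0581-d309-0c5a-eb3d83beb7ce. Each failed sandbox leaves such a namespace behind, and the "Deactivated successfully" lines show systemd tearing the mount down again.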
Oct 13 05:27:12.194392 containerd[1625]: time="2025-10-13T05:27:12.194307394Z" level=error msg="Failed to destroy network for sandbox \"a79e69570156d24a8aa184c0fe6034c31796007ca85afcb91593bdd4c9a5e4f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.196766 systemd[1]: run-netns-cni\x2dd3aa156d\x2d4c2b\x2d605e\x2da08c\x2d6478a5732878.mount: Deactivated successfully. Oct 13 05:27:12.305291 containerd[1625]: time="2025-10-13T05:27:12.305204753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:27:12.316840 containerd[1625]: time="2025-10-13T05:27:12.316736940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f997d669-22b4j,Uid:496214e4-ed29-4a93-b78d-1c399cc61c6b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d372e7d05ae84b97a28defc64329537ed94ddf5d21d3c69e9c5af7d954cdea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.318892 kubelet[2786]: E1013 05:27:12.318813 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d372e7d05ae84b97a28defc64329537ed94ddf5d21d3c69e9c5af7d954cdea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.318975 kubelet[2786]: E1013 05:27:12.318893 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d372e7d05ae84b97a28defc64329537ed94ddf5d21d3c69e9c5af7d954cdea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f997d669-22b4j" Oct 13 05:27:12.318975 kubelet[2786]: E1013 05:27:12.318922 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d372e7d05ae84b97a28defc64329537ed94ddf5d21d3c69e9c5af7d954cdea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f997d669-22b4j" Oct 13 05:27:12.319057 kubelet[2786]: E1013 05:27:12.318976 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75f997d669-22b4j_calico-system(496214e4-ed29-4a93-b78d-1c399cc61c6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75f997d669-22b4j_calico-system(496214e4-ed29-4a93-b78d-1c399cc61c6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55d372e7d05ae84b97a28defc64329537ed94ddf5d21d3c69e9c5af7d954cdea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-75f997d669-22b4j" podUID="496214e4-ed29-4a93-b78d-1c399cc61c6b" Oct 13 05:27:12.319882 kubelet[2786]: E1013 05:27:12.319854 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:12.320400 kubelet[2786]: E1013 05:27:12.320336 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:12.320954 containerd[1625]: time="2025-10-13T05:27:12.320837297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n8c44,Uid:a0feb9a5-18ab-4324-b59c-f36e6436fc38,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:12.322400 containerd[1625]: time="2025-10-13T05:27:12.322310082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9w82,Uid:518ce9cd-e9e5-4f27-890c-d2fae6ff98e3,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:12.332452 containerd[1625]: time="2025-10-13T05:27:12.331769438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-58p6w,Uid:581c5944-bb01-4f41-be01-d38804d92d32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79e69570156d24a8aa184c0fe6034c31796007ca85afcb91593bdd4c9a5e4f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.332685 kubelet[2786]: E1013 05:27:12.332257 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79e69570156d24a8aa184c0fe6034c31796007ca85afcb91593bdd4c9a5e4f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.332685 kubelet[2786]: E1013 05:27:12.332421 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79e69570156d24a8aa184c0fe6034c31796007ca85afcb91593bdd4c9a5e4f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5db6d8fb56-58p6w" Oct 13 05:27:12.332685 kubelet[2786]: E1013 05:27:12.332479 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79e69570156d24a8aa184c0fe6034c31796007ca85afcb91593bdd4c9a5e4f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5db6d8fb56-58p6w" Oct 13 05:27:12.332792 kubelet[2786]: E1013 05:27:12.332568 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5db6d8fb56-58p6w_calico-apiserver(581c5944-bb01-4f41-be01-d38804d92d32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5db6d8fb56-58p6w_calico-apiserver(581c5944-bb01-4f41-be01-d38804d92d32)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"a79e69570156d24a8aa184c0fe6034c31796007ca85afcb91593bdd4c9a5e4f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5db6d8fb56-58p6w" podUID="581c5944-bb01-4f41-be01-d38804d92d32" Oct 13 05:27:12.341749 containerd[1625]: time="2025-10-13T05:27:12.341675683Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:12.357969 containerd[1625]: time="2025-10-13T05:27:12.357830278Z" level=error msg="Failed to destroy network for sandbox \"875b32d78824e28198b91b4ff38b2c00d682ae8384af0011b639cc3a07ccbe11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.360398 systemd[1]: run-netns-cni\x2d95734861\x2d04d2\x2d0c4d\x2d7fbb\x2db4f3706cdc86.mount: Deactivated successfully. Oct 13 05:27:12.469210 containerd[1625]: time="2025-10-13T05:27:12.469122408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-57bv6,Uid:ca8ebd82-7860-48a6-ad85-24113129edfb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"875b32d78824e28198b91b4ff38b2c00d682ae8384af0011b639cc3a07ccbe11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.469899 kubelet[2786]: E1013 05:27:12.469490 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"875b32d78824e28198b91b4ff38b2c00d682ae8384af0011b639cc3a07ccbe11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.469899 kubelet[2786]: E1013 05:27:12.469566 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"875b32d78824e28198b91b4ff38b2c00d682ae8384af0011b639cc3a07ccbe11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-57bv6" Oct 13 05:27:12.469899 kubelet[2786]: E1013 05:27:12.469590 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"875b32d78824e28198b91b4ff38b2c00d682ae8384af0011b639cc3a07ccbe11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-57bv6" Oct 13 05:27:12.470066 kubelet[2786]: E1013 05:27:12.469648 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-57bv6_calico-system(ca8ebd82-7860-48a6-ad85-24113129edfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-54d579b49d-57bv6_calico-system(ca8ebd82-7860-48a6-ad85-24113129edfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"875b32d78824e28198b91b4ff38b2c00d682ae8384af0011b639cc3a07ccbe11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-57bv6" podUID="ca8ebd82-7860-48a6-ad85-24113129edfb" Oct 13 05:27:12.533466 containerd[1625]: time="2025-10-13T05:27:12.533390197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:12.534175 containerd[1625]: time="2025-10-13T05:27:12.534115990Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 13.003552862s" Oct 13 05:27:12.534268 containerd[1625]: time="2025-10-13T05:27:12.534179288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:27:12.546506 containerd[1625]: time="2025-10-13T05:27:12.546413333Z" level=error msg="Failed to destroy network for sandbox \"322e5b79950f563d38825e768dadb07c7412f24bea3f03bf5dfe4f5820b50374\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.551669 containerd[1625]: time="2025-10-13T05:27:12.551606852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n8c44,Uid:a0feb9a5-18ab-4324-b59c-f36e6436fc38,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"322e5b79950f563d38825e768dadb07c7412f24bea3f03bf5dfe4f5820b50374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.551919 kubelet[2786]: E1013 05:27:12.551871 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322e5b79950f563d38825e768dadb07c7412f24bea3f03bf5dfe4f5820b50374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.551982 kubelet[2786]: E1013 05:27:12.551942 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322e5b79950f563d38825e768dadb07c7412f24bea3f03bf5dfe4f5820b50374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n8c44" Oct 13 05:27:12.551982 kubelet[2786]: E1013 05:27:12.551963 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"322e5b79950f563d38825e768dadb07c7412f24bea3f03bf5dfe4f5820b50374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n8c44" Oct 13 05:27:12.552068 kubelet[2786]: E1013 05:27:12.552041 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n8c44_kube-system(a0feb9a5-18ab-4324-b59c-f36e6436fc38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n8c44_kube-system(a0feb9a5-18ab-4324-b59c-f36e6436fc38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"322e5b79950f563d38825e768dadb07c7412f24bea3f03bf5dfe4f5820b50374\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n8c44" podUID="a0feb9a5-18ab-4324-b59c-f36e6436fc38" Oct 13 05:27:12.552949 containerd[1625]: time="2025-10-13T05:27:12.552764054Z" level=info msg="CreateContainer within sandbox \"123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:27:12.571735 containerd[1625]: time="2025-10-13T05:27:12.571661426Z" level=info msg="Container 9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:12.594378 containerd[1625]: time="2025-10-13T05:27:12.594289863Z" level=info msg="CreateContainer within sandbox \"123d9aea9a7681955bd06378024a1974cddb39a56e06bbd5741fbd58a0a03868\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8\"" Oct 13 05:27:12.595373 containerd[1625]: time="2025-10-13T05:27:12.595310128Z" level=info msg="StartContainer for \"9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8\"" Oct 13 05:27:12.597302 containerd[1625]: time="2025-10-13T05:27:12.597242405Z" level=info msg="connecting to shim 9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8" address="unix:///run/containerd/s/4dd1db28fa185958cc0af6fe9a9acc2cf1c1bc046c73ef58b3f24c9b6560cfc5" protocol=ttrpc version=3 Oct 13 05:27:12.600536 containerd[1625]: time="2025-10-13T05:27:12.600458072Z" level=error msg="Failed to destroy network for sandbox \"f6258a3a6567919c8cd9d1c6c2761206041e80800f8acc55a5259e68579fad39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.603403 containerd[1625]: time="2025-10-13T05:27:12.602027088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9w82,Uid:518ce9cd-e9e5-4f27-890c-d2fae6ff98e3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6258a3a6567919c8cd9d1c6c2761206041e80800f8acc55a5259e68579fad39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.603557 kubelet[2786]: E1013 05:27:12.602332 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"f6258a3a6567919c8cd9d1c6c2761206041e80800f8acc55a5259e68579fad39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:12.603557 kubelet[2786]: E1013 05:27:12.602424 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6258a3a6567919c8cd9d1c6c2761206041e80800f8acc55a5259e68579fad39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h9w82" Oct 13 05:27:12.603557 kubelet[2786]: E1013 05:27:12.602452 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6258a3a6567919c8cd9d1c6c2761206041e80800f8acc55a5259e68579fad39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h9w82" Oct 13 05:27:12.603670 kubelet[2786]: E1013 05:27:12.602513 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h9w82_kube-system(518ce9cd-e9e5-4f27-890c-d2fae6ff98e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h9w82_kube-system(518ce9cd-e9e5-4f27-890c-d2fae6ff98e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6258a3a6567919c8cd9d1c6c2761206041e80800f8acc55a5259e68579fad39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h9w82" podUID="518ce9cd-e9e5-4f27-890c-d2fae6ff98e3" Oct 13 05:27:12.746616 systemd[1]: Started cri-containerd-9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8.scope - libcontainer container 9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8. Oct 13 05:27:13.107219 systemd[1]: run-netns-cni\x2de194879d\x2d31ae\x2da8f7\x2d0d52\x2dee73a69d99da.mount: Deactivated successfully. Oct 13 05:27:13.107400 systemd[1]: run-netns-cni\x2d4e43e709\x2d0995\x2d17ea\x2d3ef9\x2dd403d95377f1.mount: Deactivated successfully. Oct 13 05:27:13.782194 containerd[1625]: time="2025-10-13T05:27:13.782125131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckpkj,Uid:7698d12a-0689-4361-88b4-77840a78376a,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:13.911984 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:27:13.913754 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:27:14.322154 containerd[1625]: time="2025-10-13T05:27:14.322105238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-fz9kt,Uid:0243a395-1b48-4301-9350-f25044be2770,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:15.122697 containerd[1625]: time="2025-10-13T05:27:15.122651087Z" level=info msg="StartContainer for \"9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8\" returns successfully" Oct 13 05:27:15.486221 containerd[1625]: time="2025-10-13T05:27:15.436640367Z" level=error msg="Failed to destroy network for sandbox \"959f8124c8840ffa73f72aeeba5e2fdecdd7843b4a6419f44c862a6a6ee51f05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:15.439416 systemd[1]: run-netns-cni\x2de8220c0f\x2d73cc\x2df5be\x2df059\x2d91e0dbc20ba3.mount: Deactivated successfully. Oct 13 05:27:16.226697 containerd[1625]: time="2025-10-13T05:27:16.226632785Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8\" id:\"3baaa7662635d6e41c1bf1f0ec78b3569579f50e0b0dfcf4bd227f974da7f886\" pid:4169 exit_status:1 exited_at:{seconds:1760333236 nanos:226207477}" Oct 13 05:27:16.387852 containerd[1625]: time="2025-10-13T05:27:16.387774621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckpkj,Uid:7698d12a-0689-4361-88b4-77840a78376a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"959f8124c8840ffa73f72aeeba5e2fdecdd7843b4a6419f44c862a6a6ee51f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:16.388072 kubelet[2786]: E1013 05:27:16.388008 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959f8124c8840ffa73f72aeeba5e2fdecdd7843b4a6419f44c862a6a6ee51f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:27:16.388528 kubelet[2786]: E1013 05:27:16.388069 2786 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959f8124c8840ffa73f72aeeba5e2fdecdd7843b4a6419f44c862a6a6ee51f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckpkj" Oct 13 05:27:16.388528 kubelet[2786]: E1013 05:27:16.388089 2786 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959f8124c8840ffa73f72aeeba5e2fdecdd7843b4a6419f44c862a6a6ee51f05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ckpkj" Oct 13 05:27:16.388528 kubelet[2786]: E1013 05:27:16.388140 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-ckpkj_calico-system(7698d12a-0689-4361-88b4-77840a78376a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ckpkj_calico-system(7698d12a-0689-4361-88b4-77840a78376a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"959f8124c8840ffa73f72aeeba5e2fdecdd7843b4a6419f44c862a6a6ee51f05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ckpkj" podUID="7698d12a-0689-4361-88b4-77840a78376a" Oct 13 05:27:17.157381 kubelet[2786]: I1013 05:27:17.156457 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rtrjl" podStartSLOduration=6.267182919 podStartE2EDuration="35.156404561s" podCreationTimestamp="2025-10-13 05:26:42 +0000 UTC" firstStartedPulling="2025-10-13 05:26:43.646847253 +0000 UTC m=+23.437863542" lastFinishedPulling="2025-10-13 05:27:12.536068895 +0000 UTC m=+52.327085184" observedRunningTime="2025-10-13 05:27:17.155560072 +0000 UTC m=+56.946576361" watchObservedRunningTime="2025-10-13 05:27:17.156404561 +0000 UTC m=+56.947420850" Oct 13 05:27:17.240541 containerd[1625]: time="2025-10-13T05:27:17.240465478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8\" id:\"302b52f868f2bb1a2c546a5027aae8023535fe92b27f4fef1c4fb57554e06069\" pid:4215 exit_status:1 exited_at:{seconds:1760333237 nanos:240092677}" Oct 13 05:27:17.270044 kubelet[2786]: I1013 05:27:17.269980 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a825ede-be6e-47cb-aeed-0d14204409ce-whisker-ca-bundle\") pod \"7a825ede-be6e-47cb-aeed-0d14204409ce\" (UID: \"7a825ede-be6e-47cb-aeed-0d14204409ce\") " Oct 13 05:27:17.270044 kubelet[2786]: I1013 05:27:17.270044 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a825ede-be6e-47cb-aeed-0d14204409ce-whisker-backend-key-pair\") pod \"7a825ede-be6e-47cb-aeed-0d14204409ce\" (UID: \"7a825ede-be6e-47cb-aeed-0d14204409ce\") " Oct 13 05:27:17.270396 kubelet[2786]: I1013 05:27:17.270073 2786 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btjxd\" (UniqueName: \"kubernetes.io/projected/7a825ede-be6e-47cb-aeed-0d14204409ce-kube-api-access-btjxd\") pod \"7a825ede-be6e-47cb-aeed-0d14204409ce\" (UID: \"7a825ede-be6e-47cb-aeed-0d14204409ce\") " Oct 13 05:27:17.270690 kubelet[2786]: I1013 05:27:17.270613 2786 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a825ede-be6e-47cb-aeed-0d14204409ce-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7a825ede-be6e-47cb-aeed-0d14204409ce" (UID: "7a825ede-be6e-47cb-aeed-0d14204409ce"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:27:17.276543 systemd[1]: var-lib-kubelet-pods-7a825ede\x2dbe6e\x2d47cb\x2daeed\x2d0d14204409ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbtjxd.mount: Deactivated successfully. Oct 13 05:27:17.276682 systemd[1]: var-lib-kubelet-pods-7a825ede\x2dbe6e\x2d47cb\x2daeed\x2d0d14204409ce-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 13 05:27:17.278489 kubelet[2786]: I1013 05:27:17.277860 2786 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a825ede-be6e-47cb-aeed-0d14204409ce-kube-api-access-btjxd" (OuterVolumeSpecName: "kube-api-access-btjxd") pod "7a825ede-be6e-47cb-aeed-0d14204409ce" (UID: "7a825ede-be6e-47cb-aeed-0d14204409ce"). InnerVolumeSpecName "kube-api-access-btjxd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:27:17.278489 kubelet[2786]: I1013 05:27:17.277962 2786 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a825ede-be6e-47cb-aeed-0d14204409ce-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7a825ede-be6e-47cb-aeed-0d14204409ce" (UID: "7a825ede-be6e-47cb-aeed-0d14204409ce"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:27:17.370555 kubelet[2786]: I1013 05:27:17.370495 2786 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a825ede-be6e-47cb-aeed-0d14204409ce-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 05:27:17.370555 kubelet[2786]: I1013 05:27:17.370535 2786 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7a825ede-be6e-47cb-aeed-0d14204409ce-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 05:27:17.370555 kubelet[2786]: I1013 05:27:17.370544 2786 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-btjxd\" (UniqueName: \"kubernetes.io/projected/7a825ede-be6e-47cb-aeed-0d14204409ce-kube-api-access-btjxd\") on node \"localhost\" DevicePath \"\"" Oct 13 05:27:18.137472 systemd[1]: Removed slice kubepods-besteffort-pod7a825ede_be6e_47cb_aeed_0d14204409ce.slice - libcontainer container kubepods-besteffort-pod7a825ede_be6e_47cb_aeed_0d14204409ce.slice. 
Oct 13 05:27:21.297742 systemd-networkd[1518]: calied8dae0069f: Link UP Oct 13 05:27:21.298044 systemd-networkd[1518]: calied8dae0069f: Gained carrier Oct 13 05:27:21.861535 containerd[1625]: 2025-10-13 05:27:16.537 [INFO][4190] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:27:21.861535 containerd[1625]: 2025-10-13 05:27:17.653 [INFO][4190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0 calico-apiserver-5db6d8fb56- calico-apiserver 0243a395-1b48-4301-9350-f25044be2770 850 0 2025-10-13 05:26:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5db6d8fb56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5db6d8fb56-fz9kt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calied8dae0069f [] [] }} ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-fz9kt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-" Oct 13 05:27:21.861535 containerd[1625]: 2025-10-13 05:27:17.653 [INFO][4190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-fz9kt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" Oct 13 05:27:21.861535 containerd[1625]: 2025-10-13 05:27:19.167 [INFO][4233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" HandleID="k8s-pod-network.32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Workload="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:19.168 [INFO][4233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" HandleID="k8s-pod-network.32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Workload="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000ae5a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5db6d8fb56-fz9kt", "timestamp":"2025-10-13 05:27:19.167880483 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:19.168 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:19.168 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:19.168 [INFO][4233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:19.262 [INFO][4233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" host="localhost" Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:20.456 [INFO][4233] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:20.818 [INFO][4233] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:20.820 [INFO][4233] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:20.822 [INFO][4233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:21.863278 containerd[1625]: 2025-10-13 05:27:20.822 [INFO][4233] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" host="localhost" Oct 13 05:27:21.863593 containerd[1625]: 2025-10-13 05:27:20.824 [INFO][4233] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544 Oct 13 05:27:21.863593 containerd[1625]: 2025-10-13 05:27:20.851 [INFO][4233] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" host="localhost" Oct 13 05:27:21.863593 containerd[1625]: 2025-10-13 05:27:21.265 [INFO][4233] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" host="localhost" Oct 13 05:27:21.863593 containerd[1625]: 2025-10-13 05:27:21.265 [INFO][4233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" host="localhost" Oct 13 05:27:21.863593 containerd[1625]: 2025-10-13 05:27:21.266 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:27:21.863593 containerd[1625]: 2025-10-13 05:27:21.266 [INFO][4233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" HandleID="k8s-pod-network.32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Workload="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" Oct 13 05:27:21.863763 containerd[1625]: 2025-10-13 05:27:21.270 [INFO][4190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-fz9kt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0", GenerateName:"calico-apiserver-5db6d8fb56-", Namespace:"calico-apiserver", SelfLink:"", UID:"0243a395-1b48-4301-9350-f25044be2770", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db6d8fb56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5db6d8fb56-fz9kt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied8dae0069f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:21.863860 containerd[1625]: 2025-10-13 05:27:21.270 [INFO][4190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-fz9kt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" Oct 13 05:27:21.863860 containerd[1625]: 2025-10-13 05:27:21.270 [INFO][4190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied8dae0069f ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-fz9kt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" Oct 13 05:27:21.863860 containerd[1625]: 2025-10-13 05:27:21.298 [INFO][4190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-fz9kt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" Oct 13 05:27:21.863959 containerd[1625]: 2025-10-13 05:27:21.298 [INFO][4190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-fz9kt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0", GenerateName:"calico-apiserver-5db6d8fb56-", Namespace:"calico-apiserver", SelfLink:"", UID:"0243a395-1b48-4301-9350-f25044be2770", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db6d8fb56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544", Pod:"calico-apiserver-5db6d8fb56-fz9kt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calied8dae0069f", MAC:"ea:13:b7:ef:49:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:21.864045 containerd[1625]: 2025-10-13 05:27:21.855 [INFO][4190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-fz9kt" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--fz9kt-eth0" Oct 13 05:27:22.323232 kubelet[2786]: I1013 05:27:22.323173 2786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a825ede-be6e-47cb-aeed-0d14204409ce" path="/var/lib/kubelet/pods/7a825ede-be6e-47cb-aeed-0d14204409ce/volumes" Oct 13 05:27:22.725597 systemd-networkd[1518]: calied8dae0069f: Gained IPv6LL Oct 13 05:27:22.946038 systemd[1]: Created slice kubepods-besteffort-podab27cf4e_2db0_4b1d_bb1e_b60f9e7715f0.slice - libcontainer container kubepods-besteffort-podab27cf4e_2db0_4b1d_bb1e_b60f9e7715f0.slice. 
Oct 13 05:27:23.006023 kubelet[2786]: I1013 05:27:23.005904 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ffdc\" (UniqueName: \"kubernetes.io/projected/ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0-kube-api-access-2ffdc\") pod \"whisker-849c4656db-l2tnk\" (UID: \"ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0\") " pod="calico-system/whisker-849c4656db-l2tnk" Oct 13 05:27:23.006023 kubelet[2786]: I1013 05:27:23.005973 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0-whisker-backend-key-pair\") pod \"whisker-849c4656db-l2tnk\" (UID: \"ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0\") " pod="calico-system/whisker-849c4656db-l2tnk" Oct 13 05:27:23.006313 kubelet[2786]: I1013 05:27:23.006084 2786 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0-whisker-ca-bundle\") pod \"whisker-849c4656db-l2tnk\" (UID: \"ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0\") " pod="calico-system/whisker-849c4656db-l2tnk" Oct 13 05:27:23.320433 containerd[1625]: time="2025-10-13T05:27:23.320276562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-57bv6,Uid:ca8ebd82-7860-48a6-ad85-24113129edfb,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:23.340322 containerd[1625]: time="2025-10-13T05:27:23.338572969Z" level=info msg="connecting to shim 32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544" address="unix:///run/containerd/s/ddd20a69822f363f3c99f8bf5dcd67a3c63f098879a945efed06c11d8d6e7eb6" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:23.386599 systemd[1]: Started cri-containerd-32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544.scope - libcontainer container 32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544. Oct 13 05:27:23.400004 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:23.663283 containerd[1625]: time="2025-10-13T05:27:23.663095099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-fz9kt,Uid:0243a395-1b48-4301-9350-f25044be2770,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544\"" Oct 13 05:27:23.665151 containerd[1625]: time="2025-10-13T05:27:23.665110317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:27:24.150148 containerd[1625]: time="2025-10-13T05:27:24.150083673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849c4656db-l2tnk,Uid:ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:24.319674 kubelet[2786]: E1013 05:27:24.319615 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:24.320198 containerd[1625]: time="2025-10-13T05:27:24.320105678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9w82,Uid:518ce9cd-e9e5-4f27-890c-d2fae6ff98e3,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:24.925217 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:47714.service - OpenSSH per-connection server daemon (10.0.0.1:47714). 
Oct 13 05:27:25.093287 sshd[4483]: Accepted publickey for core from 10.0.0.1 port 47714 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:25.103862 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:25.115202 systemd-networkd[1518]: calif20fd18bb21: Link UP Oct 13 05:27:25.115527 systemd-networkd[1518]: calif20fd18bb21: Gained carrier Oct 13 05:27:25.117767 systemd-logind[1600]: New session 8 of user core. Oct 13 05:27:25.122716 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 05:27:25.160029 containerd[1625]: 2025-10-13 05:27:23.437 [INFO][4422] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:27:25.160029 containerd[1625]: 2025-10-13 05:27:24.468 [INFO][4422] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--57bv6-eth0 goldmane-54d579b49d- calico-system ca8ebd82-7860-48a6-ad85-24113129edfb 854 0 2025-10-13 05:26:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-57bv6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif20fd18bb21 [] [] }} ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Namespace="calico-system" Pod="goldmane-54d579b49d-57bv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--57bv6-" Oct 13 05:27:25.160029 containerd[1625]: 2025-10-13 05:27:24.469 [INFO][4422] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Namespace="calico-system" Pod="goldmane-54d579b49d-57bv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" Oct 13 05:27:25.160029 containerd[1625]: 2025-10-13 05:27:24.981 [INFO][4449] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" HandleID="k8s-pod-network.5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Workload="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:24.982 [INFO][4449] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" HandleID="k8s-pod-network.5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Workload="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-57bv6", "timestamp":"2025-10-13 05:27:24.981932055 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:24.982 [INFO][4449] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:24.982 [INFO][4449] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:24.982 [INFO][4449] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:25.013 [INFO][4449] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" host="localhost" Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:25.029 [INFO][4449] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:25.042 [INFO][4449] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:25.057 [INFO][4449] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:25.063 [INFO][4449] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:25.160679 containerd[1625]: 2025-10-13 05:27:25.064 [INFO][4449] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" host="localhost" Oct 13 05:27:25.160912 containerd[1625]: 2025-10-13 05:27:25.069 [INFO][4449] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4 Oct 13 05:27:25.160912 containerd[1625]: 2025-10-13 05:27:25.082 [INFO][4449] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" host="localhost" Oct 13 05:27:25.160912 containerd[1625]: 2025-10-13 05:27:25.098 [INFO][4449] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" host="localhost" Oct 13 05:27:25.160912 containerd[1625]: 2025-10-13 05:27:25.099 [INFO][4449] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" host="localhost" Oct 13 05:27:25.160912 containerd[1625]: 2025-10-13 05:27:25.099 [INFO][4449] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:27:25.160912 containerd[1625]: 2025-10-13 05:27:25.099 [INFO][4449] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" HandleID="k8s-pod-network.5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Workload="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" Oct 13 05:27:25.161071 containerd[1625]: 2025-10-13 05:27:25.105 [INFO][4422] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Namespace="calico-system" Pod="goldmane-54d579b49d-57bv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--57bv6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ca8ebd82-7860-48a6-ad85-24113129edfb", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-57bv6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif20fd18bb21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:25.161071 containerd[1625]: 2025-10-13 05:27:25.106 [INFO][4422] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Namespace="calico-system" Pod="goldmane-54d579b49d-57bv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" Oct 13 05:27:25.161165 containerd[1625]: 2025-10-13 05:27:25.106 [INFO][4422] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif20fd18bb21 ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Namespace="calico-system" Pod="goldmane-54d579b49d-57bv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" Oct 13 05:27:25.161165 containerd[1625]: 2025-10-13 05:27:25.119 [INFO][4422] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Namespace="calico-system" Pod="goldmane-54d579b49d-57bv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" Oct 13 05:27:25.161237 containerd[1625]: 2025-10-13 05:27:25.121 [INFO][4422] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Namespace="calico-system" Pod="goldmane-54d579b49d-57bv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--57bv6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ca8ebd82-7860-48a6-ad85-24113129edfb", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4", Pod:"goldmane-54d579b49d-57bv6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif20fd18bb21", MAC:"16:b0:0f:c6:13:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:25.161288 containerd[1625]: 2025-10-13 05:27:25.153 [INFO][4422] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" Namespace="calico-system" Pod="goldmane-54d579b49d-57bv6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--57bv6-eth0" Oct 13 05:27:25.218841 systemd-networkd[1518]: calica8fb6b00c3: Link UP Oct 13 05:27:25.222483 systemd-networkd[1518]: calica8fb6b00c3: Gained carrier Oct 13 05:27:25.297225 containerd[1625]: 2025-10-13 05:27:25.010 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--849c4656db--l2tnk-eth0 whisker-849c4656db- calico-system ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0 974 0 2025-10-13 05:27:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:849c4656db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-849c4656db-l2tnk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calica8fb6b00c3 [] [] }} ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Namespace="calico-system" Pod="whisker-849c4656db-l2tnk" WorkloadEndpoint="localhost-k8s-whisker--849c4656db--l2tnk-" Oct 13 05:27:25.297225 containerd[1625]: 2025-10-13 05:27:25.011 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Namespace="calico-system" Pod="whisker-849c4656db-l2tnk" WorkloadEndpoint="localhost-k8s-whisker--849c4656db--l2tnk-eth0" Oct 13 05:27:25.297225 containerd[1625]: 2025-10-13 05:27:25.073 [INFO][4502] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" HandleID="k8s-pod-network.e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" 
Workload="localhost-k8s-whisker--849c4656db--l2tnk-eth0" Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.073 [INFO][4502] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" HandleID="k8s-pod-network.e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Workload="localhost-k8s-whisker--849c4656db--l2tnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000518240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-849c4656db-l2tnk", "timestamp":"2025-10-13 05:27:25.07316723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.073 [INFO][4502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.099 [INFO][4502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.099 [INFO][4502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.123 [INFO][4502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" host="localhost" Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.139 [INFO][4502] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.161 [INFO][4502] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.168 [INFO][4502] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.172 [INFO][4502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:25.297824 containerd[1625]: 2025-10-13 05:27:25.173 [INFO][4502] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" host="localhost" Oct 13 05:27:25.298611 containerd[1625]: 2025-10-13 05:27:25.178 [INFO][4502] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b Oct 13 05:27:25.298611 containerd[1625]: 2025-10-13 05:27:25.194 [INFO][4502] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" host="localhost" Oct 13 05:27:25.298611 containerd[1625]: 2025-10-13 05:27:25.203 [INFO][4502] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" host="localhost" Oct 13 05:27:25.298611 containerd[1625]: 2025-10-13 05:27:25.203 [INFO][4502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" host="localhost" Oct 13 05:27:25.298611 containerd[1625]: 2025-10-13 05:27:25.203 [INFO][4502] ipam/ipam_plugin.go 374: 
Released host-wide IPAM lock. Oct 13 05:27:25.298611 containerd[1625]: 2025-10-13 05:27:25.203 [INFO][4502] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" HandleID="k8s-pod-network.e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Workload="localhost-k8s-whisker--849c4656db--l2tnk-eth0" Oct 13 05:27:25.299089 containerd[1625]: 2025-10-13 05:27:25.211 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Namespace="calico-system" Pod="whisker-849c4656db-l2tnk" WorkloadEndpoint="localhost-k8s-whisker--849c4656db--l2tnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--849c4656db--l2tnk-eth0", GenerateName:"whisker-849c4656db-", Namespace:"calico-system", SelfLink:"", UID:"ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"849c4656db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-849c4656db-l2tnk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica8fb6b00c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:25.299089 containerd[1625]: 2025-10-13 05:27:25.212 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Namespace="calico-system" Pod="whisker-849c4656db-l2tnk" WorkloadEndpoint="localhost-k8s-whisker--849c4656db--l2tnk-eth0" Oct 13 05:27:25.299182 containerd[1625]: 2025-10-13 05:27:25.212 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica8fb6b00c3 ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Namespace="calico-system" Pod="whisker-849c4656db-l2tnk" WorkloadEndpoint="localhost-k8s-whisker--849c4656db--l2tnk-eth0" Oct 13 05:27:25.299182 containerd[1625]: 2025-10-13 05:27:25.226 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Namespace="calico-system" Pod="whisker-849c4656db-l2tnk" WorkloadEndpoint="localhost-k8s-whisker--849c4656db--l2tnk-eth0" Oct 13 05:27:25.299338 containerd[1625]: 2025-10-13 05:27:25.234 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Namespace="calico-system" Pod="whisker-849c4656db-l2tnk" WorkloadEndpoint="localhost-k8s-whisker--849c4656db--l2tnk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--849c4656db--l2tnk-eth0", GenerateName:"whisker-849c4656db-", Namespace:"calico-system", SelfLink:"", UID:"ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 27, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"849c4656db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b", Pod:"whisker-849c4656db-l2tnk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calica8fb6b00c3", MAC:"ae:3d:4d:5f:5e:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:25.299409 containerd[1625]: 2025-10-13 05:27:25.293 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" Namespace="calico-system" Pod="whisker-849c4656db-l2tnk" WorkloadEndpoint="localhost-k8s-whisker--849c4656db--l2tnk-eth0" Oct 13 05:27:25.321098 containerd[1625]: time="2025-10-13T05:27:25.321001956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f997d669-22b4j,Uid:496214e4-ed29-4a93-b78d-1c399cc61c6b,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:25.321646 kubelet[2786]: E1013 05:27:25.321556 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:25.322940 containerd[1625]: time="2025-10-13T05:27:25.321515082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-58p6w,Uid:581c5944-bb01-4f41-be01-d38804d92d32,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:27:25.322940 containerd[1625]: time="2025-10-13T05:27:25.322713174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n8c44,Uid:a0feb9a5-18ab-4324-b59c-f36e6436fc38,Namespace:kube-system,Attempt:0,}" Oct 13 05:27:25.400540 systemd-networkd[1518]: vxlan.calico: Link UP Oct 13 05:27:25.400551 systemd-networkd[1518]: vxlan.calico: Gained carrier Oct 13 05:27:25.515019 sshd[4521]: Connection closed by 10.0.0.1 port 47714 Oct 13 05:27:25.516502 sshd-session[4483]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:25.528529 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:47714.service: Deactivated successfully. Oct 13 05:27:25.531274 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:27:25.532508 systemd-logind[1600]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:27:25.534548 systemd-logind[1600]: Removed session 8. 
Oct 13 05:27:25.583678 systemd-networkd[1518]: cali8bcc412f426: Link UP Oct 13 05:27:25.584982 systemd-networkd[1518]: cali8bcc412f426: Gained carrier Oct 13 05:27:25.668157 containerd[1625]: 2025-10-13 05:27:25.044 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--h9w82-eth0 coredns-674b8bbfcf- kube-system 518ce9cd-e9e5-4f27-890c-d2fae6ff98e3 846 0 2025-10-13 05:26:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-h9w82 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8bcc412f426 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9w82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9w82-" Oct 13 05:27:25.668157 containerd[1625]: 2025-10-13 05:27:25.044 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9w82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" Oct 13 05:27:25.668157 containerd[1625]: 2025-10-13 05:27:25.098 [INFO][4509] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" HandleID="k8s-pod-network.ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Workload="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.098 [INFO][4509] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" HandleID="k8s-pod-network.ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Workload="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-h9w82", "timestamp":"2025-10-13 05:27:25.098094344 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.098 [INFO][4509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.203 [INFO][4509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.203 [INFO][4509] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.235 [INFO][4509] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" host="localhost" Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.294 [INFO][4509] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.489 [INFO][4509] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.512 [INFO][4509] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.515 [INFO][4509] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:25.668440 containerd[1625]: 2025-10-13 05:27:25.516 [INFO][4509] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" host="localhost" Oct 13 05:27:25.668662 containerd[1625]: 2025-10-13 05:27:25.517 [INFO][4509] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa Oct 13 05:27:25.668662 containerd[1625]: 2025-10-13 05:27:25.553 [INFO][4509] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" host="localhost" Oct 13 05:27:25.668662 containerd[1625]: 2025-10-13 05:27:25.576 [INFO][4509] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" host="localhost" Oct 13 05:27:25.668662 containerd[1625]: 2025-10-13 05:27:25.576 [INFO][4509] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" host="localhost" Oct 13 05:27:25.668662 containerd[1625]: 2025-10-13 05:27:25.576 [INFO][4509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
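Requests [4449], [4502] and [4509] each log "About to acquire host-wide IPAM lock" and only proceed once the previous one logs "Released" (both [4502] and [4509] pick up at 05:27:25.203, immediately after the prior release). That is ordinary mutual exclusion; a minimal sketch of the pattern, with names that are illustrative rather than Calico's own:

package main

import (
	"fmt"
	"sync"
)

type hostIPAM struct {
	mu   sync.Mutex // the "host-wide IPAM lock"
	next int        // next offset to hand out within the /26 block
}

func (h *hostIPAM) assign(pod string) string {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.88.%d/26", 128+h.next)
	h.next++
	return pod + " -> " + ip
}

func main() {
	h := &hostIPAM{next: 2} // assume .128/.129 were consumed before this excerpt
	var wg sync.WaitGroup
	// Which pod wins which address depends on lock acquisition order, exactly
	// as in the log; only mutual exclusion is guaranteed here.
	for _, pod := range []string{"goldmane-54d579b49d-57bv6", "whisker-849c4656db-l2tnk", "coredns-674b8bbfcf-h9w82"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); fmt.Println(h.assign(p)) }(pod)
	}
	wg.Wait()
}
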
Oct 13 05:27:25.668662 containerd[1625]: 2025-10-13 05:27:25.576 [INFO][4509] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" HandleID="k8s-pod-network.ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Workload="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" Oct 13 05:27:25.668788 containerd[1625]: 2025-10-13 05:27:25.580 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9w82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h9w82-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"518ce9cd-e9e5-4f27-890c-d2fae6ff98e3", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-h9w82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bcc412f426", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:25.668878 containerd[1625]: 2025-10-13 05:27:25.581 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9w82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" Oct 13 05:27:25.668878 containerd[1625]: 2025-10-13 05:27:25.581 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bcc412f426 ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9w82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" Oct 13 05:27:25.668878 containerd[1625]: 2025-10-13 05:27:25.583 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9w82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" Oct 13 05:27:25.668995 
containerd[1625]: 2025-10-13 05:27:25.586 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9w82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h9w82-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"518ce9cd-e9e5-4f27-890c-d2fae6ff98e3", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa", Pod:"coredns-674b8bbfcf-h9w82", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bcc412f426", MAC:"96:3f:ae:00:f6:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:25.668995 containerd[1625]: 2025-10-13 05:27:25.664 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" Namespace="kube-system" Pod="coredns-674b8bbfcf-h9w82" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h9w82-eth0" Oct 13 05:27:25.948418 containerd[1625]: time="2025-10-13T05:27:25.948216523Z" level=info msg="connecting to shim 5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4" address="unix:///run/containerd/s/f4a77e284941bb25018fe725c5e637c9e0dd01aa991e556b54c1d5863b33a6d0" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:26.029545 systemd[1]: Started cri-containerd-5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4.scope - libcontainer container 5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4. 
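The WorkloadEndpoint dumps above print port numbers in hex (Port:0x35, Port:0x23c1). A two-line check confirms these are the usual CoreDNS ports, 53 for dns/dns-tcp and 9153 for metrics:

package main

import "fmt"

func main() {
	ports := map[string]int{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Println(name, p) // dns 53, dns-tcp 53, metrics 9153
	}
}
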
Oct 13 05:27:26.044723 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:26.437561 systemd-networkd[1518]: calif20fd18bb21: Gained IPv6LL Oct 13 05:27:26.566402 systemd-networkd[1518]: calica8fb6b00c3: Gained IPv6LL Oct 13 05:27:26.757631 systemd-networkd[1518]: vxlan.calico: Gained IPv6LL Oct 13 05:27:26.758407 systemd-networkd[1518]: cali8bcc412f426: Gained IPv6LL Oct 13 05:27:26.776744 containerd[1625]: time="2025-10-13T05:27:26.776684110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-57bv6,Uid:ca8ebd82-7860-48a6-ad85-24113129edfb,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4\"" Oct 13 05:27:26.819668 systemd-networkd[1518]: cali6ce2b2d1a57: Link UP Oct 13 05:27:26.820710 systemd-networkd[1518]: cali6ce2b2d1a57: Gained carrier Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.474 [INFO][4683] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0 calico-kube-controllers-75f997d669- calico-system 496214e4-ed29-4a93-b78d-1c399cc61c6b 853 0 2025-10-13 05:26:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75f997d669 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75f997d669-22b4j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6ce2b2d1a57 [] [] }} ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Namespace="calico-system" Pod="calico-kube-controllers-75f997d669-22b4j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.474 [INFO][4683] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Namespace="calico-system" Pod="calico-kube-controllers-75f997d669-22b4j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.504 [INFO][4714] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" HandleID="k8s-pod-network.739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Workload="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.504 [INFO][4714] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" HandleID="k8s-pod-network.739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Workload="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75f997d669-22b4j", "timestamp":"2025-10-13 05:27:26.504335061 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.504 [INFO][4714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.504 [INFO][4714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.504 [INFO][4714] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.511 [INFO][4714] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" host="localhost" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.515 [INFO][4714] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.520 [INFO][4714] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.523 [INFO][4714] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.526 [INFO][4714] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.526 [INFO][4714] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" host="localhost" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.529 [INFO][4714] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.619 [INFO][4714] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" host="localhost" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.813 [INFO][4714] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" host="localhost" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.813 [INFO][4714] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" host="localhost" Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.813 [INFO][4714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
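The systemd-networkd entries in this excerpt report the host-side veths (calif20fd18bb21, calica8fb6b00c3, cali8bcc412f426, vxlan.calico) going "Link UP", "Gained carrier" and later "Gained IPv6LL". A stdlib-only sketch for inspecting one of those interfaces on the node itself; the interface name is copied from the log, and the program only observes state, it does not configure anything:

package main

import (
	"fmt"
	"net"
)

func main() {
	ifi, err := net.InterfaceByName("calif20fd18bb21")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("flags:", ifi.Flags) // expect "up" once networkd reports carrier
	addrs, _ := ifi.Addrs()
	for _, a := range addrs {
		fmt.Println("addr:", a) // the link-local address shows up after "Gained IPv6LL"
	}
}
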
Oct 13 05:27:27.104883 containerd[1625]: 2025-10-13 05:27:26.813 [INFO][4714] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" HandleID="k8s-pod-network.739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Workload="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" Oct 13 05:27:27.106075 containerd[1625]: 2025-10-13 05:27:26.816 [INFO][4683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Namespace="calico-system" Pod="calico-kube-controllers-75f997d669-22b4j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0", GenerateName:"calico-kube-controllers-75f997d669-", Namespace:"calico-system", SelfLink:"", UID:"496214e4-ed29-4a93-b78d-1c399cc61c6b", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f997d669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75f997d669-22b4j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ce2b2d1a57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:27.106075 containerd[1625]: 2025-10-13 05:27:26.816 [INFO][4683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Namespace="calico-system" Pod="calico-kube-controllers-75f997d669-22b4j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" Oct 13 05:27:27.106075 containerd[1625]: 2025-10-13 05:27:26.816 [INFO][4683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ce2b2d1a57 ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Namespace="calico-system" Pod="calico-kube-controllers-75f997d669-22b4j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" Oct 13 05:27:27.106075 containerd[1625]: 2025-10-13 05:27:26.821 [INFO][4683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Namespace="calico-system" Pod="calico-kube-controllers-75f997d669-22b4j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" Oct 13 05:27:27.106075 containerd[1625]: 2025-10-13 05:27:26.821 [INFO][4683] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Namespace="calico-system" Pod="calico-kube-controllers-75f997d669-22b4j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0", GenerateName:"calico-kube-controllers-75f997d669-", Namespace:"calico-system", SelfLink:"", UID:"496214e4-ed29-4a93-b78d-1c399cc61c6b", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f997d669", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea", Pod:"calico-kube-controllers-75f997d669-22b4j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ce2b2d1a57", MAC:"52:e9:67:f3:1f:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:27.106075 containerd[1625]: 2025-10-13 05:27:27.101 [INFO][4683] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" Namespace="calico-system" Pod="calico-kube-controllers-75f997d669-22b4j" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75f997d669--22b4j-eth0" Oct 13 05:27:27.870664 systemd-networkd[1518]: cali730c056d2f1: Link UP Oct 13 05:27:27.873007 systemd-networkd[1518]: cali730c056d2f1: Gained carrier Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:26.474 [INFO][4698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--n8c44-eth0 coredns-674b8bbfcf- kube-system a0feb9a5-18ab-4324-b59c-f36e6436fc38 852 0 2025-10-13 05:26:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-n8c44 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali730c056d2f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Namespace="kube-system" Pod="coredns-674b8bbfcf-n8c44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n8c44-" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:26.474 [INFO][4698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-n8c44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:26.533 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" HandleID="k8s-pod-network.ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Workload="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:26.533 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" HandleID="k8s-pod-network.ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Workload="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-n8c44", "timestamp":"2025-10-13 05:27:26.533658848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:26.534 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:26.813 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:26.814 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.097 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" host="localhost" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.143 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.150 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.157 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.454 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.455 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" host="localhost" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.458 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684 Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.474 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" host="localhost" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.861 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" host="localhost" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.861 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" host="localhost" Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.861 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:27:28.185621 containerd[1625]: 2025-10-13 05:27:27.861 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" HandleID="k8s-pod-network.ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Workload="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" Oct 13 05:27:28.187964 containerd[1625]: 2025-10-13 05:27:27.864 [INFO][4698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Namespace="kube-system" Pod="coredns-674b8bbfcf-n8c44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n8c44-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a0feb9a5-18ab-4324-b59c-f36e6436fc38", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-n8c44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali730c056d2f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:28.187964 containerd[1625]: 2025-10-13 05:27:27.864 [INFO][4698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Namespace="kube-system" Pod="coredns-674b8bbfcf-n8c44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" Oct 13 05:27:28.187964 containerd[1625]: 2025-10-13 05:27:27.864 [INFO][4698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali730c056d2f1 
ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Namespace="kube-system" Pod="coredns-674b8bbfcf-n8c44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" Oct 13 05:27:28.187964 containerd[1625]: 2025-10-13 05:27:27.873 [INFO][4698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Namespace="kube-system" Pod="coredns-674b8bbfcf-n8c44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" Oct 13 05:27:28.187964 containerd[1625]: 2025-10-13 05:27:27.874 [INFO][4698] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Namespace="kube-system" Pod="coredns-674b8bbfcf-n8c44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n8c44-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a0feb9a5-18ab-4324-b59c-f36e6436fc38", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684", Pod:"coredns-674b8bbfcf-n8c44", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali730c056d2f1", MAC:"26:f8:a8:ee:8a:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:28.187964 containerd[1625]: 2025-10-13 05:27:28.181 [INFO][4698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" Namespace="kube-system" Pod="coredns-674b8bbfcf-n8c44" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n8c44-eth0" Oct 13 05:27:28.484955 systemd-networkd[1518]: cali86860e21d54: Link UP Oct 13 05:27:28.486425 systemd-networkd[1518]: cali86860e21d54: Gained carrier Oct 13 05:27:28.677605 systemd-networkd[1518]: cali6ce2b2d1a57: Gained IPv6LL Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:26.622 [INFO][4729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0 
calico-apiserver-5db6d8fb56- calico-apiserver 581c5944-bb01-4f41-be01-d38804d92d32 848 0 2025-10-13 05:26:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5db6d8fb56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5db6d8fb56-58p6w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali86860e21d54 [] [] }} ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-58p6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:26.622 [INFO][4729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-58p6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:27.125 [INFO][4745] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" HandleID="k8s-pod-network.9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Workload="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:27.126 [INFO][4745] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" HandleID="k8s-pod-network.9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Workload="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037d4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5db6d8fb56-58p6w", "timestamp":"2025-10-13 05:27:27.12584037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:27.126 [INFO][4745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:27.861 [INFO][4745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:27.861 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.179 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" host="localhost" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.189 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.196 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.199 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.202 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.202 [INFO][4745] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" host="localhost" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.203 [INFO][4745] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.261 [INFO][4745] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" host="localhost" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.478 [INFO][4745] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" host="localhost" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.478 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" host="localhost" Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.479 [INFO][4745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:27:28.817415 containerd[1625]: 2025-10-13 05:27:28.479 [INFO][4745] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" HandleID="k8s-pod-network.9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Workload="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" Oct 13 05:27:28.818034 containerd[1625]: 2025-10-13 05:27:28.482 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-58p6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0", GenerateName:"calico-apiserver-5db6d8fb56-", Namespace:"calico-apiserver", SelfLink:"", UID:"581c5944-bb01-4f41-be01-d38804d92d32", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db6d8fb56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5db6d8fb56-58p6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86860e21d54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:28.818034 containerd[1625]: 2025-10-13 05:27:28.482 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-58p6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" Oct 13 05:27:28.818034 containerd[1625]: 2025-10-13 05:27:28.482 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86860e21d54 ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-58p6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" Oct 13 05:27:28.818034 containerd[1625]: 2025-10-13 05:27:28.485 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-58p6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" Oct 13 05:27:28.818034 containerd[1625]: 2025-10-13 05:27:28.486 [INFO][4729] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-58p6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0", GenerateName:"calico-apiserver-5db6d8fb56-", Namespace:"calico-apiserver", SelfLink:"", UID:"581c5944-bb01-4f41-be01-d38804d92d32", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db6d8fb56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b", Pod:"calico-apiserver-5db6d8fb56-58p6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86860e21d54", MAC:"a6:d4:f8:72:1c:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:28.818034 containerd[1625]: 2025-10-13 05:27:28.813 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" Namespace="calico-apiserver" Pod="calico-apiserver-5db6d8fb56-58p6w" WorkloadEndpoint="localhost-k8s-calico--apiserver--5db6d8fb56--58p6w-eth0" Oct 13 05:27:28.896222 containerd[1625]: time="2025-10-13T05:27:28.896169856Z" level=info msg="connecting to shim ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa" address="unix:///run/containerd/s/2017bf8735dc6a60504fddc60e6f29e6d24f1f6cb0090e65e2002766b400da47" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:28.924539 systemd[1]: Started cri-containerd-ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa.scope - libcontainer container ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa. Oct 13 05:27:28.937367 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:28.975186 containerd[1625]: time="2025-10-13T05:27:28.975112070Z" level=info msg="connecting to shim e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b" address="unix:///run/containerd/s/26550d6b77855dbaf0460a7b99e77d2219e7fab6337b94fca44bf587d02d4b81" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:28.999506 systemd[1]: Started cri-containerd-e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b.scope - libcontainer container e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b. 
Oct 13 05:27:29.012612 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:29.431537 containerd[1625]: time="2025-10-13T05:27:29.431471646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h9w82,Uid:518ce9cd-e9e5-4f27-890c-d2fae6ff98e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa\"" Oct 13 05:27:29.432436 kubelet[2786]: E1013 05:27:29.432406 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:29.445591 systemd-networkd[1518]: cali730c056d2f1: Gained IPv6LL Oct 13 05:27:29.547519 containerd[1625]: time="2025-10-13T05:27:29.547461814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849c4656db-l2tnk,Uid:ab27cf4e-2db0-4b1d-bb1e-b60f9e7715f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b\"" Oct 13 05:27:29.556793 containerd[1625]: time="2025-10-13T05:27:29.556737184Z" level=info msg="CreateContainer within sandbox \"ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:27:30.182122 containerd[1625]: time="2025-10-13T05:27:30.182053933Z" level=info msg="connecting to shim 739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea" address="unix:///run/containerd/s/e2b4f17b5fdc26b6f12cfaff0738b84afec0e4fe712436d9af4968ab4a11b4cb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:30.207513 systemd[1]: Started cri-containerd-739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea.scope - libcontainer container 739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea. Oct 13 05:27:30.222320 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:30.280994 containerd[1625]: time="2025-10-13T05:27:30.280915467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f997d669-22b4j,Uid:496214e4-ed29-4a93-b78d-1c399cc61c6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea\"" Oct 13 05:27:30.301591 containerd[1625]: time="2025-10-13T05:27:30.301487121Z" level=info msg="connecting to shim ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684" address="unix:///run/containerd/s/dad407e5de7c7fa57c9a23f467bb9bf8ac3849d36612b9d5880c4f66f868e635" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:30.307389 containerd[1625]: time="2025-10-13T05:27:30.306884292Z" level=info msg="Container 30873fefd8e57b196fb0f8a570edefabb9d4ae565cf6528e6485ab8e6f09777c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:30.307699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3520574936.mount: Deactivated successfully. 
Oct 13 05:27:30.315950 containerd[1625]: time="2025-10-13T05:27:30.315888694Z" level=info msg="connecting to shim 9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b" address="unix:///run/containerd/s/216601163b99ab31a83d55bb1e06bab5d16829940e92986862f196b286abb2cb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:30.327264 containerd[1625]: time="2025-10-13T05:27:30.326570390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckpkj,Uid:7698d12a-0689-4361-88b4-77840a78376a,Namespace:calico-system,Attempt:0,}" Oct 13 05:27:30.330795 containerd[1625]: time="2025-10-13T05:27:30.330740930Z" level=info msg="CreateContainer within sandbox \"ec20bf2105f5ada66bfc59d6072e0935d973ce56fab9b620eafda7f38bcc9efa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30873fefd8e57b196fb0f8a570edefabb9d4ae565cf6528e6485ab8e6f09777c\"" Oct 13 05:27:30.331633 containerd[1625]: time="2025-10-13T05:27:30.331579507Z" level=info msg="StartContainer for \"30873fefd8e57b196fb0f8a570edefabb9d4ae565cf6528e6485ab8e6f09777c\"" Oct 13 05:27:30.333463 containerd[1625]: time="2025-10-13T05:27:30.333327115Z" level=info msg="connecting to shim 30873fefd8e57b196fb0f8a570edefabb9d4ae565cf6528e6485ab8e6f09777c" address="unix:///run/containerd/s/2017bf8735dc6a60504fddc60e6f29e6d24f1f6cb0090e65e2002766b400da47" protocol=ttrpc version=3 Oct 13 05:27:30.362997 systemd[1]: Started cri-containerd-9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b.scope - libcontainer container 9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b. Oct 13 05:27:30.373788 systemd[1]: Started cri-containerd-30873fefd8e57b196fb0f8a570edefabb9d4ae565cf6528e6485ab8e6f09777c.scope - libcontainer container 30873fefd8e57b196fb0f8a570edefabb9d4ae565cf6528e6485ab8e6f09777c. Oct 13 05:27:30.380469 systemd[1]: Started cri-containerd-ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684.scope - libcontainer container ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684. 
Oct 13 05:27:30.406453 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:30.406848 systemd-networkd[1518]: cali86860e21d54: Gained IPv6LL Oct 13 05:27:30.420853 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:30.471684 containerd[1625]: time="2025-10-13T05:27:30.471233767Z" level=info msg="StartContainer for \"30873fefd8e57b196fb0f8a570edefabb9d4ae565cf6528e6485ab8e6f09777c\" returns successfully" Oct 13 05:27:30.485206 containerd[1625]: time="2025-10-13T05:27:30.485031983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n8c44,Uid:a0feb9a5-18ab-4324-b59c-f36e6436fc38,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684\"" Oct 13 05:27:30.487065 kubelet[2786]: E1013 05:27:30.486099 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:30.499711 containerd[1625]: time="2025-10-13T05:27:30.499628429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db6d8fb56-58p6w,Uid:581c5944-bb01-4f41-be01-d38804d92d32,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b\"" Oct 13 05:27:30.502473 containerd[1625]: time="2025-10-13T05:27:30.501340209Z" level=info msg="CreateContainer within sandbox \"ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:27:30.533805 containerd[1625]: time="2025-10-13T05:27:30.533754083Z" level=info msg="Container 96d3f47087e67842fcbcca99d0c782be4b48af96aaba7c8cb7170f6b5a18f73c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:30.535864 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:47114.service - OpenSSH per-connection server daemon (10.0.0.1:47114). Oct 13 05:27:30.547078 containerd[1625]: time="2025-10-13T05:27:30.547030128Z" level=info msg="CreateContainer within sandbox \"ec08eb3fefbce5484ee8a13253870fd40514f438a6dd55f468ed58a3853e2684\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96d3f47087e67842fcbcca99d0c782be4b48af96aaba7c8cb7170f6b5a18f73c\"" Oct 13 05:27:30.550470 containerd[1625]: time="2025-10-13T05:27:30.550411667Z" level=info msg="StartContainer for \"96d3f47087e67842fcbcca99d0c782be4b48af96aaba7c8cb7170f6b5a18f73c\"" Oct 13 05:27:30.553023 containerd[1625]: time="2025-10-13T05:27:30.552752352Z" level=info msg="connecting to shim 96d3f47087e67842fcbcca99d0c782be4b48af96aaba7c8cb7170f6b5a18f73c" address="unix:///run/containerd/s/dad407e5de7c7fa57c9a23f467bb9bf8ac3849d36612b9d5880c4f66f868e635" protocol=ttrpc version=3 Oct 13 05:27:30.598762 systemd[1]: Started cri-containerd-96d3f47087e67842fcbcca99d0c782be4b48af96aaba7c8cb7170f6b5a18f73c.scope - libcontainer container 96d3f47087e67842fcbcca99d0c782be4b48af96aaba7c8cb7170f6b5a18f73c. 
Oct 13 05:27:30.615655 systemd-networkd[1518]: cali08208f49ae7: Link UP Oct 13 05:27:30.615909 systemd-networkd[1518]: cali08208f49ae7: Gained carrier Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.429 [INFO][4989] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ckpkj-eth0 csi-node-driver- calico-system 7698d12a-0689-4361-88b4-77840a78376a 726 0 2025-10-13 05:26:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ckpkj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali08208f49ae7 [] [] }} ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Namespace="calico-system" Pod="csi-node-driver-ckpkj" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckpkj-" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.429 [INFO][4989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Namespace="calico-system" Pod="csi-node-driver-ckpkj" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckpkj-eth0" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.499 [INFO][5036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" HandleID="k8s-pod-network.4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Workload="localhost-k8s-csi--node--driver--ckpkj-eth0" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.502 [INFO][5036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" HandleID="k8s-pod-network.4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Workload="localhost-k8s-csi--node--driver--ckpkj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034c0a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ckpkj", "timestamp":"2025-10-13 05:27:30.499246558 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.503 [INFO][5036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.503 [INFO][5036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.503 [INFO][5036] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.515 [INFO][5036] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" host="localhost" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.526 [INFO][5036] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.538 [INFO][5036] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.541 [INFO][5036] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.550 [INFO][5036] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.550 [INFO][5036] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" host="localhost" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.555 [INFO][5036] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.572 [INFO][5036] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" host="localhost" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.597 [INFO][5036] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" host="localhost" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.597 [INFO][5036] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" host="localhost" Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.597 [INFO][5036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:27:30.653775 containerd[1625]: 2025-10-13 05:27:30.597 [INFO][5036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" HandleID="k8s-pod-network.4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Workload="localhost-k8s-csi--node--driver--ckpkj-eth0" Oct 13 05:27:30.654539 containerd[1625]: 2025-10-13 05:27:30.608 [INFO][4989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Namespace="calico-system" Pod="csi-node-driver-ckpkj" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckpkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ckpkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7698d12a-0689-4361-88b4-77840a78376a", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ckpkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08208f49ae7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:30.654539 containerd[1625]: 2025-10-13 05:27:30.609 [INFO][4989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Namespace="calico-system" Pod="csi-node-driver-ckpkj" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckpkj-eth0" Oct 13 05:27:30.654539 containerd[1625]: 2025-10-13 05:27:30.609 [INFO][4989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08208f49ae7 ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Namespace="calico-system" Pod="csi-node-driver-ckpkj" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckpkj-eth0" Oct 13 05:27:30.654539 containerd[1625]: 2025-10-13 05:27:30.616 [INFO][4989] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Namespace="calico-system" Pod="csi-node-driver-ckpkj" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckpkj-eth0" Oct 13 05:27:30.654539 containerd[1625]: 2025-10-13 05:27:30.617 [INFO][4989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Namespace="calico-system" Pod="csi-node-driver-ckpkj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ckpkj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ckpkj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7698d12a-0689-4361-88b4-77840a78376a", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 26, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e", Pod:"csi-node-driver-ckpkj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08208f49ae7", MAC:"b6:4c:df:4f:41:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:27:30.654539 containerd[1625]: 2025-10-13 05:27:30.641 [INFO][4989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" Namespace="calico-system" Pod="csi-node-driver-ckpkj" WorkloadEndpoint="localhost-k8s-csi--node--driver--ckpkj-eth0" Oct 13 05:27:30.665656 sshd[5064]: Accepted publickey for core from 10.0.0.1 port 47114 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:30.668827 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:30.683731 systemd-logind[1600]: New session 9 of user core. Oct 13 05:27:30.692085 containerd[1625]: time="2025-10-13T05:27:30.692033237Z" level=info msg="StartContainer for \"96d3f47087e67842fcbcca99d0c782be4b48af96aaba7c8cb7170f6b5a18f73c\" returns successfully" Oct 13 05:27:30.707724 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 05:27:30.730097 containerd[1625]: time="2025-10-13T05:27:30.729734560Z" level=info msg="connecting to shim 4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e" address="unix:///run/containerd/s/2421a57bb9efa2be4cb2cab97e5660ba4ddfe8ea4732b41529021fd8f76551fb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:27:30.778840 systemd[1]: Started cri-containerd-4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e.scope - libcontainer container 4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e. 
Oct 13 05:27:30.797672 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:27:30.951279 containerd[1625]: time="2025-10-13T05:27:30.951129882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckpkj,Uid:7698d12a-0689-4361-88b4-77840a78376a,Namespace:calico-system,Attempt:0,} returns sandbox id \"4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e\"" Oct 13 05:27:30.989453 sshd[5118]: Connection closed by 10.0.0.1 port 47114 Oct 13 05:27:30.989800 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:30.995680 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:47114.service: Deactivated successfully. Oct 13 05:27:30.998193 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:27:30.999270 systemd-logind[1600]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:27:31.000956 systemd-logind[1600]: Removed session 9. Oct 13 05:27:31.167290 kubelet[2786]: E1013 05:27:31.167238 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:31.181394 kubelet[2786]: E1013 05:27:31.181258 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:31.203797 kubelet[2786]: I1013 05:27:31.203717 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h9w82" podStartSLOduration=65.203697235 podStartE2EDuration="1m5.203697235s" podCreationTimestamp="2025-10-13 05:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:27:31.182844835 +0000 UTC m=+70.973861134" watchObservedRunningTime="2025-10-13 05:27:31.203697235 +0000 UTC m=+70.994713524" Oct 13 05:27:31.234261 kubelet[2786]: I1013 05:27:31.233803 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n8c44" podStartSLOduration=65.233781553 podStartE2EDuration="1m5.233781553s" podCreationTimestamp="2025-10-13 05:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:27:31.220722607 +0000 UTC m=+71.011738896" watchObservedRunningTime="2025-10-13 05:27:31.233781553 +0000 UTC m=+71.024797832" Oct 13 05:27:31.813592 systemd-networkd[1518]: cali08208f49ae7: Gained IPv6LL Oct 13 05:27:32.183076 kubelet[2786]: E1013 05:27:32.182950 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:32.184706 kubelet[2786]: E1013 05:27:32.184671 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:33.012510 containerd[1625]: time="2025-10-13T05:27:33.012433722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:33.045036 containerd[1625]: time="2025-10-13T05:27:33.044969819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes 
read=47333864" Oct 13 05:27:33.083640 containerd[1625]: time="2025-10-13T05:27:33.083583623Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:33.095968 containerd[1625]: time="2025-10-13T05:27:33.095890916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:33.096751 containerd[1625]: time="2025-10-13T05:27:33.096711495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 9.431563615s" Oct 13 05:27:33.096751 containerd[1625]: time="2025-10-13T05:27:33.096743316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:27:33.098153 containerd[1625]: time="2025-10-13T05:27:33.097929234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:27:33.191072 containerd[1625]: time="2025-10-13T05:27:33.191021810Z" level=info msg="CreateContainer within sandbox \"32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:27:33.191258 kubelet[2786]: E1013 05:27:33.191136 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:33.191258 kubelet[2786]: E1013 05:27:33.191246 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:33.376473 containerd[1625]: time="2025-10-13T05:27:33.376337708Z" level=info msg="Container 99e89a3910d877bda71f4f3254272ec1dd6c2495665c2d18587733fe587f6b64: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:33.510378 containerd[1625]: time="2025-10-13T05:27:33.510298731Z" level=info msg="CreateContainer within sandbox \"32cbe162d5a0a0caa193f0889385ea5b32350dbada6e402a21d9308e89094544\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"99e89a3910d877bda71f4f3254272ec1dd6c2495665c2d18587733fe587f6b64\"" Oct 13 05:27:33.510873 containerd[1625]: time="2025-10-13T05:27:33.510849343Z" level=info msg="StartContainer for \"99e89a3910d877bda71f4f3254272ec1dd6c2495665c2d18587733fe587f6b64\"" Oct 13 05:27:33.511904 containerd[1625]: time="2025-10-13T05:27:33.511877631Z" level=info msg="connecting to shim 99e89a3910d877bda71f4f3254272ec1dd6c2495665c2d18587733fe587f6b64" address="unix:///run/containerd/s/ddd20a69822f363f3c99f8bf5dcd67a3c63f098879a945efed06c11d8d6e7eb6" protocol=ttrpc version=3 Oct 13 05:27:33.533511 systemd[1]: Started cri-containerd-99e89a3910d877bda71f4f3254272ec1dd6c2495665c2d18587733fe587f6b64.scope - libcontainer container 99e89a3910d877bda71f4f3254272ec1dd6c2495665c2d18587733fe587f6b64. 
Oct 13 05:27:33.872519 containerd[1625]: time="2025-10-13T05:27:33.872438391Z" level=info msg="StartContainer for \"99e89a3910d877bda71f4f3254272ec1dd6c2495665c2d18587733fe587f6b64\" returns successfully" Oct 13 05:27:34.201114 kubelet[2786]: I1013 05:27:34.200931 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5db6d8fb56-fz9kt" podStartSLOduration=45.767549542 podStartE2EDuration="55.200325031s" podCreationTimestamp="2025-10-13 05:26:39 +0000 UTC" firstStartedPulling="2025-10-13 05:27:23.6647167 +0000 UTC m=+63.455732989" lastFinishedPulling="2025-10-13 05:27:33.097492189 +0000 UTC m=+72.888508478" observedRunningTime="2025-10-13 05:27:34.199276716 +0000 UTC m=+73.990293015" watchObservedRunningTime="2025-10-13 05:27:34.200325031 +0000 UTC m=+73.991341320" Oct 13 05:27:36.005458 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:47140.service - OpenSSH per-connection server daemon (10.0.0.1:47140). Oct 13 05:27:36.093384 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 47140 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:36.094730 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:36.100203 systemd-logind[1600]: New session 10 of user core. Oct 13 05:27:36.110718 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:27:36.270708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596868162.mount: Deactivated successfully. Oct 13 05:27:36.696911 sshd[5248]: Connection closed by 10.0.0.1 port 47140 Oct 13 05:27:36.697231 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:36.702571 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:47140.service: Deactivated successfully. Oct 13 05:27:36.705334 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:27:36.706406 systemd-logind[1600]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:27:36.709099 systemd-logind[1600]: Removed session 10. 
Oct 13 05:27:39.319804 kubelet[2786]: E1013 05:27:39.319739 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:41.016403 containerd[1625]: time="2025-10-13T05:27:41.016262756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:41.057567 containerd[1625]: time="2025-10-13T05:27:41.038851129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:27:41.120196 containerd[1625]: time="2025-10-13T05:27:41.120125369Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:41.184316 containerd[1625]: time="2025-10-13T05:27:41.184260782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:41.185022 containerd[1625]: time="2025-10-13T05:27:41.184996414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 8.087032963s" Oct 13 05:27:41.185099 containerd[1625]: time="2025-10-13T05:27:41.185025740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:27:41.186551 containerd[1625]: time="2025-10-13T05:27:41.186495822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:27:41.310122 containerd[1625]: time="2025-10-13T05:27:41.309924237Z" level=info msg="CreateContainer within sandbox \"5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:27:41.711550 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:52824.service - OpenSSH per-connection server daemon (10.0.0.1:52824). Oct 13 05:27:41.801131 containerd[1625]: time="2025-10-13T05:27:41.801076139Z" level=info msg="Container c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:41.841749 sshd[5273]: Accepted publickey for core from 10.0.0.1 port 52824 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:41.843704 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:41.849163 systemd-logind[1600]: New session 11 of user core. Oct 13 05:27:41.861861 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:27:42.052674 sshd[5276]: Connection closed by 10.0.0.1 port 52824 Oct 13 05:27:42.053006 sshd-session[5273]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:42.058210 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:52824.service: Deactivated successfully. Oct 13 05:27:42.060561 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:27:42.061561 systemd-logind[1600]: Session 11 logged out. Waiting for processes to exit. 
Oct 13 05:27:42.063165 systemd-logind[1600]: Removed session 11. Oct 13 05:27:42.437446 containerd[1625]: time="2025-10-13T05:27:42.437235752Z" level=info msg="CreateContainer within sandbox \"5f79049520bcd3ce081bb4a3c39772e33b07eefd996e501a12c27ffd81eb39f4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613\"" Oct 13 05:27:42.438386 containerd[1625]: time="2025-10-13T05:27:42.438325329Z" level=info msg="StartContainer for \"c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613\"" Oct 13 05:27:42.440132 containerd[1625]: time="2025-10-13T05:27:42.440099688Z" level=info msg="connecting to shim c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613" address="unix:///run/containerd/s/f4a77e284941bb25018fe725c5e637c9e0dd01aa991e556b54c1d5863b33a6d0" protocol=ttrpc version=3 Oct 13 05:27:42.465510 systemd[1]: Started cri-containerd-c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613.scope - libcontainer container c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613. Oct 13 05:27:43.328933 containerd[1625]: time="2025-10-13T05:27:43.328881931Z" level=info msg="StartContainer for \"c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613\" returns successfully" Oct 13 05:27:43.339715 update_engine[1602]: I20251013 05:27:43.339629 1602 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 13 05:27:43.339715 update_engine[1602]: I20251013 05:27:43.339696 1602 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 13 05:27:43.352572 update_engine[1602]: I20251013 05:27:43.341524 1602 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 13 05:27:43.352572 update_engine[1602]: I20251013 05:27:43.342366 1602 omaha_request_params.cc:62] Current group set to alpha Oct 13 05:27:43.352572 update_engine[1602]: I20251013 05:27:43.342557 1602 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 13 05:27:43.352572 update_engine[1602]: I20251013 05:27:43.342566 1602 update_attempter.cc:643] Scheduling an action processor start. 
Oct 13 05:27:43.352572 update_engine[1602]: I20251013 05:27:43.342585 1602 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 13 05:27:43.352572 update_engine[1602]: I20251013 05:27:43.342631 1602 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 13 05:27:43.352572 update_engine[1602]: I20251013 05:27:43.342691 1602 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 13 05:27:43.352572 update_engine[1602]: I20251013 05:27:43.342700 1602 omaha_request_action.cc:272] Request: Oct 13 05:27:43.352572 update_engine[1602]: Oct 13 05:27:43.352572 update_engine[1602]: Oct 13 05:27:43.352572 update_engine[1602]: Oct 13 05:27:43.352572 update_engine[1602]: Oct 13 05:27:43.352572 update_engine[1602]: Oct 13 05:27:43.352572 update_engine[1602]: Oct 13 05:27:43.352572 update_engine[1602]: Oct 13 05:27:43.352572 update_engine[1602]: Oct 13 05:27:43.352572 update_engine[1602]: I20251013 05:27:43.342706 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 13 05:27:43.352931 locksmithd[1653]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 13 05:27:43.353165 update_engine[1602]: I20251013 05:27:43.352938 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 13 05:27:43.353668 update_engine[1602]: I20251013 05:27:43.353613 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 13 05:27:43.364331 update_engine[1602]: E20251013 05:27:43.364288 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Oct 13 05:27:43.364397 update_engine[1602]: I20251013 05:27:43.364363 1602 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 13 05:27:44.454671 containerd[1625]: time="2025-10-13T05:27:44.454558750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613\" id:\"812812d485112c11b2c197a32b566a3675b19cf346ee3ee9700fe0981dac24b6\" pid:5335 exit_status:1 exited_at:{seconds:1760333264 nanos:453819983}" Oct 13 05:27:45.320247 kubelet[2786]: E1013 05:27:45.320184 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:45.415625 containerd[1625]: time="2025-10-13T05:27:45.415570614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613\" id:\"f19cd8e02b6d88867adfe1b84acbd1f33badba399cd564c92513d96b92278f41\" pid:5365 exit_status:1 exited_at:{seconds:1760333265 nanos:415067927}" Oct 13 05:27:46.357285 containerd[1625]: time="2025-10-13T05:27:46.357190198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:46.358785 containerd[1625]: time="2025-10-13T05:27:46.358757481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:27:46.360710 containerd[1625]: time="2025-10-13T05:27:46.360644010Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:46.363593 containerd[1625]: time="2025-10-13T05:27:46.363511926Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:46.364276 containerd[1625]: time="2025-10-13T05:27:46.364237106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 5.1776974s" Oct 13 05:27:46.364335 containerd[1625]: time="2025-10-13T05:27:46.364288403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:27:46.367226 containerd[1625]: time="2025-10-13T05:27:46.366183117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:27:46.373432 containerd[1625]: time="2025-10-13T05:27:46.373385099Z" level=info msg="CreateContainer within sandbox \"e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:27:46.588336 containerd[1625]: time="2025-10-13T05:27:46.586446755Z" level=info msg="Container 5f7b7db159259f70c1a1296d7fc5d61cd795180e7ad73a50afb30c56f8b00529: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:46.649900 containerd[1625]: time="2025-10-13T05:27:46.649737557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613\" id:\"b0d04dd7e1a82eb92af52040f7ddd02d44b7381968bba9bcbb6ba81f13e80301\" pid:5399 exited_at:{seconds:1760333266 nanos:648990114}" Oct 13 05:27:47.064554 containerd[1625]: time="2025-10-13T05:27:47.063653771Z" level=info msg="CreateContainer within sandbox \"e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5f7b7db159259f70c1a1296d7fc5d61cd795180e7ad73a50afb30c56f8b00529\"" Oct 13 05:27:47.067203 containerd[1625]: time="2025-10-13T05:27:47.066327445Z" level=info msg="StartContainer for \"5f7b7db159259f70c1a1296d7fc5d61cd795180e7ad73a50afb30c56f8b00529\"" Oct 13 05:27:47.068223 containerd[1625]: time="2025-10-13T05:27:47.068168698Z" level=info msg="connecting to shim 5f7b7db159259f70c1a1296d7fc5d61cd795180e7ad73a50afb30c56f8b00529" address="unix:///run/containerd/s/26550d6b77855dbaf0460a7b99e77d2219e7fab6337b94fca44bf587d02d4b81" protocol=ttrpc version=3 Oct 13 05:27:47.072770 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:33752.service - OpenSSH per-connection server daemon (10.0.0.1:33752). Oct 13 05:27:47.101908 systemd[1]: Started cri-containerd-5f7b7db159259f70c1a1296d7fc5d61cd795180e7ad73a50afb30c56f8b00529.scope - libcontainer container 5f7b7db159259f70c1a1296d7fc5d61cd795180e7ad73a50afb30c56f8b00529. Oct 13 05:27:47.159264 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 33752 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:47.161396 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:47.168658 systemd-logind[1600]: New session 12 of user core. Oct 13 05:27:47.174251 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 13 05:27:47.189874 containerd[1625]: time="2025-10-13T05:27:47.189816969Z" level=info msg="StartContainer for \"5f7b7db159259f70c1a1296d7fc5d61cd795180e7ad73a50afb30c56f8b00529\" returns successfully" Oct 13 05:27:47.259142 containerd[1625]: time="2025-10-13T05:27:47.259061384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8\" id:\"80edbfc7d7577b017f237c57aee52825ee6b33bef2ce5e6313e5bc38e601f951\" pid:5450 exited_at:{seconds:1760333267 nanos:258703313}" Oct 13 05:27:47.284846 kubelet[2786]: I1013 05:27:47.284616 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-57bv6" podStartSLOduration=50.892010428 podStartE2EDuration="1m5.284590398s" podCreationTimestamp="2025-10-13 05:26:42 +0000 UTC" firstStartedPulling="2025-10-13 05:27:26.79372104 +0000 UTC m=+66.584737329" lastFinishedPulling="2025-10-13 05:27:41.18630101 +0000 UTC m=+80.977317299" observedRunningTime="2025-10-13 05:27:44.411536913 +0000 UTC m=+84.202553223" watchObservedRunningTime="2025-10-13 05:27:47.284590398 +0000 UTC m=+87.075606687" Oct 13 05:27:47.330664 sshd[5460]: Connection closed by 10.0.0.1 port 33752 Oct 13 05:27:47.330945 sshd-session[5413]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:47.336295 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:33752.service: Deactivated successfully. Oct 13 05:27:47.338774 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:27:47.340169 systemd-logind[1600]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:27:47.341795 systemd-logind[1600]: Removed session 12. Oct 13 05:27:48.320159 kubelet[2786]: E1013 05:27:48.320108 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:50.499429 containerd[1625]: time="2025-10-13T05:27:50.499367208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:50.503532 containerd[1625]: time="2025-10-13T05:27:50.503500522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:27:50.508747 containerd[1625]: time="2025-10-13T05:27:50.508698669Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:50.510742 containerd[1625]: time="2025-10-13T05:27:50.510683781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:50.511282 containerd[1625]: time="2025-10-13T05:27:50.511229067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.145006695s" Oct 13 05:27:50.511282 containerd[1625]: time="2025-10-13T05:27:50.511258393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image 
reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:27:50.511976 containerd[1625]: time="2025-10-13T05:27:50.511933636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:27:50.535906 containerd[1625]: time="2025-10-13T05:27:50.535850139Z" level=info msg="CreateContainer within sandbox \"739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:27:50.545607 containerd[1625]: time="2025-10-13T05:27:50.545554258Z" level=info msg="Container db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:50.559726 containerd[1625]: time="2025-10-13T05:27:50.559667104Z" level=info msg="CreateContainer within sandbox \"739ee40f2ffbe47f5c0297e63f689984377895fb8bb3263b1ff817e2638c1eea\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8\"" Oct 13 05:27:50.561475 containerd[1625]: time="2025-10-13T05:27:50.560207160Z" level=info msg="StartContainer for \"db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8\"" Oct 13 05:27:50.561475 containerd[1625]: time="2025-10-13T05:27:50.561310216Z" level=info msg="connecting to shim db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8" address="unix:///run/containerd/s/e2b4f17b5fdc26b6f12cfaff0738b84afec0e4fe712436d9af4968ab4a11b4cb" protocol=ttrpc version=3 Oct 13 05:27:50.588541 systemd[1]: Started cri-containerd-db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8.scope - libcontainer container db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8. 
Oct 13 05:27:50.645241 containerd[1625]: time="2025-10-13T05:27:50.645172443Z" level=info msg="StartContainer for \"db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8\" returns successfully" Oct 13 05:27:51.174737 containerd[1625]: time="2025-10-13T05:27:51.174617435Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:51.176004 containerd[1625]: time="2025-10-13T05:27:51.175912115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:27:51.178043 containerd[1625]: time="2025-10-13T05:27:51.177957680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 665.996753ms" Oct 13 05:27:51.178043 containerd[1625]: time="2025-10-13T05:27:51.178023024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:27:51.180450 containerd[1625]: time="2025-10-13T05:27:51.180411943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:27:51.186884 containerd[1625]: time="2025-10-13T05:27:51.186822920Z" level=info msg="CreateContainer within sandbox \"9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:27:51.196374 containerd[1625]: time="2025-10-13T05:27:51.196290113Z" level=info msg="Container a502ef1f460a7f97dcbb56afcb1ee3c8620d61f596dfea5a23874bf51c1f57a3: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:51.210645 containerd[1625]: time="2025-10-13T05:27:51.210578104Z" level=info msg="CreateContainer within sandbox \"9402a98f65d2656923ad9fe30e81624c28701161995c551d7bbef2769c19f38b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a502ef1f460a7f97dcbb56afcb1ee3c8620d61f596dfea5a23874bf51c1f57a3\"" Oct 13 05:27:51.211192 containerd[1625]: time="2025-10-13T05:27:51.211167113Z" level=info msg="StartContainer for \"a502ef1f460a7f97dcbb56afcb1ee3c8620d61f596dfea5a23874bf51c1f57a3\"" Oct 13 05:27:51.212843 containerd[1625]: time="2025-10-13T05:27:51.212582111Z" level=info msg="connecting to shim a502ef1f460a7f97dcbb56afcb1ee3c8620d61f596dfea5a23874bf51c1f57a3" address="unix:///run/containerd/s/216601163b99ab31a83d55bb1e06bab5d16829940e92986862f196b286abb2cb" protocol=ttrpc version=3 Oct 13 05:27:51.238585 systemd[1]: Started cri-containerd-a502ef1f460a7f97dcbb56afcb1ee3c8620d61f596dfea5a23874bf51c1f57a3.scope - libcontainer container a502ef1f460a7f97dcbb56afcb1ee3c8620d61f596dfea5a23874bf51c1f57a3. 
Oct 13 05:27:51.294607 containerd[1625]: time="2025-10-13T05:27:51.294547468Z" level=info msg="StartContainer for \"a502ef1f460a7f97dcbb56afcb1ee3c8620d61f596dfea5a23874bf51c1f57a3\" returns successfully" Oct 13 05:27:51.320580 kubelet[2786]: E1013 05:27:51.320535 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:27:51.398192 kubelet[2786]: I1013 05:27:51.398109 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75f997d669-22b4j" podStartSLOduration=48.1698323 podStartE2EDuration="1m8.398089731s" podCreationTimestamp="2025-10-13 05:26:43 +0000 UTC" firstStartedPulling="2025-10-13 05:27:30.283561847 +0000 UTC m=+70.074578136" lastFinishedPulling="2025-10-13 05:27:50.511819278 +0000 UTC m=+90.302835567" observedRunningTime="2025-10-13 05:27:51.377027042 +0000 UTC m=+91.168043331" watchObservedRunningTime="2025-10-13 05:27:51.398089731 +0000 UTC m=+91.189106010" Oct 13 05:27:51.399559 kubelet[2786]: I1013 05:27:51.399318 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5db6d8fb56-58p6w" podStartSLOduration=51.724971363 podStartE2EDuration="1m12.399307695s" podCreationTimestamp="2025-10-13 05:26:39 +0000 UTC" firstStartedPulling="2025-10-13 05:27:30.505919844 +0000 UTC m=+70.296936133" lastFinishedPulling="2025-10-13 05:27:51.180256176 +0000 UTC m=+90.971272465" observedRunningTime="2025-10-13 05:27:51.397970825 +0000 UTC m=+91.188987114" watchObservedRunningTime="2025-10-13 05:27:51.399307695 +0000 UTC m=+91.190323984" Oct 13 05:27:51.433544 containerd[1625]: time="2025-10-13T05:27:51.433190407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8\" id:\"7c50b8babe89a7d90fcaa7453d53a6ef754e7cd2bcbae43ab368f41e1181de19\" pid:5584 exited_at:{seconds:1760333271 nanos:432662263}" Oct 13 05:27:52.345632 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:33786.service - OpenSSH per-connection server daemon (10.0.0.1:33786). Oct 13 05:27:52.455674 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 33786 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:52.458093 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:52.464132 systemd-logind[1600]: New session 13 of user core. Oct 13 05:27:52.472613 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:27:52.673883 sshd[5605]: Connection closed by 10.0.0.1 port 33786 Oct 13 05:27:52.675601 sshd-session[5602]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:52.686108 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:33786.service: Deactivated successfully. Oct 13 05:27:52.690041 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:27:52.691330 systemd-logind[1600]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:27:52.696198 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:33800.service - OpenSSH per-connection server daemon (10.0.0.1:33800). Oct 13 05:27:52.697634 systemd-logind[1600]: Removed session 13. 
Oct 13 05:27:52.758999 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 33800 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:52.761321 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:52.771576 systemd-logind[1600]: New session 14 of user core. Oct 13 05:27:52.789669 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:27:53.115036 sshd[5623]: Connection closed by 10.0.0.1 port 33800 Oct 13 05:27:53.115767 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:53.123197 containerd[1625]: time="2025-10-13T05:27:53.123136163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:53.126499 containerd[1625]: time="2025-10-13T05:27:53.126435858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:27:53.127706 containerd[1625]: time="2025-10-13T05:27:53.127635395Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:53.130158 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:33800.service: Deactivated successfully. Oct 13 05:27:53.133517 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:27:53.135594 systemd-logind[1600]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:27:53.136921 containerd[1625]: time="2025-10-13T05:27:53.136747875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:53.140197 containerd[1625]: time="2025-10-13T05:27:53.139790842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.95933093s" Oct 13 05:27:53.140197 containerd[1625]: time="2025-10-13T05:27:53.139845786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:27:53.142992 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:33808.service - OpenSSH per-connection server daemon (10.0.0.1:33808). Oct 13 05:27:53.144281 containerd[1625]: time="2025-10-13T05:27:53.143243678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:27:53.145080 systemd-logind[1600]: Removed session 14. 
Oct 13 05:27:53.159827 containerd[1625]: time="2025-10-13T05:27:53.159054124Z" level=info msg="CreateContainer within sandbox \"4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:27:53.206161 containerd[1625]: time="2025-10-13T05:27:53.205547089Z" level=info msg="Container 91d0fad790664f58e3812758d645fb8ffddc6621b394398704af4894ac4831c2: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:53.219888 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 33808 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:53.222115 sshd-session[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:53.225214 containerd[1625]: time="2025-10-13T05:27:53.225163050Z" level=info msg="CreateContainer within sandbox \"4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"91d0fad790664f58e3812758d645fb8ffddc6621b394398704af4894ac4831c2\"" Oct 13 05:27:53.227507 containerd[1625]: time="2025-10-13T05:27:53.227469910Z" level=info msg="StartContainer for \"91d0fad790664f58e3812758d645fb8ffddc6621b394398704af4894ac4831c2\"" Oct 13 05:27:53.230629 systemd-logind[1600]: New session 15 of user core. Oct 13 05:27:53.231739 containerd[1625]: time="2025-10-13T05:27:53.231025068Z" level=info msg="connecting to shim 91d0fad790664f58e3812758d645fb8ffddc6621b394398704af4894ac4831c2" address="unix:///run/containerd/s/2421a57bb9efa2be4cb2cab97e5660ba4ddfe8ea4732b41529021fd8f76551fb" protocol=ttrpc version=3 Oct 13 05:27:53.238583 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:27:53.257560 systemd[1]: Started cri-containerd-91d0fad790664f58e3812758d645fb8ffddc6621b394398704af4894ac4831c2.scope - libcontainer container 91d0fad790664f58e3812758d645fb8ffddc6621b394398704af4894ac4831c2. Oct 13 05:27:53.307112 update_engine[1602]: I20251013 05:27:53.303742 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 13 05:27:53.308075 update_engine[1602]: I20251013 05:27:53.307612 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 13 05:27:53.311067 update_engine[1602]: I20251013 05:27:53.310982 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 13 05:27:53.319840 update_engine[1602]: E20251013 05:27:53.319705 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Oct 13 05:27:53.319840 update_engine[1602]: I20251013 05:27:53.319805 1602 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 13 05:27:53.371598 containerd[1625]: time="2025-10-13T05:27:53.371441186Z" level=info msg="StartContainer for \"91d0fad790664f58e3812758d645fb8ffddc6621b394398704af4894ac4831c2\" returns successfully" Oct 13 05:27:53.499659 sshd[5656]: Connection closed by 10.0.0.1 port 33808 Oct 13 05:27:53.500114 sshd-session[5640]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:53.506423 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:33808.service: Deactivated successfully. Oct 13 05:27:53.508950 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:27:53.511043 systemd-logind[1600]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:27:53.512893 systemd-logind[1600]: Removed session 15. Oct 13 05:27:56.697346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968682539.mount: Deactivated successfully. 
Oct 13 05:27:56.903524 containerd[1625]: time="2025-10-13T05:27:56.903395423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:56.904729 containerd[1625]: time="2025-10-13T05:27:56.904695218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:27:56.906219 containerd[1625]: time="2025-10-13T05:27:56.906180786Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:56.908725 containerd[1625]: time="2025-10-13T05:27:56.908688995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:27:56.909507 containerd[1625]: time="2025-10-13T05:27:56.909484273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.766186363s" Oct 13 05:27:56.909507 containerd[1625]: time="2025-10-13T05:27:56.909513298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:27:56.910574 containerd[1625]: time="2025-10-13T05:27:56.910506322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:27:56.916269 containerd[1625]: time="2025-10-13T05:27:56.916214821Z" level=info msg="CreateContainer within sandbox \"e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:27:56.927221 containerd[1625]: time="2025-10-13T05:27:56.927170443Z" level=info msg="Container 685ca1e3f62eefbb7720ae9bf8881ab19d8fdaa12eed526be8f2f48d8841f787: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:27:56.938135 containerd[1625]: time="2025-10-13T05:27:56.938104877Z" level=info msg="CreateContainer within sandbox \"e123e457488e3ed886f5c3b740376abce33ddd927b574a32fe19b8cea414006b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"685ca1e3f62eefbb7720ae9bf8881ab19d8fdaa12eed526be8f2f48d8841f787\"" Oct 13 05:27:56.938900 containerd[1625]: time="2025-10-13T05:27:56.938878133Z" level=info msg="StartContainer for \"685ca1e3f62eefbb7720ae9bf8881ab19d8fdaa12eed526be8f2f48d8841f787\"" Oct 13 05:27:56.940658 containerd[1625]: time="2025-10-13T05:27:56.940529546Z" level=info msg="connecting to shim 685ca1e3f62eefbb7720ae9bf8881ab19d8fdaa12eed526be8f2f48d8841f787" address="unix:///run/containerd/s/26550d6b77855dbaf0460a7b99e77d2219e7fab6337b94fca44bf587d02d4b81" protocol=ttrpc version=3 Oct 13 05:27:56.979638 systemd[1]: Started cri-containerd-685ca1e3f62eefbb7720ae9bf8881ab19d8fdaa12eed526be8f2f48d8841f787.scope - libcontainer container 685ca1e3f62eefbb7720ae9bf8881ab19d8fdaa12eed526be8f2f48d8841f787. 
Oct 13 05:27:57.034866 containerd[1625]: time="2025-10-13T05:27:57.034803559Z" level=info msg="StartContainer for \"685ca1e3f62eefbb7720ae9bf8881ab19d8fdaa12eed526be8f2f48d8841f787\" returns successfully" Oct 13 05:27:58.514325 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:48622.service - OpenSSH per-connection server daemon (10.0.0.1:48622). Oct 13 05:27:58.623895 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 48622 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:27:58.634175 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:27:58.639505 systemd-logind[1600]: New session 16 of user core. Oct 13 05:27:58.648533 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:27:59.046905 sshd[5739]: Connection closed by 10.0.0.1 port 48622 Oct 13 05:27:59.053087 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:48622.service: Deactivated successfully. Oct 13 05:27:59.047262 sshd-session[5736]: pam_unix(sshd:session): session closed for user core Oct 13 05:27:59.056035 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:27:59.057184 systemd-logind[1600]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:27:59.060160 systemd-logind[1600]: Removed session 16. Oct 13 05:28:00.995097 containerd[1625]: time="2025-10-13T05:28:00.994982802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:00.995921 containerd[1625]: time="2025-10-13T05:28:00.995879892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:28:01.002966 containerd[1625]: time="2025-10-13T05:28:01.002884357Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:01.005479 containerd[1625]: time="2025-10-13T05:28:01.005384885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:28:01.006107 containerd[1625]: time="2025-10-13T05:28:01.006054473Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 4.095317464s" Oct 13 05:28:01.006107 containerd[1625]: time="2025-10-13T05:28:01.006108064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:28:01.013298 containerd[1625]: time="2025-10-13T05:28:01.013240810Z" level=info msg="CreateContainer within sandbox \"4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:28:01.025560 containerd[1625]: time="2025-10-13T05:28:01.025494038Z" level=info msg="Container 5fba272f80b537efbc50f2a89b7edb1e7e84ce0a1c8211a7fc408ecce3c27bca: CDI devices from CRI Config.CDIDevices: []" Oct 13 
05:28:01.037839 containerd[1625]: time="2025-10-13T05:28:01.037770821Z" level=info msg="CreateContainer within sandbox \"4280712cec38210b0ccf0ae89c4996ed1665bb0f2184745818126b54072a9b2e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5fba272f80b537efbc50f2a89b7edb1e7e84ce0a1c8211a7fc408ecce3c27bca\"" Oct 13 05:28:01.038470 containerd[1625]: time="2025-10-13T05:28:01.038432094Z" level=info msg="StartContainer for \"5fba272f80b537efbc50f2a89b7edb1e7e84ce0a1c8211a7fc408ecce3c27bca\"" Oct 13 05:28:01.040302 containerd[1625]: time="2025-10-13T05:28:01.040210573Z" level=info msg="connecting to shim 5fba272f80b537efbc50f2a89b7edb1e7e84ce0a1c8211a7fc408ecce3c27bca" address="unix:///run/containerd/s/2421a57bb9efa2be4cb2cab97e5660ba4ddfe8ea4732b41529021fd8f76551fb" protocol=ttrpc version=3 Oct 13 05:28:01.072687 systemd[1]: Started cri-containerd-5fba272f80b537efbc50f2a89b7edb1e7e84ce0a1c8211a7fc408ecce3c27bca.scope - libcontainer container 5fba272f80b537efbc50f2a89b7edb1e7e84ce0a1c8211a7fc408ecce3c27bca. Oct 13 05:28:01.143900 containerd[1625]: time="2025-10-13T05:28:01.143803104Z" level=info msg="StartContainer for \"5fba272f80b537efbc50f2a89b7edb1e7e84ce0a1c8211a7fc408ecce3c27bca\" returns successfully" Oct 13 05:28:01.526759 kubelet[2786]: I1013 05:28:01.526040 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-849c4656db-l2tnk" podStartSLOduration=13.164601098 podStartE2EDuration="40.526010765s" podCreationTimestamp="2025-10-13 05:27:21 +0000 UTC" firstStartedPulling="2025-10-13 05:27:29.548921292 +0000 UTC m=+69.339937581" lastFinishedPulling="2025-10-13 05:27:56.910330959 +0000 UTC m=+96.701347248" observedRunningTime="2025-10-13 05:27:57.407622977 +0000 UTC m=+97.198639276" watchObservedRunningTime="2025-10-13 05:28:01.526010765 +0000 UTC m=+101.317027054" Oct 13 05:28:01.528231 kubelet[2786]: I1013 05:28:01.527142 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ckpkj" podStartSLOduration=48.473165356 podStartE2EDuration="1m18.527131979s" podCreationTimestamp="2025-10-13 05:26:43 +0000 UTC" firstStartedPulling="2025-10-13 05:27:30.953175301 +0000 UTC m=+70.744191580" lastFinishedPulling="2025-10-13 05:28:01.007141914 +0000 UTC m=+100.798158203" observedRunningTime="2025-10-13 05:28:01.525799524 +0000 UTC m=+101.316815803" watchObservedRunningTime="2025-10-13 05:28:01.527131979 +0000 UTC m=+101.318148268" Oct 13 05:28:01.530013 kubelet[2786]: I1013 05:28:01.529970 2786 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:28:01.534614 kubelet[2786]: I1013 05:28:01.534569 2786 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:28:03.298724 update_engine[1602]: I20251013 05:28:03.298594 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 13 05:28:03.298724 update_engine[1602]: I20251013 05:28:03.298725 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 13 05:28:03.299471 update_engine[1602]: I20251013 05:28:03.299170 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 13 05:28:03.309524 update_engine[1602]: E20251013 05:28:03.309448 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Oct 13 05:28:03.309635 update_engine[1602]: I20251013 05:28:03.309607 1602 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 13 05:28:04.064887 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:48626.service - OpenSSH per-connection server daemon (10.0.0.1:48626). Oct 13 05:28:04.162493 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 48626 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:04.165074 sshd-session[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:04.170301 systemd-logind[1600]: New session 17 of user core. Oct 13 05:28:04.180669 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 13 05:28:04.484297 sshd[5800]: Connection closed by 10.0.0.1 port 48626 Oct 13 05:28:04.484511 sshd-session[5797]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:04.489765 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:48626.service: Deactivated successfully. Oct 13 05:28:04.492458 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:28:04.493477 systemd-logind[1600]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:28:04.495452 systemd-logind[1600]: Removed session 17. Oct 13 05:28:07.319620 kubelet[2786]: E1013 05:28:07.319573 2786 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 13 05:28:09.499108 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:59892.service - OpenSSH per-connection server daemon (10.0.0.1:59892). Oct 13 05:28:09.563133 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 59892 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:09.564716 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:09.570059 systemd-logind[1600]: New session 18 of user core. Oct 13 05:28:09.576520 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:28:09.716883 sshd[5822]: Connection closed by 10.0.0.1 port 59892 Oct 13 05:28:09.717218 sshd-session[5819]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:09.721735 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:59892.service: Deactivated successfully. Oct 13 05:28:09.724249 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:28:09.725241 systemd-logind[1600]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:28:09.726462 systemd-logind[1600]: Removed session 18. Oct 13 05:28:13.297516 update_engine[1602]: I20251013 05:28:13.297425 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 13 05:28:13.297954 update_engine[1602]: I20251013 05:28:13.297528 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 13 05:28:13.297978 update_engine[1602]: I20251013 05:28:13.297956 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 13 05:28:13.308961 update_engine[1602]: E20251013 05:28:13.308914 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Oct 13 05:28:13.309025 update_engine[1602]: I20251013 05:28:13.308969 1602 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 13 05:28:13.309025 update_engine[1602]: I20251013 05:28:13.308979 1602 omaha_request_action.cc:617] Omaha request response: Oct 13 05:28:13.309091 update_engine[1602]: E20251013 05:28:13.309067 1602 omaha_request_action.cc:636] Omaha request network transfer failed. Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318526 1602 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318575 1602 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318584 1602 update_attempter.cc:306] Processing Done. Oct 13 05:28:13.319309 update_engine[1602]: E20251013 05:28:13.318609 1602 update_attempter.cc:619] Update failed. Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318619 1602 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318627 1602 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318636 1602 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318747 1602 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318775 1602 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318783 1602 omaha_request_action.cc:272] Request: Oct 13 05:28:13.319309 update_engine[1602]: Oct 13 05:28:13.319309 update_engine[1602]: Oct 13 05:28:13.319309 update_engine[1602]: Oct 13 05:28:13.319309 update_engine[1602]: Oct 13 05:28:13.319309 update_engine[1602]: Oct 13 05:28:13.319309 update_engine[1602]: Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318792 1602 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.318843 1602 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 13 05:28:13.319309 update_engine[1602]: I20251013 05:28:13.319260 1602 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 13 05:28:13.322258 locksmithd[1653]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 13 05:28:13.337048 update_engine[1602]: E20251013 05:28:13.337002 1602 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Oct 13 05:28:13.339368 update_engine[1602]: I20251013 05:28:13.337083 1602 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 13 05:28:13.339368 update_engine[1602]: I20251013 05:28:13.337092 1602 omaha_request_action.cc:617] Omaha request response: Oct 13 05:28:13.339368 update_engine[1602]: I20251013 05:28:13.337100 1602 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 13 05:28:13.339368 update_engine[1602]: I20251013 05:28:13.337106 1602 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 13 05:28:13.339368 update_engine[1602]: I20251013 05:28:13.337112 1602 update_attempter.cc:306] Processing Done. Oct 13 05:28:13.339368 update_engine[1602]: I20251013 05:28:13.337132 1602 update_attempter.cc:310] Error event sent. Oct 13 05:28:13.339368 update_engine[1602]: I20251013 05:28:13.337143 1602 update_check_scheduler.cc:74] Next update check in 40m47s Oct 13 05:28:13.339606 locksmithd[1653]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 13 05:28:14.740737 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:59900.service - OpenSSH per-connection server daemon (10.0.0.1:59900). Oct 13 05:28:14.814894 sshd[5838]: Accepted publickey for core from 10.0.0.1 port 59900 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:14.816931 sshd-session[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:14.822424 systemd-logind[1600]: New session 19 of user core. Oct 13 05:28:14.831518 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 05:28:14.978150 sshd[5841]: Connection closed by 10.0.0.1 port 59900 Oct 13 05:28:14.978793 sshd-session[5838]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:14.983835 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:59900.service: Deactivated successfully. Oct 13 05:28:14.986619 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:28:14.987414 systemd-logind[1600]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:28:14.988583 systemd-logind[1600]: Removed session 19. Oct 13 05:28:15.555007 containerd[1625]: time="2025-10-13T05:28:15.554958517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c39c170e5f92426a079dec069129ff4a8d9d1c55692b87cf8c6aafc141b3e613\" id:\"74f08da188f2f597451da58148a3c7ac85ac51756b3b995c3f2aaf6cb91de9d7\" pid:5865 exited_at:{seconds:1760333295 nanos:554546037}" Oct 13 05:28:17.229877 containerd[1625]: time="2025-10-13T05:28:17.229813798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ec7eb4cf43053db4970bd3ff0541e170d4c1b08acb6a94a886dbbebf05835e8\" id:\"86d58d7e221e6257c33358132f76ba4848c725bcba4be7cbeb3b6a68f8e0cadf\" pid:5891 exited_at:{seconds:1760333297 nanos:229383965}" Oct 13 05:28:19.996624 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:57814.service - OpenSSH per-connection server daemon (10.0.0.1:57814). 
Oct 13 05:28:20.073731 sshd[5905]: Accepted publickey for core from 10.0.0.1 port 57814 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:20.075349 sshd-session[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:20.080202 systemd-logind[1600]: New session 20 of user core. Oct 13 05:28:20.085531 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 13 05:28:20.321088 sshd[5908]: Connection closed by 10.0.0.1 port 57814 Oct 13 05:28:20.321450 sshd-session[5905]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:20.334411 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:57814.service: Deactivated successfully. Oct 13 05:28:20.337650 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 05:28:20.339240 systemd-logind[1600]: Session 20 logged out. Waiting for processes to exit. Oct 13 05:28:20.343152 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:57822.service - OpenSSH per-connection server daemon (10.0.0.1:57822). Oct 13 05:28:20.345592 systemd-logind[1600]: Removed session 20. Oct 13 05:28:20.404550 sshd[5923]: Accepted publickey for core from 10.0.0.1 port 57822 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:20.406732 sshd-session[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:20.412056 systemd-logind[1600]: New session 21 of user core. Oct 13 05:28:20.427577 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 05:28:21.289347 sshd[5926]: Connection closed by 10.0.0.1 port 57822 Oct 13 05:28:21.290785 sshd-session[5923]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:21.304157 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:57822.service: Deactivated successfully. Oct 13 05:28:21.306742 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 05:28:21.307672 systemd-logind[1600]: Session 21 logged out. Waiting for processes to exit. Oct 13 05:28:21.312572 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:57838.service - OpenSSH per-connection server daemon (10.0.0.1:57838). Oct 13 05:28:21.313302 systemd-logind[1600]: Removed session 21. Oct 13 05:28:21.405212 sshd[5938]: Accepted publickey for core from 10.0.0.1 port 57838 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:21.406902 sshd-session[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:21.411478 systemd-logind[1600]: New session 22 of user core. Oct 13 05:28:21.425498 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 13 05:28:21.492754 containerd[1625]: time="2025-10-13T05:28:21.492664940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8\" id:\"6be47b4038e26acde98d0c39d63ce1a18e7052664449939043877d3ad24cf424\" pid:5953 exited_at:{seconds:1760333301 nanos:491905265}" Oct 13 05:28:22.553643 sshd[5959]: Connection closed by 10.0.0.1 port 57838 Oct 13 05:28:22.554268 sshd-session[5938]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:22.568337 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:57838.service: Deactivated successfully. Oct 13 05:28:22.573212 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 05:28:22.578585 systemd-logind[1600]: Session 22 logged out. Waiting for processes to exit. 
Oct 13 05:28:22.580270 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:57854.service - OpenSSH per-connection server daemon (10.0.0.1:57854). Oct 13 05:28:22.583425 systemd-logind[1600]: Removed session 22. Oct 13 05:28:22.642283 sshd[6003]: Accepted publickey for core from 10.0.0.1 port 57854 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:22.644772 sshd-session[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:22.650273 systemd-logind[1600]: New session 23 of user core. Oct 13 05:28:22.660600 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 13 05:28:23.434503 sshd[6006]: Connection closed by 10.0.0.1 port 57854 Oct 13 05:28:23.435249 sshd-session[6003]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:23.449313 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:57854.service: Deactivated successfully. Oct 13 05:28:23.452711 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 05:28:23.454759 systemd-logind[1600]: Session 23 logged out. Waiting for processes to exit. Oct 13 05:28:23.460937 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:57866.service - OpenSSH per-connection server daemon (10.0.0.1:57866). Oct 13 05:28:23.463218 systemd-logind[1600]: Removed session 23. Oct 13 05:28:23.544485 sshd[6018]: Accepted publickey for core from 10.0.0.1 port 57866 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:23.546387 sshd-session[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:23.552317 systemd-logind[1600]: New session 24 of user core. Oct 13 05:28:23.562570 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 13 05:28:23.702028 sshd[6021]: Connection closed by 10.0.0.1 port 57866 Oct 13 05:28:23.702420 sshd-session[6018]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:23.708029 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:57866.service: Deactivated successfully. Oct 13 05:28:23.710962 systemd[1]: session-24.scope: Deactivated successfully. Oct 13 05:28:23.712781 systemd-logind[1600]: Session 24 logged out. Waiting for processes to exit. Oct 13 05:28:23.714774 systemd-logind[1600]: Removed session 24. Oct 13 05:28:28.718833 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:46748.service - OpenSSH per-connection server daemon (10.0.0.1:46748). Oct 13 05:28:28.785093 sshd[6036]: Accepted publickey for core from 10.0.0.1 port 46748 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:28.787278 sshd-session[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:28.792794 systemd-logind[1600]: New session 25 of user core. Oct 13 05:28:28.801512 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 13 05:28:28.919158 sshd[6039]: Connection closed by 10.0.0.1 port 46748 Oct 13 05:28:28.919539 sshd-session[6036]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:28.924680 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:46748.service: Deactivated successfully. Oct 13 05:28:28.926999 systemd[1]: session-25.scope: Deactivated successfully. Oct 13 05:28:28.927864 systemd-logind[1600]: Session 25 logged out. Waiting for processes to exit. Oct 13 05:28:28.929396 systemd-logind[1600]: Removed session 25. Oct 13 05:28:33.935299 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:46796.service - OpenSSH per-connection server daemon (10.0.0.1:46796). 
Oct 13 05:28:34.033031 sshd[6054]: Accepted publickey for core from 10.0.0.1 port 46796 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:34.036281 sshd-session[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:34.043699 systemd-logind[1600]: New session 26 of user core. Oct 13 05:28:34.053719 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 13 05:28:34.178215 sshd[6057]: Connection closed by 10.0.0.1 port 46796 Oct 13 05:28:34.178586 sshd-session[6054]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:34.185416 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:46796.service: Deactivated successfully. Oct 13 05:28:34.188952 systemd[1]: session-26.scope: Deactivated successfully. Oct 13 05:28:34.191136 systemd-logind[1600]: Session 26 logged out. Waiting for processes to exit. Oct 13 05:28:34.192990 systemd-logind[1600]: Removed session 26. Oct 13 05:28:39.191044 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:41698.service - OpenSSH per-connection server daemon (10.0.0.1:41698). Oct 13 05:28:39.283328 sshd[6071]: Accepted publickey for core from 10.0.0.1 port 41698 ssh2: RSA SHA256:Qeb/EGktMrqpsXfonWiD53/vBDBZXY0fZnQTqYv7o0w Oct 13 05:28:39.285552 sshd-session[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:28:39.290726 systemd-logind[1600]: New session 27 of user core. Oct 13 05:28:39.300520 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 13 05:28:39.521084 sshd[6074]: Connection closed by 10.0.0.1 port 41698 Oct 13 05:28:39.523692 sshd-session[6071]: pam_unix(sshd:session): session closed for user core Oct 13 05:28:39.529524 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:41698.service: Deactivated successfully. Oct 13 05:28:39.532102 systemd[1]: session-27.scope: Deactivated successfully. Oct 13 05:28:39.533111 systemd-logind[1600]: Session 27 logged out. Waiting for processes to exit. Oct 13 05:28:39.534827 systemd-logind[1600]: Removed session 27. Oct 13 05:28:41.064457 containerd[1625]: time="2025-10-13T05:28:41.064397694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db430ecf8e7cb7796653cab1e2c2a34404b17aa7b3c2abbd2c12e46b173550d8\" id:\"183eb102ef27e50345ce1ab1c245663aa51b112cb7ccd15510136abbf8647975\" pid:6100 exited_at:{seconds:1760333321 nanos:64111283}"