Sep 11 00:27:47.889674 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025
Sep 11 00:27:47.889733 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:27:47.889751 kernel: BIOS-provided physical RAM map:
Sep 11 00:27:47.889776 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 11 00:27:47.889785 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 11 00:27:47.889794 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 11 00:27:47.889805 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 11 00:27:47.889814 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 11 00:27:47.889827 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 11 00:27:47.889836 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 11 00:27:47.889851 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 11 00:27:47.889865 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 11 00:27:47.889874 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 11 00:27:47.889883 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 11 00:27:47.889894 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 11 00:27:47.889904 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 11 00:27:47.889929 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:27:47.889939 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:27:47.889949 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:27:47.889958 kernel: NX (Execute Disable) protection: active
Sep 11 00:27:47.889967 kernel: APIC: Static calls initialized
Sep 11 00:27:47.889976 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Sep 11 00:27:47.889985 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Sep 11 00:27:47.889993 kernel: extended physical RAM map:
Sep 11 00:27:47.890000 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 11 00:27:47.890007 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 11 00:27:47.890014 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 11 00:27:47.890026 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 11 00:27:47.890036 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Sep 11 00:27:47.890046 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Sep 11 00:27:47.890055 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Sep 11 00:27:47.890065 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Sep 11 00:27:47.890075 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Sep 11 00:27:47.890084 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 11 00:27:47.890091 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 11 00:27:47.890098 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 11 00:27:47.890105 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 11 00:27:47.890112 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 11 00:27:47.890123 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 11 00:27:47.890130 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 11 00:27:47.890141 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 11 00:27:47.890148 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 11 00:27:47.890156 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:27:47.890163 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 11 00:27:47.890172 kernel: efi: EFI v2.7 by EDK II
Sep 11 00:27:47.890180 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 11 00:27:47.890187 kernel: random: crng init done
Sep 11 00:27:47.890194 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 11 00:27:47.890202 kernel: secureboot: Secure boot enabled
Sep 11 00:27:47.890209 kernel: SMBIOS 2.8 present.
Sep 11 00:27:47.890216 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 11 00:27:47.890223 kernel: DMI: Memory slots populated: 1/1
Sep 11 00:27:47.890231 kernel: Hypervisor detected: KVM
Sep 11 00:27:47.890238 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 11 00:27:47.890248 kernel: kvm-clock: using sched offset of 7089537639 cycles
Sep 11 00:27:47.890255 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 11 00:27:47.890263 kernel: tsc: Detected 2794.750 MHz processor
Sep 11 00:27:47.890271 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:27:47.890278 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:27:47.890285 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 11 00:27:47.890293 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 11 00:27:47.890306 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:27:47.890313 kernel: Using GB pages for direct mapping
Sep 11 00:27:47.890323 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:27:47.890333 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 11 00:27:47.890340 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 00:27:47.890348 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:27:47.890355 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:27:47.890363 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 11 00:27:47.890370 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:27:47.890378 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:27:47.890385 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:27:47.890395 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:27:47.890402 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 11 00:27:47.890410 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 11 00:27:47.890417 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 11 00:27:47.890425 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 11 00:27:47.890432 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 11 00:27:47.890440 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 11 00:27:47.890447 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 11 00:27:47.890454 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 11 00:27:47.890464 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 11 00:27:47.890471 kernel: No NUMA configuration found
Sep 11 00:27:47.890479 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 11 00:27:47.890486 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 11 00:27:47.890494 kernel: Zone ranges:
Sep 11 00:27:47.890501 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:27:47.890509 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 11 00:27:47.890516 kernel: Normal empty
Sep 11 00:27:47.890523 kernel: Device empty
Sep 11 00:27:47.890531 kernel: Movable zone start for each node
Sep 11 00:27:47.890540 kernel: Early memory node ranges
Sep 11 00:27:47.890548 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 11 00:27:47.890555 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 11 00:27:47.890562 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 11 00:27:47.890570 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 11 00:27:47.890577 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 11 00:27:47.890584 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 11 00:27:47.890592 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:27:47.890599 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 11 00:27:47.890609 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 11 00:27:47.890617 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 11 00:27:47.890624 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 11 00:27:47.890632 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 11 00:27:47.890639 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 11 00:27:47.890647 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 11 00:27:47.890654 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:27:47.890661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 11 00:27:47.890669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 11 00:27:47.890692 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 11 00:27:47.890704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 11 00:27:47.890725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 11 00:27:47.890734 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:27:47.890742 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 11 00:27:47.890749 kernel: TSC deadline timer available
Sep 11 00:27:47.890757 kernel: CPU topo: Max. logical packages: 1
Sep 11 00:27:47.890785 kernel: CPU topo: Max. logical dies: 1
Sep 11 00:27:47.890805 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:27:47.890825 kernel: CPU topo: Max. threads per core: 1
Sep 11 00:27:47.890836 kernel: CPU topo: Num. cores per package: 4
Sep 11 00:27:47.890847 kernel: CPU topo: Num. threads per package: 4
Sep 11 00:27:47.890860 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 11 00:27:47.890875 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 11 00:27:47.890886 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 11 00:27:47.890897 kernel: kvm-guest: setup PV sched yield
Sep 11 00:27:47.890908 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 11 00:27:47.890934 kernel: Booting paravirtualized kernel on KVM
Sep 11 00:27:47.890945 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:27:47.890956 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 11 00:27:47.890967 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 11 00:27:47.890977 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 11 00:27:47.890988 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 11 00:27:47.890999 kernel: kvm-guest: PV spinlocks enabled
Sep 11 00:27:47.891010 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 11 00:27:47.891022 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:27:47.891038 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:27:47.891048 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 00:27:47.891060 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:27:47.891070 kernel: Fallback order for Node 0: 0
Sep 11 00:27:47.891081 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 11 00:27:47.891092 kernel: Policy zone: DMA32
Sep 11 00:27:47.891102 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:27:47.891113 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 00:27:47.891127 kernel: ftrace: allocating 40103 entries in 157 pages
Sep 11 00:27:47.891138 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:27:47.891149 kernel: Dynamic Preempt: voluntary
Sep 11 00:27:47.891160 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:27:47.891172 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:27:47.891183 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 00:27:47.891194 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:27:47.891205 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:27:47.891216 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:27:47.891226 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:27:47.891240 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 00:27:47.891252 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:27:47.891262 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:27:47.891278 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:27:47.891289 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 11 00:27:47.891299 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 00:27:47.891310 kernel: Console: colour dummy device 80x25
Sep 11 00:27:47.891320 kernel: printk: legacy console [ttyS0] enabled
Sep 11 00:27:47.891334 kernel: ACPI: Core revision 20240827
Sep 11 00:27:47.891344 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 11 00:27:47.891354 kernel: APIC: Switch to symmetric I/O mode setup
Sep 11 00:27:47.891364 kernel: x2apic enabled
Sep 11 00:27:47.891375 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 11 00:27:47.891386 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 11 00:27:47.891396 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 11 00:27:47.891407 kernel: kvm-guest: setup PV IPIs
Sep 11 00:27:47.891418 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 11 00:27:47.891433 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 11 00:27:47.891444 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 11 00:27:47.891455 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 11 00:27:47.891466 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 11 00:27:47.891477 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 11 00:27:47.891491 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 11 00:27:47.891502 kernel: Spectre V2 : Mitigation: Retpolines
Sep 11 00:27:47.891512 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 11 00:27:47.891523 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 11 00:27:47.891537 kernel: active return thunk: retbleed_return_thunk
Sep 11 00:27:47.891547 kernel: RETBleed: Mitigation: untrained return thunk
Sep 11 00:27:47.891558 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 11 00:27:47.891569 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 11 00:27:47.891581 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 11 00:27:47.891593 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 11 00:27:47.891604 kernel: active return thunk: srso_return_thunk
Sep 11 00:27:47.891616 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 11 00:27:47.891630 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 11 00:27:47.891641 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 11 00:27:47.891652 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 11 00:27:47.891663 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 11 00:27:47.891673 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 11 00:27:47.891685 kernel: Freeing SMP alternatives memory: 32K
Sep 11 00:27:47.891695 kernel: pid_max: default: 32768 minimum: 301
Sep 11 00:27:47.891706 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:27:47.891717 kernel: landlock: Up and running.
Sep 11 00:27:47.891731 kernel: SELinux: Initializing.
Sep 11 00:27:47.891742 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:27:47.891752 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:27:47.891781 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 11 00:27:47.891792 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 11 00:27:47.891802 kernel: ... version: 0
Sep 11 00:27:47.891817 kernel: ... bit width: 48
Sep 11 00:27:47.891828 kernel: ... generic registers: 6
Sep 11 00:27:47.891839 kernel: ... value mask: 0000ffffffffffff
Sep 11 00:27:47.891855 kernel: ... max period: 00007fffffffffff
Sep 11 00:27:47.891866 kernel: ... fixed-purpose events: 0
Sep 11 00:27:47.891876 kernel: ... event mask: 000000000000003f
Sep 11 00:27:47.891887 kernel: signal: max sigframe size: 1776
Sep 11 00:27:47.891897 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:27:47.891909 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:27:47.891929 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 00:27:47.891939 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:27:47.891950 kernel: smpboot: x86: Booting SMP configuration:
Sep 11 00:27:47.891961 kernel: .... node #0, CPUs: #1 #2 #3
Sep 11 00:27:47.891975 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 00:27:47.891985 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 11 00:27:47.891997 kernel: Memory: 2411272K/2552216K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 135016K reserved, 0K cma-reserved)
Sep 11 00:27:47.892008 kernel: devtmpfs: initialized
Sep 11 00:27:47.892018 kernel: x86/mm: Memory block size: 128MB
Sep 11 00:27:47.892029 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 11 00:27:47.892041 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 11 00:27:47.892051 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:27:47.892066 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 00:27:47.892076 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:27:47.892087 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:27:47.892098 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:27:47.892108 kernel: audit: type=2000 audit(1757550465.243:1): state=initialized audit_enabled=0 res=1
Sep 11 00:27:47.892119 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:27:47.892130 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 11 00:27:47.892140 kernel: cpuidle: using governor menu
Sep 11 00:27:47.892151 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:27:47.892164 kernel: dca service started, version 1.12.1
Sep 11 00:27:47.892175 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 11 00:27:47.892186 kernel: PCI: Using configuration type 1 for base access
Sep 11 00:27:47.892197 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 11 00:27:47.892208 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:27:47.892218 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:27:47.892229 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:27:47.892240 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:27:47.892251 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:27:47.892265 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:27:47.892276 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:27:47.892286 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:27:47.892296 kernel: ACPI: Interpreter enabled
Sep 11 00:27:47.892307 kernel: ACPI: PM: (supports S0 S5)
Sep 11 00:27:47.892317 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:27:47.892328 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:27:47.892338 kernel: PCI: Using E820 reservations for host bridge windows
Sep 11 00:27:47.892349 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 11 00:27:47.892362 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 00:27:47.892597 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:27:47.892752 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 11 00:27:47.892936 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 11 00:27:47.892952 kernel: PCI host bridge to bus 0000:00
Sep 11 00:27:47.893123 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 11 00:27:47.893262 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 11 00:27:47.893414 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 11 00:27:47.893557 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 11 00:27:47.893691 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 11 00:27:47.893874 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:27:47.894031 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 00:27:47.894226 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:27:47.894450 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 11 00:27:47.894603 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 11 00:27:47.894752 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 11 00:27:47.894948 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 11 00:27:47.895114 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 11 00:27:47.895285 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 00:27:47.895412 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 11 00:27:47.895542 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 11 00:27:47.895664 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 11 00:27:47.895837 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 11 00:27:47.896036 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 11 00:27:47.896199 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 11 00:27:47.896356 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 11 00:27:47.896537 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 11 00:27:47.896703 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 11 00:27:47.896882 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 11 00:27:47.897056 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 11 00:27:47.897225 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 11 00:27:47.897712 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 11 00:27:47.897931 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 11 00:27:47.898100 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 11 00:27:47.898239 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 11 00:27:47.898422 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 11 00:27:47.898607 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 11 00:27:47.898799 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 11 00:27:47.898817 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 11 00:27:47.898829 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 11 00:27:47.898840 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 11 00:27:47.898857 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 11 00:27:47.898869 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 11 00:27:47.898879 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 11 00:27:47.898890 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 11 00:27:47.898902 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 11 00:27:47.898924 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 11 00:27:47.898937 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 11 00:27:47.898950 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 11 00:27:47.898961 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 11 00:27:47.898976 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 11 00:27:47.898988 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 11 00:27:47.898999 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 11 00:27:47.899009 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 11 00:27:47.899020 kernel: iommu: Default domain type: Translated
Sep 11 00:27:47.899031 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 11 00:27:47.899042 kernel: efivars: Registered efivars operations
Sep 11 00:27:47.899052 kernel: PCI: Using ACPI for IRQ routing
Sep 11 00:27:47.899064 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 11 00:27:47.899078 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 11 00:27:47.899089 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Sep 11 00:27:47.899099 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Sep 11 00:27:47.899110 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 11 00:27:47.899121 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 11 00:27:47.899284 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 11 00:27:47.899452 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 11 00:27:47.899621 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 11 00:27:47.899643 kernel: vgaarb: loaded
Sep 11 00:27:47.899656 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 11 00:27:47.899667 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 11 00:27:47.899677 kernel: clocksource: Switched to clocksource kvm-clock
Sep 11 00:27:47.899688 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 00:27:47.899699 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 00:27:47.899710 kernel: pnp: PnP ACPI init
Sep 11 00:27:47.899964 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 11 00:27:47.899990 kernel: pnp: PnP ACPI: found 6 devices
Sep 11 00:27:47.900002 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 11 00:27:47.900013 kernel: NET: Registered PF_INET protocol family
Sep 11 00:27:47.900024 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 00:27:47.900035 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 00:27:47.900047 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 00:27:47.900058 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 00:27:47.900069 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 00:27:47.900080 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 00:27:47.900096 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:27:47.900108 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:27:47.900119 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 00:27:47.900130 kernel: NET: Registered PF_XDP protocol family
Sep 11 00:27:47.900301 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 11 00:27:47.900464 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 11 00:27:47.900613 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 11 00:27:47.900788 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 11 00:27:47.900960 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 11 00:27:47.901109 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 11 00:27:47.901254 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 11 00:27:47.901399 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 11 00:27:47.901415 kernel: PCI: CLS 0 bytes, default 64
Sep 11 00:27:47.901426 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 11 00:27:47.901437 kernel: Initialise system trusted keyrings
Sep 11 00:27:47.901448 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 00:27:47.901458 kernel: Key type asymmetric registered
Sep 11 00:27:47.901474 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:27:47.901504 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 11 00:27:47.901517 kernel: io scheduler mq-deadline registered
Sep 11 00:27:47.901528 kernel: io scheduler kyber registered
Sep 11 00:27:47.901540 kernel: io scheduler bfq registered
Sep 11 00:27:47.901551 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:27:47.901563 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 11 00:27:47.901574 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 11 00:27:47.901585 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 11 00:27:47.901600 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:27:47.901611 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:27:47.901623 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 11 00:27:47.901634 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 11 00:27:47.901645 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 11 00:27:47.901656 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 11 00:27:47.901861 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 11 00:27:47.902037 kernel: rtc_cmos 00:04: registered as rtc0
Sep 11 00:27:47.902201 kernel: rtc_cmos 00:04: setting system clock to 2025-09-11T00:27:47 UTC (1757550467)
Sep 11 00:27:47.902360 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 11 00:27:47.902378 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 11 00:27:47.902390 kernel: efifb: probing for efifb
Sep 11 00:27:47.902407 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 11 00:27:47.902419 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 11 00:27:47.902430 kernel: efifb: scrolling: redraw
Sep 11 00:27:47.902442 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 11 00:27:47.902453 kernel: Console: switching to colour frame buffer device 160x50
Sep 11 00:27:47.902469 kernel: fb0: EFI VGA frame buffer device
Sep 11 00:27:47.902484 kernel: pstore: Using crash dump compression: deflate
Sep 11 00:27:47.902495 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 11 00:27:47.902507 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:27:47.902518 kernel: Segment Routing with IPv6
Sep 11 00:27:47.902533 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:27:47.902545 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:27:47.902557 kernel: Key type dns_resolver registered
Sep 11 00:27:47.902568 kernel: IPI shorthand broadcast: enabled
Sep 11 00:27:47.902579 kernel: sched_clock: Marking stable (4343003034, 149583689)->(4510107420, -17520697)
Sep 11 00:27:47.902590 kernel: registered taskstats version 1
Sep 11 00:27:47.902602 kernel: Loading compiled-in X.509 certificates
Sep 11 00:27:47.902613 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f'
Sep 11 00:27:47.902625 kernel: Demotion targets for Node 0: null
Sep 11 00:27:47.902640 kernel: Key type .fscrypt registered
Sep 11 00:27:47.902652 kernel: Key type fscrypt-provisioning registered
Sep 11 00:27:47.902663 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 00:27:47.902675 kernel: ima: Allocated hash algorithm: sha1
Sep 11 00:27:47.902686 kernel: ima: No architecture policies found
Sep 11 00:27:47.902697 kernel: clk: Disabling unused clocks
Sep 11 00:27:47.902709 kernel: Warning: unable to open an initial console.
Sep 11 00:27:47.902721 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 11 00:27:47.902732 kernel: Write protecting the kernel read-only data: 24576k Sep 11 00:27:47.902748 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 11 00:27:47.902790 kernel: Run /init as init process Sep 11 00:27:47.902803 kernel: with arguments: Sep 11 00:27:47.902814 kernel: /init Sep 11 00:27:47.902825 kernel: with environment: Sep 11 00:27:47.902836 kernel: HOME=/ Sep 11 00:27:47.902848 kernel: TERM=linux Sep 11 00:27:47.902860 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 11 00:27:47.902878 systemd[1]: Successfully made /usr/ read-only. Sep 11 00:27:47.902900 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:27:47.902922 systemd[1]: Detected virtualization kvm. Sep 11 00:27:47.902936 systemd[1]: Detected architecture x86-64. Sep 11 00:27:47.902947 systemd[1]: Running in initrd. Sep 11 00:27:47.902959 systemd[1]: No hostname configured, using default hostname. Sep 11 00:27:47.902972 systemd[1]: Hostname set to . Sep 11 00:27:47.902985 systemd[1]: Initializing machine ID from VM UUID. Sep 11 00:27:47.903001 systemd[1]: Queued start job for default target initrd.target. Sep 11 00:27:47.903014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:27:47.903026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:27:47.903040 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 11 00:27:47.903052 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:27:47.903065 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 11 00:27:47.903079 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 11 00:27:47.903097 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 11 00:27:47.903110 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 11 00:27:47.903122 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:27:47.903135 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:27:47.903147 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:27:47.903159 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:27:47.903172 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:27:47.903184 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:27:47.903204 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:27:47.903217 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:27:47.903229 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 11 00:27:47.903241 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 11 00:27:47.903254 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:27:47.903267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:27:47.903279 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:27:47.903291 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 11 00:27:47.903307 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 11 00:27:47.903320 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:27:47.903333 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 11 00:27:47.903346 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 11 00:27:47.903358 systemd[1]: Starting systemd-fsck-usr.service... Sep 11 00:27:47.903370 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:27:47.903382 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:27:47.903395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:27:47.903407 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 11 00:27:47.903424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:27:47.903436 systemd[1]: Finished systemd-fsck-usr.service. Sep 11 00:27:47.903449 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:27:47.903499 systemd-journald[220]: Collecting audit messages is disabled. Sep 11 00:27:47.903532 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:27:47.903545 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 11 00:27:47.903558 systemd-journald[220]: Journal started Sep 11 00:27:47.903590 systemd-journald[220]: Runtime Journal (/run/log/journal/79006b3ded5b4e038995b0560f5ee4dd) is 6M, max 48.2M, 42.2M free. Sep 11 00:27:47.888541 systemd-modules-load[221]: Inserted module 'overlay' Sep 11 00:27:47.905817 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 11 00:27:47.908438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:27:47.914518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:27:47.921729 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 11 00:27:47.921773 kernel: Bridge firewalling registered Sep 11 00:27:47.919862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:27:47.923003 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 11 00:27:47.924747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:27:47.927930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:27:47.929729 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:27:47.932392 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:27:47.932559 systemd-tmpfiles[244]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 11 00:27:47.936857 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 11 00:27:47.941908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:27:47.951726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:27:47.955901 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 11 00:27:47.957899 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a Sep 11 00:27:48.005168 systemd-resolved[274]: Positive Trust Anchors: Sep 11 00:27:48.005192 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:27:48.005223 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:27:48.008172 systemd-resolved[274]: Defaulting to hostname 'linux'. Sep 11 00:27:48.010027 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:27:48.014126 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:27:48.067796 kernel: SCSI subsystem initialized Sep 11 00:27:48.076803 kernel: Loading iSCSI transport class v2.0-870. Sep 11 00:27:48.089791 kernel: iscsi: registered transport (tcp) Sep 11 00:27:48.111862 kernel: iscsi: registered transport (qla4xxx) Sep 11 00:27:48.111887 kernel: QLogic iSCSI HBA Driver Sep 11 00:27:48.133564 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 11 00:27:48.150368 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:27:48.152580 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:27:48.204483 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 11 00:27:48.207118 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 11 00:27:48.265794 kernel: raid6: avx2x4 gen() 27493 MB/s Sep 11 00:27:48.282786 kernel: raid6: avx2x2 gen() 27200 MB/s Sep 11 00:27:48.299924 kernel: raid6: avx2x1 gen() 23436 MB/s Sep 11 00:27:48.299938 kernel: raid6: using algorithm avx2x4 gen() 27493 MB/s Sep 11 00:27:48.317883 kernel: raid6: .... xor() 6677 MB/s, rmw enabled Sep 11 00:27:48.317919 kernel: raid6: using avx2x2 recovery algorithm Sep 11 00:27:48.339792 kernel: xor: automatically using best checksumming function avx Sep 11 00:27:48.510814 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 11 00:27:48.520512 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:27:48.523379 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:27:48.558810 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 11 00:27:48.564552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:27:48.565893 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 11 00:27:48.587905 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Sep 11 00:27:48.624113 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:27:48.626646 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:27:48.716368 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:27:48.719711 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 11 00:27:48.761784 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 11 00:27:48.764608 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 11 00:27:48.778808 kernel: cryptd: max_cpu_qlen set to 1000 Sep 11 00:27:48.778838 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 11 00:27:48.778850 kernel: GPT:9289727 != 19775487 Sep 11 00:27:48.779844 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 11 00:27:48.779864 kernel: GPT:9289727 != 19775487 Sep 11 00:27:48.780803 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 11 00:27:48.780824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:27:48.794794 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 11 00:27:48.797781 kernel: libata version 3.00 loaded. Sep 11 00:27:48.801816 kernel: AES CTR mode by8 optimization enabled Sep 11 00:27:48.806280 kernel: ahci 0000:00:1f.2: version 3.0 Sep 11 00:27:48.806503 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 11 00:27:48.807365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:27:48.811333 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 11 00:27:48.812921 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 11 00:27:48.813083 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 11 00:27:48.807689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:27:48.814603 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:27:48.821096 kernel: scsi host0: ahci Sep 11 00:27:48.820074 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 11 00:27:48.823780 kernel: scsi host1: ahci Sep 11 00:27:48.825792 kernel: scsi host2: ahci Sep 11 00:27:48.826851 kernel: scsi host3: ahci Sep 11 00:27:48.829798 kernel: scsi host4: ahci Sep 11 00:27:48.839076 kernel: scsi host5: ahci Sep 11 00:27:48.842391 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 11 00:27:48.842415 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 11 00:27:48.842426 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 11 00:27:48.842823 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 11 00:27:48.844639 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 11 00:27:48.844658 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 11 00:27:48.844669 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 11 00:27:48.879570 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 11 00:27:48.893935 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 11 00:27:48.896769 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 11 00:27:48.913579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:27:48.914413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:27:48.918289 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 00:27:48.947997 disk-uuid[632]: Primary Header is updated. Sep 11 00:27:48.947997 disk-uuid[632]: Secondary Entries is updated. Sep 11 00:27:48.947997 disk-uuid[632]: Secondary Header is updated. 
Sep 11 00:27:48.951793 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:27:48.955799 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:27:49.156804 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 11 00:27:49.156904 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 11 00:27:49.180803 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 11 00:27:49.180889 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 11 00:27:49.181796 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 11 00:27:49.182795 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 11 00:27:49.183792 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:27:49.184793 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 11 00:27:49.184811 kernel: ata3.00: applying bridge limits Sep 11 00:27:49.185976 kernel: ata3.00: LPM support broken, forcing max_power Sep 11 00:27:49.185991 kernel: ata3.00: configured for UDMA/100 Sep 11 00:27:49.188808 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 11 00:27:49.244801 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 11 00:27:49.245114 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 11 00:27:49.271791 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 11 00:27:49.668038 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 11 00:27:49.669292 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:27:49.670463 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:27:49.674101 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:27:49.677450 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 00:27:49.715418 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:27:49.957455 disk-uuid[633]: The operation has completed successfully. 
Sep 11 00:27:49.958899 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:27:50.000167 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 00:27:50.000343 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 00:27:50.039701 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 11 00:27:50.068714 sh[661]: Success Sep 11 00:27:50.088832 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 11 00:27:50.088898 kernel: device-mapper: uevent: version 1.0.3 Sep 11 00:27:50.088916 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 00:27:50.100787 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 11 00:27:50.136004 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 00:27:50.139139 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 11 00:27:50.155018 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 11 00:27:50.162545 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (673) Sep 11 00:27:50.162569 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 Sep 11 00:27:50.162580 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:27:50.168782 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 00:27:50.168807 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 00:27:50.169679 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 00:27:50.170611 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:27:50.171815 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Sep 11 00:27:50.175482 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 11 00:27:50.177914 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 11 00:27:50.217785 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (706) Sep 11 00:27:50.220607 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:27:50.220634 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:27:50.224288 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:27:50.224317 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:27:50.229789 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:27:50.229942 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 11 00:27:50.233962 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 11 00:27:50.316197 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:27:50.346846 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:27:50.480662 systemd-networkd[842]: lo: Link UP Sep 11 00:27:50.480672 systemd-networkd[842]: lo: Gained carrier Sep 11 00:27:50.484560 systemd-networkd[842]: Enumeration completed Sep 11 00:27:50.485432 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:27:50.486961 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:27:50.486965 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 11 00:27:50.510734 systemd-networkd[842]: eth0: Link UP Sep 11 00:27:50.511012 systemd-networkd[842]: eth0: Gained carrier Sep 11 00:27:50.511033 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:27:50.511418 systemd[1]: Reached target network.target - Network. Sep 11 00:27:50.530838 systemd-networkd[842]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:27:50.531680 ignition[753]: Ignition 2.21.0 Sep 11 00:27:50.531688 ignition[753]: Stage: fetch-offline Sep 11 00:27:50.531731 ignition[753]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:27:50.531741 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:27:50.531882 ignition[753]: parsed url from cmdline: "" Sep 11 00:27:50.531886 ignition[753]: no config URL provided Sep 11 00:27:50.531893 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:27:50.531903 ignition[753]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:27:50.531931 ignition[753]: op(1): [started] loading QEMU firmware config module Sep 11 00:27:50.531936 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 11 00:27:50.542407 ignition[753]: op(1): [finished] loading QEMU firmware config module Sep 11 00:27:50.583250 ignition[753]: parsing config with SHA512: de4e46d11428357caaf9156b9369e8cfc677efa3f24b230bdbcce93b83f58137c131c688611ec89d94998ca9cf9f92991d4d695244a9d45f5eac0c220e908996 Sep 11 00:27:50.587051 unknown[753]: fetched base config from "system" Sep 11 00:27:50.587064 unknown[753]: fetched user config from "qemu" Sep 11 00:27:50.587402 ignition[753]: fetch-offline: fetch-offline passed Sep 11 00:27:50.587723 systemd-resolved[274]: Detected conflict on linux IN A 10.0.0.117 Sep 11 00:27:50.587465 ignition[753]: Ignition finished successfully Sep 11 00:27:50.587734 systemd-resolved[274]: Hostname conflict, changing 
published hostname from 'linux' to 'linux10'. Sep 11 00:27:50.590889 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:27:50.591779 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 11 00:27:50.592685 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 11 00:27:50.693379 ignition[856]: Ignition 2.21.0 Sep 11 00:27:50.693393 ignition[856]: Stage: kargs Sep 11 00:27:50.693532 ignition[856]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:27:50.693544 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:27:50.696540 ignition[856]: kargs: kargs passed Sep 11 00:27:50.696638 ignition[856]: Ignition finished successfully Sep 11 00:27:50.701018 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:27:50.702685 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 11 00:27:50.744212 ignition[864]: Ignition 2.21.0 Sep 11 00:27:50.744227 ignition[864]: Stage: disks Sep 11 00:27:50.744626 ignition[864]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:27:50.744641 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:27:50.748383 ignition[864]: disks: disks passed Sep 11 00:27:50.748443 ignition[864]: Ignition finished successfully Sep 11 00:27:50.752553 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 00:27:50.753226 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:27:50.755012 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:27:50.755322 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:27:50.755638 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:27:50.756119 systemd[1]: Reached target basic.target - Basic System. 
Sep 11 00:27:50.763668 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 00:27:50.803538 systemd-fsck[874]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 11 00:27:50.811107 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:27:50.815246 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 11 00:27:50.941807 kernel: EXT4-fs (vda9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none. Sep 11 00:27:50.942968 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:27:50.944029 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:27:50.946615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:27:50.948792 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:27:50.950786 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 11 00:27:50.950854 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:27:50.950886 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:27:50.961303 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 11 00:27:50.965087 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 11 00:27:50.969708 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (882) Sep 11 00:27:50.969744 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:27:50.969755 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:27:50.973149 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:27:50.973172 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:27:50.975847 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:27:51.016752 initrd-setup-root[906]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:27:51.022892 initrd-setup-root[913]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:27:51.027661 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:27:51.032368 initrd-setup-root[927]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:27:51.133128 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:27:51.135648 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:27:51.136871 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 11 00:27:51.161704 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 11 00:27:51.162853 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:27:51.180941 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 11 00:27:51.206486 ignition[995]: INFO : Ignition 2.21.0 Sep 11 00:27:51.206486 ignition[995]: INFO : Stage: mount Sep 11 00:27:51.208451 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:27:51.208451 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:27:51.208451 ignition[995]: INFO : mount: mount passed Sep 11 00:27:51.208451 ignition[995]: INFO : Ignition finished successfully Sep 11 00:27:51.211120 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 00:27:51.214088 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 00:27:51.357256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:27:51.389787 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1009) Sep 11 00:27:51.389820 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e Sep 11 00:27:51.391787 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 11 00:27:51.394861 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:27:51.394910 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:27:51.396847 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 11 00:27:51.432945 ignition[1026]: INFO : Ignition 2.21.0 Sep 11 00:27:51.432945 ignition[1026]: INFO : Stage: files Sep 11 00:27:51.434889 ignition[1026]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:27:51.434889 ignition[1026]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:27:51.438749 ignition[1026]: DEBUG : files: compiled without relabeling support, skipping Sep 11 00:27:51.441117 ignition[1026]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 11 00:27:51.441117 ignition[1026]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 11 00:27:51.445166 ignition[1026]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 11 00:27:51.446673 ignition[1026]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 11 00:27:51.446673 ignition[1026]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 11 00:27:51.446001 unknown[1026]: wrote ssh authorized keys file for user: core Sep 11 00:27:51.450788 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 11 00:27:51.450788 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 11 00:27:51.502102 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 11 00:27:51.809970 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 11 00:27:51.809970 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 11 00:27:51.814228 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Sep 11 00:27:51.814228 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:27:51.814228 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:27:51.814228 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:27:51.814228 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:27:51.814228 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:27:51.814228 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:27:51.827003 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:27:51.827003 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:27:51.827003 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:27:51.827003 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:27:51.827003 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:27:51.827003 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 11 00:27:52.110036 systemd-networkd[842]: eth0: Gained IPv6LL
Sep 11 00:27:52.265381 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:27:53.042555 ignition[1026]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:27:53.042555 ignition[1026]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 00:27:53.046951 ignition[1026]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:27:53.088935 ignition[1026]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:27:53.088935 ignition[1026]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 00:27:53.088935 ignition[1026]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 11 00:27:53.093856 ignition[1026]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:27:53.093856 ignition[1026]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:27:53.093856 ignition[1026]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 11 00:27:53.093856 ignition[1026]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:27:53.113439 ignition[1026]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:27:53.120568 ignition[1026]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:27:53.122179 ignition[1026]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:27:53.122179 ignition[1026]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:27:53.122179 ignition[1026]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:27:53.122179 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:27:53.122179 ignition[1026]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:27:53.122179 ignition[1026]: INFO : files: files passed
Sep 11 00:27:53.122179 ignition[1026]: INFO : Ignition finished successfully
Sep 11 00:27:53.128053 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 00:27:53.129968 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 00:27:53.132856 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 00:27:53.162644 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:27:53.162838 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:27:53.167484 initrd-setup-root-after-ignition[1055]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 11 00:27:53.171735 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:27:53.173926 initrd-setup-root-after-ignition[1057]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:27:53.175462 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:27:53.178791 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:27:53.179291 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:27:53.182561 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:27:53.274827 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:27:53.275086 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:27:53.276527 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:27:53.279458 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:27:53.280104 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:27:53.281846 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:27:53.315182 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:27:53.317077 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:27:53.344733 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:27:53.345160 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:27:53.347338 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:27:53.349469 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:27:53.349615 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:27:53.351572 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:27:53.352075 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:27:53.352388 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:27:53.352706 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:27:53.353049 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:27:53.353362 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:27:53.353684 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:27:53.354180 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:27:53.354513 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:27:53.354860 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:27:53.355317 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:27:53.355607 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:27:53.355752 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:27:53.376025 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:27:53.376667 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:27:53.377113 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:27:53.380970 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:27:53.381594 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:27:53.381740 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:27:53.386190 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:27:53.386328 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:27:53.386969 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:27:53.387258 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:27:53.394846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:27:53.395223 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:27:53.398069 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:27:53.398379 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:27:53.398494 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:27:53.401147 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:27:53.401254 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:27:53.402834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:27:53.402976 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:27:53.404570 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:27:53.404700 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:27:53.408119 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:27:53.409362 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:27:53.411342 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:27:53.411496 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:27:53.418936 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:27:53.419098 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:27:53.428621 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:27:53.429439 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:27:53.440361 ignition[1081]: INFO : Ignition 2.21.0
Sep 11 00:27:53.440361 ignition[1081]: INFO : Stage: umount
Sep 11 00:27:53.442356 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:27:53.442356 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 00:27:53.442728 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:27:53.445783 ignition[1081]: INFO : umount: umount passed
Sep 11 00:27:53.446820 ignition[1081]: INFO : Ignition finished successfully
Sep 11 00:27:53.449547 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:27:53.449691 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:27:53.450588 systemd[1]: Stopped target network.target - Network.
Sep 11 00:27:53.453123 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:27:53.453183 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:27:53.453456 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:27:53.453498 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:27:53.454298 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:27:53.454356 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:27:53.454607 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:27:53.454650 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:27:53.455204 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:27:53.462082 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:27:53.473677 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:27:53.473866 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:27:53.478030 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:27:53.478322 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:27:53.478445 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:27:53.482375 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:27:53.483996 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:27:53.486274 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:27:53.486338 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:27:53.489734 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:27:53.490182 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:27:53.490236 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:27:53.490572 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:27:53.490630 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:27:53.495848 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:27:53.495901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:27:53.496520 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:27:53.496566 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:27:53.500796 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:27:53.503394 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:27:53.503465 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:27:53.522655 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:27:53.522955 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:27:53.524785 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:27:53.524973 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:27:53.526601 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:27:53.526682 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:27:53.528570 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:27:53.528621 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:27:53.531475 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:27:53.531538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:27:53.532589 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:27:53.532641 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:27:53.536948 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:27:53.537009 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:27:53.541054 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:27:53.546049 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:27:53.547201 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:27:53.550104 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:27:53.550201 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:27:53.554091 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 11 00:27:53.554200 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:27:53.557965 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:27:53.558036 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:27:53.558472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:27:53.558519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:27:53.564622 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 11 00:27:53.564698 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 11 00:27:53.564743 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 11 00:27:53.564818 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:27:53.602834 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:27:53.602983 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:27:53.627225 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:27:53.627381 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:27:53.628422 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:27:53.630132 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:27:53.630191 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:27:53.633708 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:27:53.661668 systemd[1]: Switching root.
Sep 11 00:27:53.715331 systemd-journald[220]: Journal stopped
Sep 11 00:27:55.088972 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:27:55.089055 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:27:55.089076 kernel: SELinux: policy capability open_perms=1
Sep 11 00:27:55.089088 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:27:55.089107 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:27:55.089119 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:27:55.089130 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:27:55.089168 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:27:55.089180 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:27:55.089192 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:27:55.089204 kernel: audit: type=1403 audit(1757550474.156:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:27:55.089222 systemd[1]: Successfully loaded SELinux policy in 55.879ms.
Sep 11 00:27:55.089245 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.524ms.
Sep 11 00:27:55.089260 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:27:55.089273 systemd[1]: Detected virtualization kvm.
Sep 11 00:27:55.089285 systemd[1]: Detected architecture x86-64.
Sep 11 00:27:55.089305 systemd[1]: Detected first boot.
Sep 11 00:27:55.089317 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:27:55.089330 zram_generator::config[1127]: No configuration found.
Sep 11 00:27:55.089349 kernel: Guest personality initialized and is inactive
Sep 11 00:27:55.089361 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 11 00:27:55.089372 kernel: Initialized host personality
Sep 11 00:27:55.089383 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:27:55.089395 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:27:55.089416 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:27:55.089428 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:27:55.089441 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:27:55.089454 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:27:55.089466 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:27:55.089484 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:27:55.089497 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:27:55.089510 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:27:55.089523 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:27:55.089543 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:27:55.089556 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:27:55.089568 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:27:55.089588 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:27:55.089601 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:27:55.089613 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:27:55.089626 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:27:55.089638 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:27:55.089661 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:27:55.089673 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:27:55.089685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:27:55.089698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:27:55.089711 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:27:55.089723 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:27:55.089747 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:27:55.089782 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:27:55.089810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:27:55.089825 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:27:55.089837 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:27:55.089849 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:27:55.089861 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:27:55.089873 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:27:55.089886 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:27:55.089898 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:27:55.089911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:27:55.089923 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:27:55.089943 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:27:55.089956 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:27:55.089967 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:27:55.089979 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:27:55.089992 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:55.090004 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:27:55.090017 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:27:55.090029 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:27:55.090049 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:27:55.090062 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:27:55.090074 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:27:55.090087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:27:55.090103 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:27:55.090115 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:27:55.090127 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:27:55.090139 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:27:55.090151 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:27:55.090172 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:27:55.090185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:27:55.090198 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:27:55.090211 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:27:55.090224 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:27:55.090236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:27:55.090248 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:27:55.090261 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:27:55.090290 kernel: loop: module loaded
Sep 11 00:27:55.090302 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:27:55.090314 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:27:55.090326 kernel: fuse: init (API version 7.41)
Sep 11 00:27:55.090339 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:27:55.090351 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:27:55.090384 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:27:55.090419 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:27:55.090433 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:27:55.090445 systemd[1]: Stopped verity-setup.service.
Sep 11 00:27:55.090458 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:55.090471 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:27:55.090492 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:27:55.090507 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:27:55.090519 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:27:55.090531 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:27:55.090543 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:27:55.090555 kernel: ACPI: bus type drm_connector registered
Sep 11 00:27:55.090567 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:27:55.090586 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:27:55.090599 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:27:55.090617 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:27:55.090629 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:27:55.090641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:27:55.090676 systemd-journald[1198]: Collecting audit messages is disabled.
Sep 11 00:27:55.090702 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:27:55.090714 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:27:55.090746 systemd-journald[1198]: Journal started
Sep 11 00:27:55.090802 systemd-journald[1198]: Runtime Journal (/run/log/journal/79006b3ded5b4e038995b0560f5ee4dd) is 6M, max 48.2M, 42.2M free.
Sep 11 00:27:54.807369 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:27:54.832242 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 00:27:54.832790 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:27:55.094796 systemd[1]: Started systemd-journald.service - Journal Service. Sep 11 00:27:55.095803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:27:55.096083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:27:55.097711 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 11 00:27:55.098003 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 11 00:27:55.099560 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:27:55.099862 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:27:55.101284 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:27:55.102713 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:27:55.104280 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 11 00:27:55.105854 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 11 00:27:55.120403 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:27:55.123259 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 11 00:27:55.125640 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 11 00:27:55.126828 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 11 00:27:55.126976 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:27:55.129175 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 11 00:27:55.144879 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 11 00:27:55.146016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 11 00:27:55.147239 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:27:55.149980 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:27:55.151474 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:27:55.153144 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:27:55.156940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:27:55.158371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:27:55.167082 systemd-journald[1198]: Time spent on flushing to /var/log/journal/79006b3ded5b4e038995b0560f5ee4dd is 35.249ms for 1042 entries.
Sep 11 00:27:55.167082 systemd-journald[1198]: System Journal (/var/log/journal/79006b3ded5b4e038995b0560f5ee4dd) is 8M, max 195.6M, 187.6M free.
Sep 11 00:27:55.210257 systemd-journald[1198]: Received client request to flush runtime journal.
Sep 11 00:27:55.238506 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:27:55.242520 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:27:55.244977 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:27:55.249886 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 00:27:55.251994 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 00:27:55.255964 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 00:27:55.257852 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 00:27:55.261319 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:27:55.267826 kernel: loop0: detected capacity change from 0 to 146240
Sep 11 00:27:55.269161 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 00:27:55.271962 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 00:27:55.286714 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 11 00:27:55.287448 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 11 00:27:55.294804 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 00:27:55.295183 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:27:55.300345 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:27:55.323074 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 00:27:55.333788 kernel: loop1: detected capacity change from 0 to 224512
Sep 11 00:27:55.349832 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 00:27:55.354419 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:27:55.357799 kernel: loop2: detected capacity change from 0 to 113872
Sep 11 00:27:55.381527 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 11 00:27:55.381548 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 11 00:27:55.387355 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:27:55.391822 kernel: loop3: detected capacity change from 0 to 146240
Sep 11 00:27:55.406803 kernel: loop4: detected capacity change from 0 to 224512
Sep 11 00:27:55.414795 kernel: loop5: detected capacity change from 0 to 113872
Sep 11 00:27:55.421145 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 11 00:27:55.421830 (sd-merge)[1272]: Merged extensions into '/usr'.
Sep 11 00:27:55.491061 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 00:27:55.491307 systemd[1]: Reloading...
Sep 11 00:27:55.567857 zram_generator::config[1297]: No configuration found.
Sep 11 00:27:55.752827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:27:55.765581 ldconfig[1241]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 00:27:55.835317 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 00:27:55.835979 systemd[1]: Reloading finished in 344 ms.
Sep 11 00:27:55.915179 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 00:27:55.916944 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 00:27:55.936670 systemd[1]: Starting ensure-sysext.service...
Sep 11 00:27:55.938862 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:27:55.948936 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Sep 11 00:27:55.948954 systemd[1]: Reloading...
Sep 11 00:27:55.970656 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 11 00:27:55.970700 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 11 00:27:55.971078 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 11 00:27:55.971407 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 11 00:27:55.972333 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 11 00:27:55.972627 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Sep 11 00:27:55.972702 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Sep 11 00:27:55.978994 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:27:55.979159 systemd-tmpfiles[1336]: Skipping /boot
Sep 11 00:27:56.008807 zram_generator::config[1366]: No configuration found.
Sep 11 00:27:56.010566 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:27:56.010579 systemd-tmpfiles[1336]: Skipping /boot
Sep 11 00:27:56.104245 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:27:56.190093 systemd[1]: Reloading finished in 240 ms.
Sep 11 00:27:56.220334 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 00:27:56.256454 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:27:56.271578 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:27:56.274510 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 11 00:27:56.276449 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 11 00:27:56.284078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:27:56.287179 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:27:56.291032 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 11 00:27:56.296886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:56.297133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:27:56.301051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:27:56.306013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:27:56.309179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:27:56.310655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:27:56.310814 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:27:56.317645 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 00:27:56.319252 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:56.322142 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 11 00:27:56.332211 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 11 00:27:56.334577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:27:56.334903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:27:56.336753 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:27:56.336997 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:27:56.338874 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:27:56.339129 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:27:56.362181 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 11 00:27:56.368337 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:56.368600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:27:56.372108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:27:56.376731 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:27:56.382950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:27:56.384168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:27:56.384280 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:27:56.384363 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:56.394542 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 11 00:27:56.397333 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 00:27:56.400070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:27:56.400329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:27:56.402185 augenrules[1440]: No rules
Sep 11 00:27:56.402818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:27:56.403060 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:27:56.404284 systemd-udevd[1407]: Using default interface naming scheme 'v255'.
Sep 11 00:27:56.405253 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:27:56.405524 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:27:56.407403 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:27:56.407641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:27:56.409315 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 00:27:56.423427 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:56.425310 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:27:56.426629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:27:56.430105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:27:56.433330 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:27:56.435032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:27:56.441875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:27:56.443577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:27:56.443871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:27:56.444204 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 11 00:27:56.444300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:27:56.451684 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:27:56.469270 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:27:56.472582 systemd[1]: Finished ensure-sysext.service.
Sep 11 00:27:56.474045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:27:56.474319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:27:56.487546 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:27:56.487831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:27:56.491110 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:27:56.504885 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 11 00:27:56.512573 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:27:56.515039 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:27:56.516819 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:27:56.517120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:27:56.520141 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:27:56.523282 augenrules[1455]: /sbin/augenrules: No change
Sep 11 00:27:56.544176 augenrules[1512]: No rules
Sep 11 00:27:56.545827 systemd-resolved[1405]: Positive Trust Anchors:
Sep 11 00:27:56.546091 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:27:56.546129 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:27:56.546167 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:27:56.546744 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:27:56.551499 systemd-resolved[1405]: Defaulting to hostname 'linux'.
Sep 11 00:27:56.553480 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:27:56.554966 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:27:56.618460 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 11 00:27:56.664270 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:27:56.674728 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 11 00:27:56.688781 kernel: mousedev: PS/2 mouse device common for all mice
Sep 11 00:27:56.702788 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 11 00:27:56.726817 systemd-networkd[1487]: lo: Link UP
Sep 11 00:27:56.726832 systemd-networkd[1487]: lo: Gained carrier
Sep 11 00:27:56.729657 systemd-networkd[1487]: Enumeration completed
Sep 11 00:27:56.729842 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:27:56.731359 systemd[1]: Reached target network.target - Network.
Sep 11 00:27:56.732254 systemd-networkd[1487]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:27:56.732265 systemd-networkd[1487]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:27:56.733184 systemd-networkd[1487]: eth0: Link UP
Sep 11 00:27:56.734325 systemd-networkd[1487]: eth0: Gained carrier
Sep 11 00:27:56.734349 systemd-networkd[1487]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:27:56.735014 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 11 00:27:56.737853 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 11 00:27:56.739300 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 11 00:27:56.739479 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 11 00:27:56.741433 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 11 00:27:56.749776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 11 00:27:56.753829 systemd-networkd[1487]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 00:27:56.755401 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 11 00:27:56.758022 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:27:58.574976 systemd-timesyncd[1500]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 11 00:27:58.575039 systemd-timesyncd[1500]: Initial clock synchronization to Thu 2025-09-11 00:27:58.574843 UTC.
Sep 11 00:27:58.575580 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 11 00:27:58.576361 systemd-resolved[1405]: Clock change detected. Flushing caches.
Sep 11 00:27:58.577465 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 11 00:27:58.578924 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 11 00:27:58.580268 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 11 00:27:58.581735 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 11 00:27:58.581764 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:27:58.582824 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 00:27:58.584689 kernel: ACPI: button: Power Button [PWRF]
Sep 11 00:27:58.585030 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 11 00:27:58.586812 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 11 00:27:58.588273 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:27:58.590785 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 11 00:27:58.594887 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 11 00:27:58.600565 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 11 00:27:58.602018 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 11 00:27:58.603797 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 11 00:27:58.607717 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 11 00:27:58.609936 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 11 00:27:58.613745 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 11 00:27:58.615544 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 11 00:27:58.620100 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:27:58.621766 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:27:58.623786 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:27:58.623814 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:27:58.626797 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 11 00:27:58.629477 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 11 00:27:58.633881 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 11 00:27:58.638838 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 11 00:27:58.640969 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 11 00:27:58.642039 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 11 00:27:58.643875 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 11 00:27:58.645538 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 11 00:27:58.651079 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 11 00:27:58.656803 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 11 00:27:58.658024 jq[1554]: false
Sep 11 00:27:58.658181 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 11 00:27:58.665427 extend-filesystems[1555]: Found /dev/vda6
Sep 11 00:27:58.673852 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 11 00:27:58.676638 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 11 00:27:58.677226 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 11 00:27:58.680961 systemd[1]: Starting update-engine.service - Update Engine...
Sep 11 00:27:58.684792 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing passwd entry cache
Sep 11 00:27:58.685148 oslogin_cache_refresh[1556]: Refreshing passwd entry cache
Sep 11 00:27:58.689686 extend-filesystems[1555]: Found /dev/vda9
Sep 11 00:27:58.687074 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 11 00:27:58.691252 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 11 00:27:58.694395 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 11 00:27:58.694680 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 11 00:27:58.700024 extend-filesystems[1555]: Checking size of /dev/vda9
Sep 11 00:27:58.702056 systemd[1]: motdgen.service: Deactivated successfully.
Sep 11 00:27:58.702416 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 11 00:27:58.704100 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 11 00:27:58.704443 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 11 00:27:58.705246 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting users, quitting
Sep 11 00:27:58.705241 oslogin_cache_refresh[1556]: Failure getting users, quitting
Sep 11 00:27:58.705331 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 11 00:27:58.705269 oslogin_cache_refresh[1556]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 11 00:27:58.705388 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Refreshing group entry cache
Sep 11 00:27:58.705338 oslogin_cache_refresh[1556]: Refreshing group entry cache
Sep 11 00:27:58.714614 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Failure getting groups, quitting
Sep 11 00:27:58.714614 google_oslogin_nss_cache[1556]: oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 11 00:27:58.714600 oslogin_cache_refresh[1556]: Failure getting groups, quitting
Sep 11 00:27:58.714617 oslogin_cache_refresh[1556]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 11 00:27:58.718798 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 11 00:27:58.723933 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 11 00:27:58.730957 jq[1573]: true
Sep 11 00:27:58.741262 update_engine[1570]: I20250911 00:27:58.741189 1570 main.cc:92] Flatcar Update Engine starting
Sep 11 00:27:58.746003 dbus-daemon[1550]: [system] SELinux support is enabled
Sep 11 00:27:58.749217 update_engine[1570]: I20250911 00:27:58.749091 1570 update_check_scheduler.cc:74] Next update check in 9m31s
Sep 11 00:27:58.749792 extend-filesystems[1555]: Resized partition /dev/vda9
Sep 11 00:27:58.751915 extend-filesystems[1594]: resize2fs 1.47.2 (1-Jan-2025)
Sep 11 00:27:58.760689 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 11 00:27:58.768333 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 11 00:27:58.784439 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 11 00:27:58.786723 systemd[1]: Started update-engine.service - Update Engine.
Sep 11 00:27:58.788543 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 11 00:27:58.790522 tar[1582]: linux-amd64/LICENSE
Sep 11 00:27:58.788807 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 11 00:27:58.790904 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 11 00:27:58.791328 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 11 00:27:58.795004 jq[1597]: true
Sep 11 00:27:58.799160 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 11 00:27:58.804787 tar[1582]: linux-amd64/helm
Sep 11 00:27:58.917875 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 11 00:27:58.821012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:27:58.878251 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:27:58.922572 extend-filesystems[1594]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 11 00:27:58.922572 extend-filesystems[1594]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 11 00:27:58.922572 extend-filesystems[1594]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 11 00:27:58.878571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:27:58.936370 extend-filesystems[1555]: Resized filesystem in /dev/vda9
Sep 11 00:27:58.909691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:27:58.921390 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 11 00:27:58.924656 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 11 00:27:58.950917 bash[1622]: Updated "/home/core/.ssh/authorized_keys"
Sep 11 00:27:58.957881 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 11 00:27:58.959818 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 11 00:27:58.965747 kernel: kvm_amd: TSC scaling supported
Sep 11 00:27:58.965795 kernel: kvm_amd: Nested Virtualization enabled
Sep 11 00:27:58.965809 kernel: kvm_amd: Nested Paging enabled
Sep 11 00:27:58.965822 kernel: kvm_amd: LBR virtualization supported
Sep 11 00:27:58.971753 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 11 00:27:58.971807 kernel: kvm_amd: Virtual GIF supported
Sep 11 00:27:59.051658 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 11 00:27:59.105850 locksmithd[1599]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 11 00:27:59.129718 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 11 00:27:59.133791 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 11 00:27:59.165087 systemd[1]: issuegen.service: Deactivated successfully.
Sep 11 00:27:59.165348 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 11 00:27:59.168212 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 11 00:27:59.191978 kernel: EDAC MC: Ver: 3.0.0
Sep 11 00:27:59.222854 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:27:59.224805 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 11 00:27:59.227848 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 11 00:27:59.321740 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 11 00:27:59.323063 systemd[1]: Reached target getty.target - Login Prompts.
Sep 11 00:27:59.324720 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 11 00:27:59.324750 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 11 00:27:59.325408 systemd-logind[1565]: New seat seat0.
Sep 11 00:27:59.327770 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 11 00:27:59.368924 containerd[1591]: time="2025-09-11T00:27:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 11 00:27:59.369856 containerd[1591]: time="2025-09-11T00:27:59.369802593Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 11 00:27:59.381605 containerd[1591]: time="2025-09-11T00:27:59.381538989Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.682µs"
Sep 11 00:27:59.381605 containerd[1591]: time="2025-09-11T00:27:59.381600614Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 11 00:27:59.381682 containerd[1591]: time="2025-09-11T00:27:59.381625742Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 11 00:27:59.381915 containerd[1591]: time="2025-09-11T00:27:59.381889716Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 11 00:27:59.381945 containerd[1591]: time="2025-09-11T00:27:59.381914763Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 11 00:27:59.381965 containerd[1591]: time="2025-09-11T00:27:59.381946653Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382053 containerd[1591]: time="2025-09-11T00:27:59.382027926Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382053 containerd[1591]: time="2025-09-11T00:27:59.382047392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382401 containerd[1591]: time="2025-09-11T00:27:59.382372251Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382401 containerd[1591]: time="2025-09-11T00:27:59.382394513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382450 containerd[1591]: time="2025-09-11T00:27:59.382406105Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382450 containerd[1591]: time="2025-09-11T00:27:59.382414691Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382589 containerd[1591]: time="2025-09-11T00:27:59.382558821Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382885 containerd[1591]: time="2025-09-11T00:27:59.382859755Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382930 containerd[1591]: time="2025-09-11T00:27:59.382903688Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 00:27:59.382930 containerd[1591]: time="2025-09-11T00:27:59.382914318Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 11 00:27:59.382970 containerd[1591]: time="2025-09-11T00:27:59.382956577Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 11 00:27:59.383289 containerd[1591]: time="2025-09-11T00:27:59.383254155Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 11 00:27:59.383396 containerd[1591]: time="2025-09-11T00:27:59.383374701Z" level=info msg="metadata content store policy set" policy=shared
Sep 11 00:27:59.389214 containerd[1591]: time="2025-09-11T00:27:59.389185616Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 11 00:27:59.389266 containerd[1591]: time="2025-09-11T00:27:59.389248514Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 11 00:27:59.389289 containerd[1591]: time="2025-09-11T00:27:59.389264574Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 11 00:27:59.389289 containerd[1591]: time="2025-09-11T00:27:59.389277919Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 11 00:27:59.389335 containerd[1591]: time="2025-09-11T00:27:59.389290262Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 11 00:27:59.389335 containerd[1591]: time="2025-09-11T00:27:59.389300291Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 11 00:27:59.389335 containerd[1591]: time="2025-09-11T00:27:59.389312374Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 11 00:27:59.389335 containerd[1591]: time="2025-09-11T00:27:59.389326621Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 11 00:27:59.389335 containerd[1591]: time="2025-09-11T00:27:59.389337120Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 11 00:27:59.389443 containerd[1591]:
time="2025-09-11T00:27:59.389348121Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:27:59.389443 containerd[1591]: time="2025-09-11T00:27:59.389391192Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:27:59.389443 containerd[1591]: time="2025-09-11T00:27:59.389403795Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389531405Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389556872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389570017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389593221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389603360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389613178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389624058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389636351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389649286Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389686105Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389697696Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389768048Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389783878Z" level=info msg="Start snapshots syncer" Sep 11 00:27:59.390092 containerd[1591]: time="2025-09-11T00:27:59.389817721Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:27:59.390497 containerd[1591]: time="2025-09-11T00:27:59.390059835Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:27:59.390497 containerd[1591]: time="2025-09-11T00:27:59.390108086Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390207542Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390317809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390335853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390345211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390354418Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390367773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390377331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390386718Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390414100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390425261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390434879Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390465366Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390479092Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:27:59.390705 containerd[1591]: time="2025-09-11T00:27:59.390487577Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:27:59.390979 containerd[1591]: time="2025-09-11T00:27:59.390496254Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:27:59.390979 containerd[1591]: time="2025-09-11T00:27:59.390504128Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:27:59.390979 containerd[1591]: time="2025-09-11T00:27:59.390513967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:27:59.390979 containerd[1591]: time="2025-09-11T00:27:59.390526320Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:27:59.390979 containerd[1591]: time="2025-09-11T00:27:59.390545125Z" level=info msg="runtime interface created" Sep 11 00:27:59.390979 containerd[1591]: time="2025-09-11T00:27:59.390550996Z" level=info msg="created NRI interface" Sep 11 00:27:59.390979 containerd[1591]: time="2025-09-11T00:27:59.390583758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:27:59.390979 containerd[1591]: time="2025-09-11T00:27:59.390597664Z" level=info msg="Connect containerd service" Sep 11 00:27:59.390979 containerd[1591]: time="2025-09-11T00:27:59.390618643Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:27:59.391639 
containerd[1591]: time="2025-09-11T00:27:59.391601636Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:27:59.679766 containerd[1591]: time="2025-09-11T00:27:59.679524171Z" level=info msg="Start subscribing containerd event" Sep 11 00:27:59.679766 containerd[1591]: time="2025-09-11T00:27:59.679658984Z" level=info msg="Start recovering state" Sep 11 00:27:59.680053 containerd[1591]: time="2025-09-11T00:27:59.680009331Z" level=info msg="Start event monitor" Sep 11 00:27:59.680083 containerd[1591]: time="2025-09-11T00:27:59.680066678Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:27:59.680128 containerd[1591]: time="2025-09-11T00:27:59.680073902Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:27:59.680155 containerd[1591]: time="2025-09-11T00:27:59.680078691Z" level=info msg="Start streaming server" Sep 11 00:27:59.680202 containerd[1591]: time="2025-09-11T00:27:59.680172988Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:27:59.680202 containerd[1591]: time="2025-09-11T00:27:59.680200409Z" level=info msg="runtime interface starting up..." Sep 11 00:27:59.680244 containerd[1591]: time="2025-09-11T00:27:59.680207602Z" level=info msg="starting plugins..." Sep 11 00:27:59.680244 containerd[1591]: time="2025-09-11T00:27:59.680230876Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:27:59.680452 containerd[1591]: time="2025-09-11T00:27:59.680192494Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:27:59.680769 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 11 00:27:59.681169 containerd[1591]: time="2025-09-11T00:27:59.681140061Z" level=info msg="containerd successfully booted in 0.312782s" Sep 11 00:27:59.847213 tar[1582]: linux-amd64/README.md Sep 11 00:27:59.881239 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:28:00.069988 systemd-networkd[1487]: eth0: Gained IPv6LL Sep 11 00:28:00.074401 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:28:00.076598 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:28:00.079885 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 00:28:00.082815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:28:00.085265 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:28:00.117796 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 00:28:00.118200 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 11 00:28:00.120234 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:28:00.122927 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:28:01.508228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:28:01.510969 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:28:01.514766 systemd[1]: Startup finished in 4.445s (kernel) + 6.478s (initrd) + 5.596s (userspace) = 16.520s. 
Sep 11 00:28:01.519261 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:28:02.307099 kubelet[1697]: E0911 00:28:02.306827 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:28:02.312321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:28:02.312534 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:28:02.313004 systemd[1]: kubelet.service: Consumed 2.020s CPU time, 264M memory peak. Sep 11 00:28:03.152008 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:28:03.153592 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:57056.service - OpenSSH per-connection server daemon (10.0.0.1:57056). Sep 11 00:28:03.228979 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 57056 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:28:03.231128 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:03.238075 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:28:03.239170 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:28:03.246194 systemd-logind[1565]: New session 1 of user core. Sep 11 00:28:03.275345 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:28:03.278412 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 11 00:28:03.298417 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:28:03.301030 systemd-logind[1565]: New session c1 of user core. Sep 11 00:28:03.458934 systemd[1715]: Queued start job for default target default.target. Sep 11 00:28:03.480902 systemd[1715]: Created slice app.slice - User Application Slice. Sep 11 00:28:03.480927 systemd[1715]: Reached target paths.target - Paths. Sep 11 00:28:03.480965 systemd[1715]: Reached target timers.target - Timers. Sep 11 00:28:03.482491 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:28:03.494338 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:28:03.494503 systemd[1715]: Reached target sockets.target - Sockets. Sep 11 00:28:03.494554 systemd[1715]: Reached target basic.target - Basic System. Sep 11 00:28:03.494595 systemd[1715]: Reached target default.target - Main User Target. Sep 11 00:28:03.494627 systemd[1715]: Startup finished in 186ms. Sep 11 00:28:03.494985 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:28:03.496849 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:28:03.564030 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:57058.service - OpenSSH per-connection server daemon (10.0.0.1:57058). Sep 11 00:28:03.618068 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 57058 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:28:03.619343 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:03.623816 systemd-logind[1565]: New session 2 of user core. Sep 11 00:28:03.633800 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 11 00:28:03.685269 sshd[1728]: Connection closed by 10.0.0.1 port 57058 Sep 11 00:28:03.685549 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:03.704251 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:57058.service: Deactivated successfully. Sep 11 00:28:03.706097 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 00:28:03.706790 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit. Sep 11 00:28:03.709592 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:57070.service - OpenSSH per-connection server daemon (10.0.0.1:57070). Sep 11 00:28:03.710399 systemd-logind[1565]: Removed session 2. Sep 11 00:28:03.769641 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 57070 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:28:03.770938 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:03.775245 systemd-logind[1565]: New session 3 of user core. Sep 11 00:28:03.784778 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:28:03.833632 sshd[1736]: Connection closed by 10.0.0.1 port 57070 Sep 11 00:28:03.833924 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:03.849029 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:57070.service: Deactivated successfully. Sep 11 00:28:03.851172 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 00:28:03.851977 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit. Sep 11 00:28:03.854992 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:57080.service - OpenSSH per-connection server daemon (10.0.0.1:57080). Sep 11 00:28:03.855621 systemd-logind[1565]: Removed session 3. 
Sep 11 00:28:03.906006 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 57080 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:28:03.907357 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:03.911975 systemd-logind[1565]: New session 4 of user core. Sep 11 00:28:03.922845 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:28:03.977442 sshd[1744]: Connection closed by 10.0.0.1 port 57080 Sep 11 00:28:03.977859 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:03.993351 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:57080.service: Deactivated successfully. Sep 11 00:28:03.995537 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:28:03.996306 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:28:03.999557 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:57090.service - OpenSSH per-connection server daemon (10.0.0.1:57090). Sep 11 00:28:04.000347 systemd-logind[1565]: Removed session 4. Sep 11 00:28:04.057402 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 57090 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:28:04.059116 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:04.064180 systemd-logind[1565]: New session 5 of user core. Sep 11 00:28:04.073782 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 11 00:28:04.133402 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:28:04.133764 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:28:04.153832 sudo[1753]: pam_unix(sudo:session): session closed for user root Sep 11 00:28:04.155531 sshd[1752]: Connection closed by 10.0.0.1 port 57090 Sep 11 00:28:04.155946 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:04.168867 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:57090.service: Deactivated successfully. Sep 11 00:28:04.170877 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:28:04.171657 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:28:04.174850 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:57102.service - OpenSSH per-connection server daemon (10.0.0.1:57102). Sep 11 00:28:04.175393 systemd-logind[1565]: Removed session 5. Sep 11 00:28:04.230328 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 57102 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:28:04.232148 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:04.236929 systemd-logind[1565]: New session 6 of user core. Sep 11 00:28:04.245828 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 11 00:28:04.300482 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:28:04.300907 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:28:04.467342 sudo[1763]: pam_unix(sudo:session): session closed for user root Sep 11 00:28:04.473728 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:28:04.474029 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:28:04.483953 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:28:04.535987 augenrules[1785]: No rules Sep 11 00:28:04.537690 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:28:04.538000 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:28:04.539127 sudo[1762]: pam_unix(sudo:session): session closed for user root Sep 11 00:28:04.540681 sshd[1761]: Connection closed by 10.0.0.1 port 57102 Sep 11 00:28:04.540980 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:04.554382 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:57102.service: Deactivated successfully. Sep 11 00:28:04.556162 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:28:04.557007 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:28:04.559942 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:57108.service - OpenSSH per-connection server daemon (10.0.0.1:57108). Sep 11 00:28:04.560576 systemd-logind[1565]: Removed session 6. Sep 11 00:28:04.612564 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 57108 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:28:04.614036 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:28:04.618221 systemd-logind[1565]: New session 7 of user core. 
Sep 11 00:28:04.627787 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:28:04.680230 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:28:04.680542 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:28:05.602387 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:28:05.616010 (dockerd)[1817]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:28:06.242401 dockerd[1817]: time="2025-09-11T00:28:06.242298201Z" level=info msg="Starting up" Sep 11 00:28:06.243415 dockerd[1817]: time="2025-09-11T00:28:06.243347288Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:28:06.932547 dockerd[1817]: time="2025-09-11T00:28:06.932488347Z" level=info msg="Loading containers: start." Sep 11 00:28:06.944760 kernel: Initializing XFRM netlink socket Sep 11 00:28:07.238856 systemd-networkd[1487]: docker0: Link UP Sep 11 00:28:07.244881 dockerd[1817]: time="2025-09-11T00:28:07.244814428Z" level=info msg="Loading containers: done." Sep 11 00:28:07.264684 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3643274895-merged.mount: Deactivated successfully. 
Sep 11 00:28:07.266295 dockerd[1817]: time="2025-09-11T00:28:07.266242786Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:28:07.266369 dockerd[1817]: time="2025-09-11T00:28:07.266342563Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:28:07.266502 dockerd[1817]: time="2025-09-11T00:28:07.266477026Z" level=info msg="Initializing buildkit" Sep 11 00:28:07.299673 dockerd[1817]: time="2025-09-11T00:28:07.299613707Z" level=info msg="Completed buildkit initialization" Sep 11 00:28:07.304059 dockerd[1817]: time="2025-09-11T00:28:07.304026551Z" level=info msg="Daemon has completed initialization" Sep 11 00:28:07.304228 dockerd[1817]: time="2025-09-11T00:28:07.304122721Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:28:07.304299 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:28:08.324868 containerd[1591]: time="2025-09-11T00:28:08.324806343Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 11 00:28:09.154054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1344711429.mount: Deactivated successfully. 
Sep 11 00:28:10.533755 containerd[1591]: time="2025-09-11T00:28:10.533692794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:10.534537 containerd[1591]: time="2025-09-11T00:28:10.534459983Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 11 00:28:10.535900 containerd[1591]: time="2025-09-11T00:28:10.535852434Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:10.538319 containerd[1591]: time="2025-09-11T00:28:10.538278993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:10.539339 containerd[1591]: time="2025-09-11T00:28:10.539307462Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.214404888s" Sep 11 00:28:10.539423 containerd[1591]: time="2025-09-11T00:28:10.539343640Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 11 00:28:10.540070 containerd[1591]: time="2025-09-11T00:28:10.539994931Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 11 00:28:12.066138 containerd[1591]: time="2025-09-11T00:28:12.066067186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:12.066790 containerd[1591]: time="2025-09-11T00:28:12.066747391Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 11 00:28:12.068017 containerd[1591]: time="2025-09-11T00:28:12.067962860Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:12.070472 containerd[1591]: time="2025-09-11T00:28:12.070439193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:12.071340 containerd[1591]: time="2025-09-11T00:28:12.071292624Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.531218875s" Sep 11 00:28:12.071340 containerd[1591]: time="2025-09-11T00:28:12.071327880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 11 00:28:12.071821 containerd[1591]: time="2025-09-11T00:28:12.071798863Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 11 00:28:12.562818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:28:12.564722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:28:12.858167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:28:12.863838 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:28:13.329092 kubelet[2097]: E0911 00:28:13.328896 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:28:13.335898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:28:13.336124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:28:13.336570 systemd[1]: kubelet.service: Consumed 311ms CPU time, 110.7M memory peak. Sep 11 00:28:14.266650 containerd[1591]: time="2025-09-11T00:28:14.266555050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:14.267735 containerd[1591]: time="2025-09-11T00:28:14.267367002Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 11 00:28:14.268674 containerd[1591]: time="2025-09-11T00:28:14.268615413Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:14.271698 containerd[1591]: time="2025-09-11T00:28:14.271619887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:14.272625 containerd[1591]: time="2025-09-11T00:28:14.272580758Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id 
\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.20075223s" Sep 11 00:28:14.272625 containerd[1591]: time="2025-09-11T00:28:14.272618539Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 11 00:28:14.273211 containerd[1591]: time="2025-09-11T00:28:14.273160605Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 11 00:28:15.222099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734152116.mount: Deactivated successfully. Sep 11 00:28:15.578913 containerd[1591]: time="2025-09-11T00:28:15.578733384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:15.579938 containerd[1591]: time="2025-09-11T00:28:15.579866338Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 11 00:28:15.581384 containerd[1591]: time="2025-09-11T00:28:15.581338909Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:15.583237 containerd[1591]: time="2025-09-11T00:28:15.583197444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:15.583826 containerd[1591]: time="2025-09-11T00:28:15.583762153Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.31057109s" Sep 11 00:28:15.583826 containerd[1591]: time="2025-09-11T00:28:15.583822857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 11 00:28:15.584373 containerd[1591]: time="2025-09-11T00:28:15.584336069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 00:28:16.043894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189231361.mount: Deactivated successfully. Sep 11 00:28:17.728452 containerd[1591]: time="2025-09-11T00:28:17.728361105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:17.729390 containerd[1591]: time="2025-09-11T00:28:17.729242879Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 11 00:28:17.730489 containerd[1591]: time="2025-09-11T00:28:17.730433672Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:17.733170 containerd[1591]: time="2025-09-11T00:28:17.733087007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:17.734570 containerd[1591]: time="2025-09-11T00:28:17.734535993Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.150168275s" Sep 11 00:28:17.734642 containerd[1591]: time="2025-09-11T00:28:17.734571239Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 11 00:28:17.735181 containerd[1591]: time="2025-09-11T00:28:17.735093609Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:28:18.233980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013251919.mount: Deactivated successfully. Sep 11 00:28:18.239456 containerd[1591]: time="2025-09-11T00:28:18.239417787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:28:18.242682 containerd[1591]: time="2025-09-11T00:28:18.240464629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:28:18.242682 containerd[1591]: time="2025-09-11T00:28:18.242112008Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:28:18.244308 containerd[1591]: time="2025-09-11T00:28:18.244256760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:28:18.245045 containerd[1591]: time="2025-09-11T00:28:18.245007898Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 509.885205ms" Sep 11 00:28:18.245045 containerd[1591]: time="2025-09-11T00:28:18.245041040Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:28:18.245658 containerd[1591]: time="2025-09-11T00:28:18.245624945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 11 00:28:18.934876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506273508.mount: Deactivated successfully. Sep 11 00:28:20.903733 containerd[1591]: time="2025-09-11T00:28:20.903633420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:20.904625 containerd[1591]: time="2025-09-11T00:28:20.904552093Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 11 00:28:20.905858 containerd[1591]: time="2025-09-11T00:28:20.905788020Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:20.908938 containerd[1591]: time="2025-09-11T00:28:20.908894555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:20.910159 containerd[1591]: time="2025-09-11T00:28:20.910101969Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 2.664442699s" Sep 11 00:28:20.910159 containerd[1591]: time="2025-09-11T00:28:20.910154928Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 11 00:28:23.239913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:28:23.240091 systemd[1]: kubelet.service: Consumed 311ms CPU time, 110.7M memory peak. Sep 11 00:28:23.242384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:28:23.266483 systemd[1]: Reload requested from client PID 2257 ('systemctl') (unit session-7.scope)... Sep 11 00:28:23.266500 systemd[1]: Reloading... Sep 11 00:28:23.404709 zram_generator::config[2332]: No configuration found. Sep 11 00:28:23.654525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:28:23.773296 systemd[1]: Reloading finished in 506 ms. Sep 11 00:28:23.838331 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:28:23.838435 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:28:23.838764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:28:23.838829 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.3M memory peak. Sep 11 00:28:23.840483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:28:24.025338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:28:24.045018 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:28:24.138805 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:28:24.138805 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:28:24.138805 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:28:24.139274 kubelet[2347]: I0911 00:28:24.138856 2347 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:28:24.425759 kubelet[2347]: I0911 00:28:24.425635 2347 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 00:28:24.425759 kubelet[2347]: I0911 00:28:24.425678 2347 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:28:24.426288 kubelet[2347]: I0911 00:28:24.426233 2347 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 00:28:24.451929 kubelet[2347]: E0911 00:28:24.451882 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:24.453026 kubelet[2347]: I0911 00:28:24.452998 
2347 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:28:24.462687 kubelet[2347]: I0911 00:28:24.460626 2347 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:28:24.467298 kubelet[2347]: I0911 00:28:24.467266 2347 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 00:28:24.468768 kubelet[2347]: I0911 00:28:24.468717 2347 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:28:24.468939 kubelet[2347]: I0911 00:28:24.468756 2347 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CP
UManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:28:24.469056 kubelet[2347]: I0911 00:28:24.468956 2347 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:28:24.469056 kubelet[2347]: I0911 00:28:24.468968 2347 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 00:28:24.469154 kubelet[2347]: I0911 00:28:24.469138 2347 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:28:24.482177 kubelet[2347]: I0911 00:28:24.482135 2347 kubelet.go:446] "Attempting to sync node with API server" Sep 11 00:28:24.482222 kubelet[2347]: I0911 00:28:24.482212 2347 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:28:24.482283 kubelet[2347]: I0911 00:28:24.482258 2347 kubelet.go:352] "Adding apiserver pod source" Sep 11 00:28:24.482283 kubelet[2347]: I0911 00:28:24.482281 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:28:24.489413 kubelet[2347]: I0911 00:28:24.488579 2347 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:28:24.489413 kubelet[2347]: I0911 00:28:24.489208 2347 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:28:24.490081 kubelet[2347]: W0911 00:28:24.489634 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 11 00:28:24.490081 kubelet[2347]: E0911 00:28:24.489728 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:24.490972 kubelet[2347]: W0911 00:28:24.490215 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:28:24.490972 kubelet[2347]: W0911 00:28:24.490228 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 11 00:28:24.490972 kubelet[2347]: E0911 00:28:24.490264 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:24.492698 kubelet[2347]: I0911 00:28:24.492650 2347 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:28:24.492698 kubelet[2347]: I0911 00:28:24.492715 2347 server.go:1287] "Started kubelet" Sep 11 00:28:24.493009 kubelet[2347]: I0911 00:28:24.492943 2347 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:28:24.496387 kubelet[2347]: I0911 00:28:24.496367 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:28:24.496562 kubelet[2347]: I0911 00:28:24.496533 2347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:28:24.497420 kubelet[2347]: I0911 00:28:24.497393 2347 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:28:24.497526 
kubelet[2347]: E0911 00:28:24.497503 2347 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:28:24.497810 kubelet[2347]: I0911 00:28:24.497786 2347 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:28:24.497862 kubelet[2347]: I0911 00:28:24.497848 2347 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:28:24.499516 kubelet[2347]: I0911 00:28:24.499426 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:28:24.500113 kubelet[2347]: I0911 00:28:24.499486 2347 server.go:479] "Adding debug handlers to kubelet server" Sep 11 00:28:24.501043 kubelet[2347]: E0911 00:28:24.500998 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Sep 11 00:28:24.503124 kubelet[2347]: I0911 00:28:24.503094 2347 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:28:24.506384 kubelet[2347]: W0911 00:28:24.506113 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 11 00:28:24.506384 kubelet[2347]: E0911 00:28:24.506157 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:24.506656 kubelet[2347]: E0911 00:28:24.504194 2347 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186412df37184f4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:28:24.492683087 +0000 UTC m=+0.443775600,LastTimestamp:2025-09-11 00:28:24.492683087 +0000 UTC m=+0.443775600,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:28:24.507196 kubelet[2347]: I0911 00:28:24.507175 2347 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:28:24.507196 kubelet[2347]: I0911 00:28:24.507192 2347 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:28:24.507306 kubelet[2347]: I0911 00:28:24.507284 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:28:24.507427 kubelet[2347]: E0911 00:28:24.507410 2347 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:28:24.519173 kubelet[2347]: I0911 00:28:24.519152 2347 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:28:24.519173 kubelet[2347]: I0911 00:28:24.519166 2347 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:28:24.519296 kubelet[2347]: I0911 00:28:24.519184 2347 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:28:24.520550 kubelet[2347]: I0911 00:28:24.520495 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:28:24.523165 kubelet[2347]: I0911 00:28:24.523145 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 11 00:28:24.523234 kubelet[2347]: I0911 00:28:24.523174 2347 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 00:28:24.523234 kubelet[2347]: I0911 00:28:24.523201 2347 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 11 00:28:24.523234 kubelet[2347]: I0911 00:28:24.523211 2347 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 00:28:24.523310 kubelet[2347]: E0911 00:28:24.523262 2347 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:28:24.526179 kubelet[2347]: W0911 00:28:24.526098 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 11 00:28:24.526179 kubelet[2347]: E0911 00:28:24.526150 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:24.598003 kubelet[2347]: E0911 00:28:24.597934 2347 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:28:24.624276 kubelet[2347]: E0911 00:28:24.624216 2347 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:28:24.698575 kubelet[2347]: E0911 00:28:24.698448 2347 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:28:24.702190 kubelet[2347]: E0911 00:28:24.702133 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Sep 11 00:28:24.799408 kubelet[2347]: E0911 00:28:24.799360 2347 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Sep 11 00:28:24.824588 kubelet[2347]: E0911 00:28:24.824539 2347 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:28:24.899864 kubelet[2347]: E0911 00:28:24.899814 2347 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:28:24.936835 kubelet[2347]: I0911 00:28:24.936760 2347 policy_none.go:49] "None policy: Start" Sep 11 00:28:24.936835 kubelet[2347]: I0911 00:28:24.936807 2347 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:28:24.936835 kubelet[2347]: I0911 00:28:24.936821 2347 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:28:24.944523 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:28:24.968950 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:28:24.972514 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:28:24.993872 kubelet[2347]: I0911 00:28:24.993727 2347 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:28:24.994044 kubelet[2347]: I0911 00:28:24.994003 2347 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:28:24.994146 kubelet[2347]: I0911 00:28:24.994034 2347 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:28:24.994398 kubelet[2347]: I0911 00:28:24.994362 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:28:24.995324 kubelet[2347]: E0911 00:28:24.995299 2347 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 11 00:28:24.995421 kubelet[2347]: E0911 00:28:24.995350 2347 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:28:25.096737 kubelet[2347]: I0911 00:28:25.096640 2347 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:28:25.097294 kubelet[2347]: E0911 00:28:25.097255 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 11 00:28:25.103014 kubelet[2347]: E0911 00:28:25.102969 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Sep 11 00:28:25.237051 systemd[1]: Created slice kubepods-burstable-podf2c828fd9112fcad6ef657ad5c824fc3.slice - libcontainer container kubepods-burstable-podf2c828fd9112fcad6ef657ad5c824fc3.slice. Sep 11 00:28:25.247712 kubelet[2347]: E0911 00:28:25.247678 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:28:25.250456 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 11 00:28:25.266437 kubelet[2347]: E0911 00:28:25.266386 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:28:25.270013 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
Sep 11 00:28:25.272706 kubelet[2347]: E0911 00:28:25.272650 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:28:25.299153 kubelet[2347]: I0911 00:28:25.299105 2347 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:28:25.299707 kubelet[2347]: E0911 00:28:25.299636 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 11 00:28:25.301895 kubelet[2347]: I0911 00:28:25.301821 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2c828fd9112fcad6ef657ad5c824fc3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f2c828fd9112fcad6ef657ad5c824fc3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:25.301895 kubelet[2347]: I0911 00:28:25.301866 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2c828fd9112fcad6ef657ad5c824fc3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f2c828fd9112fcad6ef657ad5c824fc3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:25.302008 kubelet[2347]: I0911 00:28:25.301895 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:25.302008 kubelet[2347]: I0911 00:28:25.301923 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:25.302008 kubelet[2347]: I0911 00:28:25.301950 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:25.302008 kubelet[2347]: I0911 00:28:25.301969 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:25.302008 kubelet[2347]: I0911 00:28:25.301999 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:25.302130 kubelet[2347]: I0911 00:28:25.302018 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2c828fd9112fcad6ef657ad5c824fc3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f2c828fd9112fcad6ef657ad5c824fc3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:25.302130 kubelet[2347]: I0911 00:28:25.302037 2347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:25.549095 containerd[1591]: time="2025-09-11T00:28:25.548960407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f2c828fd9112fcad6ef657ad5c824fc3,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:25.567626 containerd[1591]: time="2025-09-11T00:28:25.567579929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:25.574327 containerd[1591]: time="2025-09-11T00:28:25.574293497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:25.701682 kubelet[2347]: I0911 00:28:25.701622 2347 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:28:25.702074 kubelet[2347]: E0911 00:28:25.702034 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 11 00:28:25.904605 kubelet[2347]: E0911 00:28:25.904446 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s" Sep 11 00:28:25.909050 kubelet[2347]: W0911 00:28:25.908980 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 11 00:28:25.909050 
kubelet[2347]: E0911 00:28:25.909039 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:25.911467 kubelet[2347]: W0911 00:28:25.911418 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 11 00:28:25.911505 kubelet[2347]: E0911 00:28:25.911460 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:26.002026 kubelet[2347]: W0911 00:28:26.001967 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 11 00:28:26.002026 kubelet[2347]: E0911 00:28:26.002021 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:26.045128 kubelet[2347]: W0911 00:28:26.045041 2347 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 11 00:28:26.045128 kubelet[2347]: E0911 00:28:26.045127 2347 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:26.064694 containerd[1591]: time="2025-09-11T00:28:26.064153374Z" level=info msg="connecting to shim 7afdecc5b41d8d540e548570323e53f12a81565ff9e2ed6642f6ce7581b6764b" address="unix:///run/containerd/s/1dfe517a0bb50696521cd51ab3d1b81c30f95d232b08f661f58821c0a00d4f5d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:26.064694 containerd[1591]: time="2025-09-11T00:28:26.064291393Z" level=info msg="connecting to shim 4c0569750c10ef0d80b5714a46fa7d5334738293d0c32c64441145dc1fd7a33d" address="unix:///run/containerd/s/01cf25215fe68ed7ef97bbb29116aaded2e5360b9913b196d1d5e766d28e1eb2" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:26.166681 containerd[1591]: time="2025-09-11T00:28:26.166517667Z" level=info msg="connecting to shim 12d36db1d9345ab0b3468ade9b19cdb197a63c3e1298b65b9f670a0fcea321f0" address="unix:///run/containerd/s/f59fddc38f477e3a431efa2444bbf43c6666d6655475e9938f6af81d038cb1ff" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:26.188838 systemd[1]: Started cri-containerd-4c0569750c10ef0d80b5714a46fa7d5334738293d0c32c64441145dc1fd7a33d.scope - libcontainer container 4c0569750c10ef0d80b5714a46fa7d5334738293d0c32c64441145dc1fd7a33d. Sep 11 00:28:26.194241 systemd[1]: Started cri-containerd-7afdecc5b41d8d540e548570323e53f12a81565ff9e2ed6642f6ce7581b6764b.scope - libcontainer container 7afdecc5b41d8d540e548570323e53f12a81565ff9e2ed6642f6ce7581b6764b. 
Sep 11 00:28:26.213805 systemd[1]: Started cri-containerd-12d36db1d9345ab0b3468ade9b19cdb197a63c3e1298b65b9f670a0fcea321f0.scope - libcontainer container 12d36db1d9345ab0b3468ade9b19cdb197a63c3e1298b65b9f670a0fcea321f0. Sep 11 00:28:26.292030 containerd[1591]: time="2025-09-11T00:28:26.291951024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c0569750c10ef0d80b5714a46fa7d5334738293d0c32c64441145dc1fd7a33d\"" Sep 11 00:28:26.295018 containerd[1591]: time="2025-09-11T00:28:26.294855711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f2c828fd9112fcad6ef657ad5c824fc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7afdecc5b41d8d540e548570323e53f12a81565ff9e2ed6642f6ce7581b6764b\"" Sep 11 00:28:26.297945 containerd[1591]: time="2025-09-11T00:28:26.297854563Z" level=info msg="CreateContainer within sandbox \"4c0569750c10ef0d80b5714a46fa7d5334738293d0c32c64441145dc1fd7a33d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:28:26.299293 containerd[1591]: time="2025-09-11T00:28:26.299076765Z" level=info msg="CreateContainer within sandbox \"7afdecc5b41d8d540e548570323e53f12a81565ff9e2ed6642f6ce7581b6764b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:28:26.300152 containerd[1591]: time="2025-09-11T00:28:26.300046123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"12d36db1d9345ab0b3468ade9b19cdb197a63c3e1298b65b9f670a0fcea321f0\"" Sep 11 00:28:26.302477 containerd[1591]: time="2025-09-11T00:28:26.302382493Z" level=info msg="CreateContainer within sandbox \"12d36db1d9345ab0b3468ade9b19cdb197a63c3e1298b65b9f670a0fcea321f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 
00:28:26.312483 containerd[1591]: time="2025-09-11T00:28:26.312456041Z" level=info msg="Container f2efa13c5c235b93e401a0f1f36ad540656a3435e120c3448d2d24751ead1770: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:26.315087 containerd[1591]: time="2025-09-11T00:28:26.315046398Z" level=info msg="Container 4be71ede94d40570c4b3874d14fe9a10096d508275ab6168d91f090890012e46: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:26.320575 containerd[1591]: time="2025-09-11T00:28:26.320526824Z" level=info msg="Container ffcd4c0ee8e267dd8f87403f8502cc4cef10e070e45cdf2946203679f5d44891: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:26.323547 containerd[1591]: time="2025-09-11T00:28:26.323503625Z" level=info msg="CreateContainer within sandbox \"4c0569750c10ef0d80b5714a46fa7d5334738293d0c32c64441145dc1fd7a33d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f2efa13c5c235b93e401a0f1f36ad540656a3435e120c3448d2d24751ead1770\"" Sep 11 00:28:26.324227 containerd[1591]: time="2025-09-11T00:28:26.324173051Z" level=info msg="StartContainer for \"f2efa13c5c235b93e401a0f1f36ad540656a3435e120c3448d2d24751ead1770\"" Sep 11 00:28:26.325847 containerd[1591]: time="2025-09-11T00:28:26.325819047Z" level=info msg="connecting to shim f2efa13c5c235b93e401a0f1f36ad540656a3435e120c3448d2d24751ead1770" address="unix:///run/containerd/s/01cf25215fe68ed7ef97bbb29116aaded2e5360b9913b196d1d5e766d28e1eb2" protocol=ttrpc version=3 Sep 11 00:28:26.329041 containerd[1591]: time="2025-09-11T00:28:26.329004149Z" level=info msg="CreateContainer within sandbox \"7afdecc5b41d8d540e548570323e53f12a81565ff9e2ed6642f6ce7581b6764b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4be71ede94d40570c4b3874d14fe9a10096d508275ab6168d91f090890012e46\"" Sep 11 00:28:26.329393 containerd[1591]: time="2025-09-11T00:28:26.329362751Z" level=info msg="StartContainer for 
\"4be71ede94d40570c4b3874d14fe9a10096d508275ab6168d91f090890012e46\"" Sep 11 00:28:26.330413 containerd[1591]: time="2025-09-11T00:28:26.330378065Z" level=info msg="connecting to shim 4be71ede94d40570c4b3874d14fe9a10096d508275ab6168d91f090890012e46" address="unix:///run/containerd/s/1dfe517a0bb50696521cd51ab3d1b81c30f95d232b08f661f58821c0a00d4f5d" protocol=ttrpc version=3 Sep 11 00:28:26.332518 containerd[1591]: time="2025-09-11T00:28:26.332455170Z" level=info msg="CreateContainer within sandbox \"12d36db1d9345ab0b3468ade9b19cdb197a63c3e1298b65b9f670a0fcea321f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ffcd4c0ee8e267dd8f87403f8502cc4cef10e070e45cdf2946203679f5d44891\"" Sep 11 00:28:26.332993 containerd[1591]: time="2025-09-11T00:28:26.332962180Z" level=info msg="StartContainer for \"ffcd4c0ee8e267dd8f87403f8502cc4cef10e070e45cdf2946203679f5d44891\"" Sep 11 00:28:26.334180 containerd[1591]: time="2025-09-11T00:28:26.334130992Z" level=info msg="connecting to shim ffcd4c0ee8e267dd8f87403f8502cc4cef10e070e45cdf2946203679f5d44891" address="unix:///run/containerd/s/f59fddc38f477e3a431efa2444bbf43c6666d6655475e9938f6af81d038cb1ff" protocol=ttrpc version=3 Sep 11 00:28:26.364840 systemd[1]: Started cri-containerd-4be71ede94d40570c4b3874d14fe9a10096d508275ab6168d91f090890012e46.scope - libcontainer container 4be71ede94d40570c4b3874d14fe9a10096d508275ab6168d91f090890012e46. Sep 11 00:28:26.366328 systemd[1]: Started cri-containerd-f2efa13c5c235b93e401a0f1f36ad540656a3435e120c3448d2d24751ead1770.scope - libcontainer container f2efa13c5c235b93e401a0f1f36ad540656a3435e120c3448d2d24751ead1770. Sep 11 00:28:26.371337 systemd[1]: Started cri-containerd-ffcd4c0ee8e267dd8f87403f8502cc4cef10e070e45cdf2946203679f5d44891.scope - libcontainer container ffcd4c0ee8e267dd8f87403f8502cc4cef10e070e45cdf2946203679f5d44891. 
Sep 11 00:28:26.437909 containerd[1591]: time="2025-09-11T00:28:26.437238588Z" level=info msg="StartContainer for \"f2efa13c5c235b93e401a0f1f36ad540656a3435e120c3448d2d24751ead1770\" returns successfully" Sep 11 00:28:26.437909 containerd[1591]: time="2025-09-11T00:28:26.437731803Z" level=info msg="StartContainer for \"4be71ede94d40570c4b3874d14fe9a10096d508275ab6168d91f090890012e46\" returns successfully" Sep 11 00:28:26.491579 kubelet[2347]: E0911 00:28:26.491535 2347 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:28:26.505538 kubelet[2347]: I0911 00:28:26.505004 2347 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:28:26.505538 kubelet[2347]: E0911 00:28:26.505384 2347 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 11 00:28:26.541842 kubelet[2347]: E0911 00:28:26.541756 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:28:26.544683 kubelet[2347]: E0911 00:28:26.543879 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:28:26.553130 containerd[1591]: time="2025-09-11T00:28:26.553101742Z" level=info msg="StartContainer for \"ffcd4c0ee8e267dd8f87403f8502cc4cef10e070e45cdf2946203679f5d44891\" returns successfully" Sep 11 00:28:27.550573 kubelet[2347]: E0911 00:28:27.550515 2347 kubelet.go:3190] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:28:27.551181 kubelet[2347]: E0911 00:28:27.551156 2347 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:28:27.872287 kubelet[2347]: E0911 00:28:27.872150 2347 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:28:28.107132 kubelet[2347]: I0911 00:28:28.107081 2347 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:28:28.117798 kubelet[2347]: I0911 00:28:28.117739 2347 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:28:28.117798 kubelet[2347]: E0911 00:28:28.117806 2347 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 00:28:28.198505 kubelet[2347]: I0911 00:28:28.198365 2347 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:28.203435 kubelet[2347]: E0911 00:28:28.203402 2347 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:28.203435 kubelet[2347]: I0911 00:28:28.203436 2347 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:28.204851 kubelet[2347]: E0911 00:28:28.204829 2347 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:28.204851 kubelet[2347]: I0911 00:28:28.204845 2347 kubelet.go:3194] "Creating a mirror pod for 
static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:28.206081 kubelet[2347]: E0911 00:28:28.206048 2347 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:28.486074 kubelet[2347]: I0911 00:28:28.485928 2347 apiserver.go:52] "Watching apiserver" Sep 11 00:28:28.498347 kubelet[2347]: I0911 00:28:28.498303 2347 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:28:28.549346 kubelet[2347]: I0911 00:28:28.549312 2347 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:28.551353 kubelet[2347]: E0911 00:28:28.551307 2347 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:29.556687 kubelet[2347]: I0911 00:28:29.556628 2347 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:30.137331 systemd[1]: Reload requested from client PID 2623 ('systemctl') (unit session-7.scope)... Sep 11 00:28:30.137346 systemd[1]: Reloading... Sep 11 00:28:30.225713 zram_generator::config[2669]: No configuration found. Sep 11 00:28:30.355521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:28:30.504950 systemd[1]: Reloading finished in 367 ms. Sep 11 00:28:30.534593 kubelet[2347]: I0911 00:28:30.534525 2347 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:28:30.534835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 11 00:28:30.561142 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:28:30.561442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:28:30.561492 systemd[1]: kubelet.service: Consumed 977ms CPU time, 132.8M memory peak. Sep 11 00:28:30.564601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:28:30.796748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:28:30.809238 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:28:30.852264 kubelet[2711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:28:30.852264 kubelet[2711]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:28:30.852264 kubelet[2711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 11 00:28:30.852706 kubelet[2711]: I0911 00:28:30.852308 2711 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:28:30.861094 kubelet[2711]: I0911 00:28:30.861047 2711 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 00:28:30.861094 kubelet[2711]: I0911 00:28:30.861073 2711 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:28:30.861305 kubelet[2711]: I0911 00:28:30.861288 2711 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 00:28:30.862540 kubelet[2711]: I0911 00:28:30.862510 2711 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 11 00:28:30.864797 kubelet[2711]: I0911 00:28:30.864736 2711 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:28:30.869073 kubelet[2711]: I0911 00:28:30.869043 2711 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:28:30.874622 kubelet[2711]: I0911 00:28:30.874429 2711 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:28:30.875031 kubelet[2711]: I0911 00:28:30.874970 2711 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:28:30.875257 kubelet[2711]: I0911 00:28:30.875019 2711 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:28:30.875353 kubelet[2711]: I0911 00:28:30.875270 2711 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 11 00:28:30.875353 kubelet[2711]: I0911 00:28:30.875285 2711 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 00:28:30.875399 kubelet[2711]: I0911 00:28:30.875359 2711 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:28:30.875615 kubelet[2711]: I0911 00:28:30.875582 2711 kubelet.go:446] "Attempting to sync node with API server" Sep 11 00:28:30.875655 kubelet[2711]: I0911 00:28:30.875618 2711 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:28:30.875655 kubelet[2711]: I0911 00:28:30.875649 2711 kubelet.go:352] "Adding apiserver pod source" Sep 11 00:28:30.875720 kubelet[2711]: I0911 00:28:30.875705 2711 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:28:30.878972 kubelet[2711]: I0911 00:28:30.876853 2711 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:28:30.878972 kubelet[2711]: I0911 00:28:30.877393 2711 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:28:30.878972 kubelet[2711]: I0911 00:28:30.878016 2711 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:28:30.878972 kubelet[2711]: I0911 00:28:30.878057 2711 server.go:1287] "Started kubelet" Sep 11 00:28:30.878972 kubelet[2711]: I0911 00:28:30.878263 2711 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:28:30.878972 kubelet[2711]: I0911 00:28:30.878368 2711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:28:30.878972 kubelet[2711]: I0911 00:28:30.878723 2711 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:28:30.881468 kubelet[2711]: I0911 00:28:30.881418 2711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:28:30.882932 kubelet[2711]: I0911 00:28:30.882908 2711 
server.go:479] "Adding debug handlers to kubelet server" Sep 11 00:28:30.889702 kubelet[2711]: I0911 00:28:30.889478 2711 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:28:30.889702 kubelet[2711]: I0911 00:28:30.889690 2711 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:28:30.890338 kubelet[2711]: E0911 00:28:30.890307 2711 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:28:30.890505 kubelet[2711]: I0911 00:28:30.890484 2711 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:28:30.890505 kubelet[2711]: I0911 00:28:30.890505 2711 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:28:30.894918 kubelet[2711]: E0911 00:28:30.894881 2711 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:28:30.895131 kubelet[2711]: I0911 00:28:30.895082 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:28:30.895365 kubelet[2711]: I0911 00:28:30.895334 2711 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:28:30.895477 kubelet[2711]: I0911 00:28:30.895447 2711 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:28:30.896609 kubelet[2711]: I0911 00:28:30.896592 2711 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:28:30.896776 kubelet[2711]: I0911 00:28:30.896765 2711 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 00:28:30.896877 kubelet[2711]: I0911 00:28:30.896861 2711 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 00:28:30.896935 kubelet[2711]: I0911 00:28:30.896925 2711 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 00:28:30.897035 kubelet[2711]: E0911 00:28:30.897018 2711 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:28:30.899482 kubelet[2711]: I0911 00:28:30.899454 2711 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:28:30.935245 kubelet[2711]: I0911 00:28:30.935214 2711 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:28:30.935245 kubelet[2711]: I0911 00:28:30.935232 2711 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:28:30.935245 kubelet[2711]: I0911 00:28:30.935252 2711 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:28:30.935444 kubelet[2711]: I0911 00:28:30.935436 2711 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:28:30.935469 kubelet[2711]: I0911 00:28:30.935446 2711 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:28:30.935469 kubelet[2711]: I0911 00:28:30.935468 2711 policy_none.go:49] "None policy: Start" Sep 11 00:28:30.935510 kubelet[2711]: I0911 00:28:30.935480 2711 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:28:30.935510 kubelet[2711]: I0911 00:28:30.935492 2711 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:28:30.935600 kubelet[2711]: I0911 00:28:30.935587 2711 state_mem.go:75] "Updated machine memory state" Sep 11 00:28:30.939816 kubelet[2711]: I0911 00:28:30.939795 2711 manager.go:519] "Failed to 
read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:28:30.940300 kubelet[2711]: I0911 00:28:30.940251 2711 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:28:30.940422 kubelet[2711]: I0911 00:28:30.940267 2711 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:28:30.940706 kubelet[2711]: I0911 00:28:30.940617 2711 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:28:30.941375 kubelet[2711]: E0911 00:28:30.941356 2711 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:28:30.998905 kubelet[2711]: I0911 00:28:30.998858 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:30.999394 kubelet[2711]: I0911 00:28:30.999355 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:30.999394 kubelet[2711]: I0911 00:28:30.999377 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:31.005976 kubelet[2711]: E0911 00:28:31.005939 2711 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:31.042479 kubelet[2711]: I0911 00:28:31.042459 2711 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:28:31.051649 kubelet[2711]: I0911 00:28:31.051528 2711 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 11 00:28:31.051649 kubelet[2711]: I0911 00:28:31.051632 2711 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:28:31.192342 kubelet[2711]: I0911 00:28:31.192290 2711 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:31.192342 kubelet[2711]: I0911 00:28:31.192330 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2c828fd9112fcad6ef657ad5c824fc3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f2c828fd9112fcad6ef657ad5c824fc3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:31.192342 kubelet[2711]: I0911 00:28:31.192353 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2c828fd9112fcad6ef657ad5c824fc3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f2c828fd9112fcad6ef657ad5c824fc3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:31.192342 kubelet[2711]: I0911 00:28:31.192373 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:31.192648 kubelet[2711]: I0911 00:28:31.192389 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:31.192648 kubelet[2711]: I0911 00:28:31.192450 2711 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2c828fd9112fcad6ef657ad5c824fc3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f2c828fd9112fcad6ef657ad5c824fc3\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:31.192648 kubelet[2711]: I0911 00:28:31.192486 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:31.192648 kubelet[2711]: I0911 00:28:31.192511 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:31.192648 kubelet[2711]: I0911 00:28:31.192527 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:31.876649 kubelet[2711]: I0911 00:28:31.876604 2711 apiserver.go:52] "Watching apiserver" Sep 11 00:28:31.891259 kubelet[2711]: I0911 00:28:31.891223 2711 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:28:31.916570 kubelet[2711]: I0911 00:28:31.915917 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:31.916570 kubelet[2711]: 
I0911 00:28:31.916040 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:31.916570 kubelet[2711]: I0911 00:28:31.916316 2711 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:31.941023 kubelet[2711]: E0911 00:28:31.940902 2711 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:28:31.942580 kubelet[2711]: E0911 00:28:31.941920 2711 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 00:28:31.943693 kubelet[2711]: E0911 00:28:31.942807 2711 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 00:28:31.976616 kubelet[2711]: I0911 00:28:31.976545 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.976518946 podStartE2EDuration="976.518946ms" podCreationTimestamp="2025-09-11 00:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:31.968286351 +0000 UTC m=+1.154637245" watchObservedRunningTime="2025-09-11 00:28:31.976518946 +0000 UTC m=+1.162869840" Sep 11 00:28:31.977354 kubelet[2711]: I0911 00:28:31.977287 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.977276947 podStartE2EDuration="977.276947ms" podCreationTimestamp="2025-09-11 00:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:31.976819581 +0000 UTC m=+1.163170475" 
watchObservedRunningTime="2025-09-11 00:28:31.977276947 +0000 UTC m=+1.163627841" Sep 11 00:28:32.009584 kubelet[2711]: I0911 00:28:32.009501 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.009477241 podStartE2EDuration="3.009477241s" podCreationTimestamp="2025-09-11 00:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:31.99244281 +0000 UTC m=+1.178793714" watchObservedRunningTime="2025-09-11 00:28:32.009477241 +0000 UTC m=+1.195828135" Sep 11 00:28:36.481909 kubelet[2711]: I0911 00:28:36.481873 2711 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:28:36.482433 containerd[1591]: time="2025-09-11T00:28:36.482296219Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:28:36.482704 kubelet[2711]: I0911 00:28:36.482491 2711 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:28:37.414425 systemd[1]: Created slice kubepods-besteffort-pod3e671b04_7d75_4e65_a713_fad826df1d45.slice - libcontainer container kubepods-besteffort-pod3e671b04_7d75_4e65_a713_fad826df1d45.slice. 
Sep 11 00:28:37.429189 kubelet[2711]: I0911 00:28:37.429119 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e671b04-7d75-4e65-a713-fad826df1d45-xtables-lock\") pod \"kube-proxy-fdx8j\" (UID: \"3e671b04-7d75-4e65-a713-fad826df1d45\") " pod="kube-system/kube-proxy-fdx8j" Sep 11 00:28:37.429189 kubelet[2711]: I0911 00:28:37.429154 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e671b04-7d75-4e65-a713-fad826df1d45-lib-modules\") pod \"kube-proxy-fdx8j\" (UID: \"3e671b04-7d75-4e65-a713-fad826df1d45\") " pod="kube-system/kube-proxy-fdx8j" Sep 11 00:28:37.429189 kubelet[2711]: I0911 00:28:37.429172 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7c99\" (UniqueName: \"kubernetes.io/projected/3e671b04-7d75-4e65-a713-fad826df1d45-kube-api-access-m7c99\") pod \"kube-proxy-fdx8j\" (UID: \"3e671b04-7d75-4e65-a713-fad826df1d45\") " pod="kube-system/kube-proxy-fdx8j" Sep 11 00:28:37.429189 kubelet[2711]: I0911 00:28:37.429189 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e671b04-7d75-4e65-a713-fad826df1d45-kube-proxy\") pod \"kube-proxy-fdx8j\" (UID: \"3e671b04-7d75-4e65-a713-fad826df1d45\") " pod="kube-system/kube-proxy-fdx8j" Sep 11 00:28:37.535045 systemd[1]: Created slice kubepods-besteffort-pod6c6a39bc_1a4c_4304_8cd2_93c2d6f706d4.slice - libcontainer container kubepods-besteffort-pod6c6a39bc_1a4c_4304_8cd2_93c2d6f706d4.slice. 
Sep 11 00:28:37.630057 kubelet[2711]: I0911 00:28:37.630011 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6c6a39bc-1a4c-4304-8cd2-93c2d6f706d4-var-lib-calico\") pod \"tigera-operator-755d956888-7sckw\" (UID: \"6c6a39bc-1a4c-4304-8cd2-93c2d6f706d4\") " pod="tigera-operator/tigera-operator-755d956888-7sckw" Sep 11 00:28:37.630057 kubelet[2711]: I0911 00:28:37.630046 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpltq\" (UniqueName: \"kubernetes.io/projected/6c6a39bc-1a4c-4304-8cd2-93c2d6f706d4-kube-api-access-rpltq\") pod \"tigera-operator-755d956888-7sckw\" (UID: \"6c6a39bc-1a4c-4304-8cd2-93c2d6f706d4\") " pod="tigera-operator/tigera-operator-755d956888-7sckw" Sep 11 00:28:37.724439 containerd[1591]: time="2025-09-11T00:28:37.724330674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdx8j,Uid:3e671b04-7d75-4e65-a713-fad826df1d45,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:37.750261 containerd[1591]: time="2025-09-11T00:28:37.749384008Z" level=info msg="connecting to shim e86ee6e3733071187c93eec7bcc242a31fde15559b6321389985de9ecd578f3d" address="unix:///run/containerd/s/62fb0dc704d59724ef5e7c86cc3b737505274e22bef8b8ab0f212d0a23989df2" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:37.782863 systemd[1]: Started cri-containerd-e86ee6e3733071187c93eec7bcc242a31fde15559b6321389985de9ecd578f3d.scope - libcontainer container e86ee6e3733071187c93eec7bcc242a31fde15559b6321389985de9ecd578f3d. 
Sep 11 00:28:37.811460 containerd[1591]: time="2025-09-11T00:28:37.811422402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdx8j,Uid:3e671b04-7d75-4e65-a713-fad826df1d45,Namespace:kube-system,Attempt:0,} returns sandbox id \"e86ee6e3733071187c93eec7bcc242a31fde15559b6321389985de9ecd578f3d\"" Sep 11 00:28:37.814446 containerd[1591]: time="2025-09-11T00:28:37.814402959Z" level=info msg="CreateContainer within sandbox \"e86ee6e3733071187c93eec7bcc242a31fde15559b6321389985de9ecd578f3d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:28:37.826039 containerd[1591]: time="2025-09-11T00:28:37.825997073Z" level=info msg="Container 995866f85eac5dc87ca7b69d8ee76554fbaf0b9d0ee5f2f83563cab24e357c72: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:37.835500 containerd[1591]: time="2025-09-11T00:28:37.835449896Z" level=info msg="CreateContainer within sandbox \"e86ee6e3733071187c93eec7bcc242a31fde15559b6321389985de9ecd578f3d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"995866f85eac5dc87ca7b69d8ee76554fbaf0b9d0ee5f2f83563cab24e357c72\"" Sep 11 00:28:37.836375 containerd[1591]: time="2025-09-11T00:28:37.836257050Z" level=info msg="StartContainer for \"995866f85eac5dc87ca7b69d8ee76554fbaf0b9d0ee5f2f83563cab24e357c72\"" Sep 11 00:28:37.839055 containerd[1591]: time="2025-09-11T00:28:37.839029571Z" level=info msg="connecting to shim 995866f85eac5dc87ca7b69d8ee76554fbaf0b9d0ee5f2f83563cab24e357c72" address="unix:///run/containerd/s/62fb0dc704d59724ef5e7c86cc3b737505274e22bef8b8ab0f212d0a23989df2" protocol=ttrpc version=3 Sep 11 00:28:37.839680 containerd[1591]: time="2025-09-11T00:28:37.839615486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-7sckw,Uid:6c6a39bc-1a4c-4304-8cd2-93c2d6f706d4,Namespace:tigera-operator,Attempt:0,}" Sep 11 00:28:37.860974 systemd[1]: Started cri-containerd-995866f85eac5dc87ca7b69d8ee76554fbaf0b9d0ee5f2f83563cab24e357c72.scope - 
libcontainer container 995866f85eac5dc87ca7b69d8ee76554fbaf0b9d0ee5f2f83563cab24e357c72. Sep 11 00:28:37.864926 containerd[1591]: time="2025-09-11T00:28:37.864810920Z" level=info msg="connecting to shim 5c6f8743f5dec4211bcc41f9a2bfe9cd70aa7d2a0bd2e63ead49cbd34433e5b5" address="unix:///run/containerd/s/701d171d98e87b5a679ef307cf2b390134f383739d3f8622f22564dcd4e50d16" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:37.892819 systemd[1]: Started cri-containerd-5c6f8743f5dec4211bcc41f9a2bfe9cd70aa7d2a0bd2e63ead49cbd34433e5b5.scope - libcontainer container 5c6f8743f5dec4211bcc41f9a2bfe9cd70aa7d2a0bd2e63ead49cbd34433e5b5. Sep 11 00:28:37.918828 containerd[1591]: time="2025-09-11T00:28:37.918763339Z" level=info msg="StartContainer for \"995866f85eac5dc87ca7b69d8ee76554fbaf0b9d0ee5f2f83563cab24e357c72\" returns successfully" Sep 11 00:28:37.934427 kubelet[2711]: I0911 00:28:37.934329 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fdx8j" podStartSLOduration=0.934310309 podStartE2EDuration="934.310309ms" podCreationTimestamp="2025-09-11 00:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:28:37.934136909 +0000 UTC m=+7.120487803" watchObservedRunningTime="2025-09-11 00:28:37.934310309 +0000 UTC m=+7.120661203" Sep 11 00:28:37.954069 containerd[1591]: time="2025-09-11T00:28:37.953952385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-7sckw,Uid:6c6a39bc-1a4c-4304-8cd2-93c2d6f706d4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5c6f8743f5dec4211bcc41f9a2bfe9cd70aa7d2a0bd2e63ead49cbd34433e5b5\"" Sep 11 00:28:37.956738 containerd[1591]: time="2025-09-11T00:28:37.956684229Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 00:28:39.154729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679083928.mount: Deactivated successfully. 
Sep 11 00:28:39.723001 containerd[1591]: time="2025-09-11T00:28:39.722943661Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:39.723875 containerd[1591]: time="2025-09-11T00:28:39.723844922Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 11 00:28:39.725093 containerd[1591]: time="2025-09-11T00:28:39.725041904Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:39.726929 containerd[1591]: time="2025-09-11T00:28:39.726895062Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:39.727588 containerd[1591]: time="2025-09-11T00:28:39.727540858Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.770801494s" Sep 11 00:28:39.727620 containerd[1591]: time="2025-09-11T00:28:39.727587115Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 11 00:28:39.730026 containerd[1591]: time="2025-09-11T00:28:39.729527769Z" level=info msg="CreateContainer within sandbox \"5c6f8743f5dec4211bcc41f9a2bfe9cd70aa7d2a0bd2e63ead49cbd34433e5b5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 00:28:39.737774 containerd[1591]: time="2025-09-11T00:28:39.737730208Z" level=info msg="Container 
7a57ff71df8121fb83d2ded6264d246207a2b62edd355e433bdba246a64819a4: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:39.744135 containerd[1591]: time="2025-09-11T00:28:39.744093306Z" level=info msg="CreateContainer within sandbox \"5c6f8743f5dec4211bcc41f9a2bfe9cd70aa7d2a0bd2e63ead49cbd34433e5b5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7a57ff71df8121fb83d2ded6264d246207a2b62edd355e433bdba246a64819a4\"" Sep 11 00:28:39.744492 containerd[1591]: time="2025-09-11T00:28:39.744461265Z" level=info msg="StartContainer for \"7a57ff71df8121fb83d2ded6264d246207a2b62edd355e433bdba246a64819a4\"" Sep 11 00:28:39.745258 containerd[1591]: time="2025-09-11T00:28:39.745220625Z" level=info msg="connecting to shim 7a57ff71df8121fb83d2ded6264d246207a2b62edd355e433bdba246a64819a4" address="unix:///run/containerd/s/701d171d98e87b5a679ef307cf2b390134f383739d3f8622f22564dcd4e50d16" protocol=ttrpc version=3 Sep 11 00:28:39.801805 systemd[1]: Started cri-containerd-7a57ff71df8121fb83d2ded6264d246207a2b62edd355e433bdba246a64819a4.scope - libcontainer container 7a57ff71df8121fb83d2ded6264d246207a2b62edd355e433bdba246a64819a4. 
Sep 11 00:28:39.837637 containerd[1591]: time="2025-09-11T00:28:39.837574143Z" level=info msg="StartContainer for \"7a57ff71df8121fb83d2ded6264d246207a2b62edd355e433bdba246a64819a4\" returns successfully" Sep 11 00:28:39.940698 kubelet[2711]: I0911 00:28:39.940574 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-7sckw" podStartSLOduration=1.167943652 podStartE2EDuration="2.940554363s" podCreationTimestamp="2025-09-11 00:28:37 +0000 UTC" firstStartedPulling="2025-09-11 00:28:37.955748139 +0000 UTC m=+7.142099033" lastFinishedPulling="2025-09-11 00:28:39.72835886 +0000 UTC m=+8.914709744" observedRunningTime="2025-09-11 00:28:39.940202455 +0000 UTC m=+9.126553349" watchObservedRunningTime="2025-09-11 00:28:39.940554363 +0000 UTC m=+9.126905247" Sep 11 00:28:43.998523 update_engine[1570]: I20250911 00:28:43.998410 1570 update_attempter.cc:509] Updating boot flags... Sep 11 00:28:45.133475 sudo[1797]: pam_unix(sudo:session): session closed for user root Sep 11 00:28:45.136250 sshd[1796]: Connection closed by 10.0.0.1 port 57108 Sep 11 00:28:45.136151 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Sep 11 00:28:45.148063 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:57108.service: Deactivated successfully. Sep 11 00:28:45.150417 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:28:45.150629 systemd[1]: session-7.scope: Consumed 5.702s CPU time, 224.5M memory peak. Sep 11 00:28:45.153852 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:28:45.157882 systemd-logind[1565]: Removed session 7. Sep 11 00:28:47.579482 systemd[1]: Created slice kubepods-besteffort-poddd97914c_2af2_46be_8356_6680e3a6ed51.slice - libcontainer container kubepods-besteffort-poddd97914c_2af2_46be_8356_6680e3a6ed51.slice. 
Sep 11 00:28:47.593101 kubelet[2711]: I0911 00:28:47.593033 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd97914c-2af2-46be-8356-6680e3a6ed51-tigera-ca-bundle\") pod \"calico-typha-695b6b6f8b-rpgfm\" (UID: \"dd97914c-2af2-46be-8356-6680e3a6ed51\") " pod="calico-system/calico-typha-695b6b6f8b-rpgfm" Sep 11 00:28:47.593101 kubelet[2711]: I0911 00:28:47.593088 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd97914c-2af2-46be-8356-6680e3a6ed51-typha-certs\") pod \"calico-typha-695b6b6f8b-rpgfm\" (UID: \"dd97914c-2af2-46be-8356-6680e3a6ed51\") " pod="calico-system/calico-typha-695b6b6f8b-rpgfm" Sep 11 00:28:47.593101 kubelet[2711]: I0911 00:28:47.593111 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r479n\" (UniqueName: \"kubernetes.io/projected/dd97914c-2af2-46be-8356-6680e3a6ed51-kube-api-access-r479n\") pod \"calico-typha-695b6b6f8b-rpgfm\" (UID: \"dd97914c-2af2-46be-8356-6680e3a6ed51\") " pod="calico-system/calico-typha-695b6b6f8b-rpgfm" Sep 11 00:28:47.884927 containerd[1591]: time="2025-09-11T00:28:47.884880675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-695b6b6f8b-rpgfm,Uid:dd97914c-2af2-46be-8356-6680e3a6ed51,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:47.997708 kubelet[2711]: I0911 00:28:47.996494 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fjb9\" (UniqueName: \"kubernetes.io/projected/04623916-5355-4f25-b522-5e00d998302c-kube-api-access-8fjb9\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998685 kubelet[2711]: I0911 00:28:47.998077 2711 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/04623916-5355-4f25-b522-5e00d998302c-cni-log-dir\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998685 kubelet[2711]: I0911 00:28:47.998110 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04623916-5355-4f25-b522-5e00d998302c-lib-modules\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998685 kubelet[2711]: I0911 00:28:47.998126 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/04623916-5355-4f25-b522-5e00d998302c-policysync\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998685 kubelet[2711]: I0911 00:28:47.998141 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04623916-5355-4f25-b522-5e00d998302c-var-lib-calico\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998685 kubelet[2711]: I0911 00:28:47.998165 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/04623916-5355-4f25-b522-5e00d998302c-flexvol-driver-host\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998855 kubelet[2711]: I0911 00:28:47.998214 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04623916-5355-4f25-b522-5e00d998302c-tigera-ca-bundle\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998855 kubelet[2711]: I0911 00:28:47.998232 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/04623916-5355-4f25-b522-5e00d998302c-var-run-calico\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998855 kubelet[2711]: I0911 00:28:47.998251 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04623916-5355-4f25-b522-5e00d998302c-xtables-lock\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998855 kubelet[2711]: I0911 00:28:47.998271 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/04623916-5355-4f25-b522-5e00d998302c-cni-bin-dir\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998855 kubelet[2711]: I0911 00:28:47.998284 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/04623916-5355-4f25-b522-5e00d998302c-cni-net-dir\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:47.998981 kubelet[2711]: I0911 00:28:47.998298 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/04623916-5355-4f25-b522-5e00d998302c-node-certs\") pod \"calico-node-cffjx\" (UID: \"04623916-5355-4f25-b522-5e00d998302c\") " pod="calico-system/calico-node-cffjx" Sep 11 00:28:48.000577 systemd[1]: Created slice kubepods-besteffort-pod04623916_5355_4f25_b522_5e00d998302c.slice - libcontainer container kubepods-besteffort-pod04623916_5355_4f25_b522_5e00d998302c.slice. Sep 11 00:28:48.004830 containerd[1591]: time="2025-09-11T00:28:48.004790096Z" level=info msg="connecting to shim d0c937d6bfcd606d3fabe5d8ea31b33827703243da362805fdc1e9d922e6d753" address="unix:///run/containerd/s/6e01cc7f031806d570c545e23365d2e9b6dfe0e020be1bde53f3c2af34ae329c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:48.059829 systemd[1]: Started cri-containerd-d0c937d6bfcd606d3fabe5d8ea31b33827703243da362805fdc1e9d922e6d753.scope - libcontainer container d0c937d6bfcd606d3fabe5d8ea31b33827703243da362805fdc1e9d922e6d753. Sep 11 00:28:48.101225 kubelet[2711]: E0911 00:28:48.101051 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.101225 kubelet[2711]: W0911 00:28:48.101078 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.101225 kubelet[2711]: E0911 00:28:48.101123 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.101799 kubelet[2711]: E0911 00:28:48.101731 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.101799 kubelet[2711]: W0911 00:28:48.101744 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.101799 kubelet[2711]: E0911 00:28:48.101754 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.104824 kubelet[2711]: E0911 00:28:48.104796 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.104824 kubelet[2711]: W0911 00:28:48.104810 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.105003 kubelet[2711]: E0911 00:28:48.104925 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.105231 kubelet[2711]: E0911 00:28:48.105219 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.105295 kubelet[2711]: W0911 00:28:48.105284 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.105649 kubelet[2711]: E0911 00:28:48.105427 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.105985 kubelet[2711]: E0911 00:28:48.105973 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.106050 kubelet[2711]: W0911 00:28:48.106039 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.106163 kubelet[2711]: E0911 00:28:48.106140 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.106391 kubelet[2711]: E0911 00:28:48.106379 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.106452 kubelet[2711]: W0911 00:28:48.106442 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.106521 kubelet[2711]: E0911 00:28:48.106508 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.114124 kubelet[2711]: E0911 00:28:48.114091 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.114124 kubelet[2711]: W0911 00:28:48.114115 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.114197 kubelet[2711]: E0911 00:28:48.114138 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.129526 containerd[1591]: time="2025-09-11T00:28:48.129463461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-695b6b6f8b-rpgfm,Uid:dd97914c-2af2-46be-8356-6680e3a6ed51,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0c937d6bfcd606d3fabe5d8ea31b33827703243da362805fdc1e9d922e6d753\"" Sep 11 00:28:48.131027 containerd[1591]: time="2025-09-11T00:28:48.131000503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:28:48.214592 kubelet[2711]: E0911 00:28:48.212438 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jqts" podUID="71c881f4-eafd-455f-8e99-5f408175a910" Sep 11 00:28:48.294293 kubelet[2711]: E0911 00:28:48.294254 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.294484 kubelet[2711]: W0911 00:28:48.294448 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.294484 kubelet[2711]: E0911 00:28:48.294477 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.294720 kubelet[2711]: E0911 00:28:48.294706 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.294720 kubelet[2711]: W0911 00:28:48.294715 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.294803 kubelet[2711]: E0911 00:28:48.294726 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.294978 kubelet[2711]: E0911 00:28:48.294960 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.294978 kubelet[2711]: W0911 00:28:48.294973 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.295114 kubelet[2711]: E0911 00:28:48.294983 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.295288 kubelet[2711]: E0911 00:28:48.295245 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.295288 kubelet[2711]: W0911 00:28:48.295260 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.295288 kubelet[2711]: E0911 00:28:48.295271 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.295589 kubelet[2711]: E0911 00:28:48.295558 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.295617 kubelet[2711]: W0911 00:28:48.295587 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.295644 kubelet[2711]: E0911 00:28:48.295619 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.299879 kubelet[2711]: I0911 00:28:48.299835 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vsp\" (UniqueName: \"kubernetes.io/projected/71c881f4-eafd-455f-8e99-5f408175a910-kube-api-access-42vsp\") pod \"csi-node-driver-6jqts\" (UID: \"71c881f4-eafd-455f-8e99-5f408175a910\") " pod="calico-system/csi-node-driver-6jqts" Sep 11 00:28:48.300088 kubelet[2711]: I0911 00:28:48.300054 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71c881f4-eafd-455f-8e99-5f408175a910-kubelet-dir\") pod \"csi-node-driver-6jqts\" (UID: \"71c881f4-eafd-455f-8e99-5f408175a910\") " pod="calico-system/csi-node-driver-6jqts" Sep 11 00:28:48.300299 kubelet[2711]: I0911 00:28:48.300267 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71c881f4-eafd-455f-8e99-5f408175a910-socket-dir\") pod \"csi-node-driver-6jqts\" (UID: \"71c881f4-eafd-455f-8e99-5f408175a910\") " pod="calico-system/csi-node-driver-6jqts" Sep 11 00:28:48.300525 kubelet[2711]: I0911 00:28:48.300491 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71c881f4-eafd-455f-8e99-5f408175a910-varrun\") pod \"csi-node-driver-6jqts\" (UID: \"71c881f4-eafd-455f-8e99-5f408175a910\") " pod="calico-system/csi-node-driver-6jqts" Sep 11 00:28:48.302107 kubelet[2711]: I0911 00:28:48.302100 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71c881f4-eafd-455f-8e99-5f408175a910-registration-dir\") pod \"csi-node-driver-6jqts\" (UID: \"71c881f4-eafd-455f-8e99-5f408175a910\") " pod="calico-system/csi-node-driver-6jqts" Sep 11 00:28:48.306572 containerd[1591]: time="2025-09-11T00:28:48.306521255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cffjx,Uid:04623916-5355-4f25-b522-5e00d998302c,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:48.332295 containerd[1591]: time="2025-09-11T00:28:48.332241713Z" level=info msg="connecting to shim b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff" address="unix:///run/containerd/s/2ba4642930e79297ca6a334647a72eaa6184ee8a891f40487599c80dfd87d96f" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:28:48.374840 systemd[1]: Started cri-containerd-b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff.scope - libcontainer container b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff.
Sep 11 00:28:48.402720 kubelet[2711]: E0911 00:28:48.402688 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.402720 kubelet[2711]: W0911 00:28:48.402714 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.402874 kubelet[2711]: E0911 00:28:48.402734 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.403016 kubelet[2711]: E0911 00:28:48.402991 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.403016 kubelet[2711]: W0911 00:28:48.403005 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.403066 kubelet[2711]: E0911 00:28:48.403021 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.403252 kubelet[2711]: E0911 00:28:48.403234 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.403252 kubelet[2711]: W0911 00:28:48.403246 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.403314 kubelet[2711]: E0911 00:28:48.403261 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.403734 kubelet[2711]: E0911 00:28:48.403717 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.403734 kubelet[2711]: W0911 00:28:48.403729 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.403823 kubelet[2711]: E0911 00:28:48.403755 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.404244 kubelet[2711]: E0911 00:28:48.404185 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.404244 kubelet[2711]: W0911 00:28:48.404212 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.404512 kubelet[2711]: E0911 00:28:48.404258 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.404512 kubelet[2711]: E0911 00:28:48.404508 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.404592 kubelet[2711]: W0911 00:28:48.404519 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.404592 kubelet[2711]: E0911 00:28:48.404549 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.404801 kubelet[2711]: E0911 00:28:48.404766 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.404801 kubelet[2711]: W0911 00:28:48.404788 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.404870 kubelet[2711]: E0911 00:28:48.404826 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.404973 kubelet[2711]: E0911 00:28:48.404955 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.404973 kubelet[2711]: W0911 00:28:48.404966 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.405037 kubelet[2711]: E0911 00:28:48.404993 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.405142 kubelet[2711]: E0911 00:28:48.405125 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.405142 kubelet[2711]: W0911 00:28:48.405139 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.405197 kubelet[2711]: E0911 00:28:48.405168 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.405335 kubelet[2711]: E0911 00:28:48.405319 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.405335 kubelet[2711]: W0911 00:28:48.405331 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.405383 kubelet[2711]: E0911 00:28:48.405369 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.405610 kubelet[2711]: E0911 00:28:48.405588 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.405610 kubelet[2711]: W0911 00:28:48.405603 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.405763 kubelet[2711]: E0911 00:28:48.405626 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.405935 kubelet[2711]: E0911 00:28:48.405920 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.406003 kubelet[2711]: W0911 00:28:48.405982 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.406078 containerd[1591]: time="2025-09-11T00:28:48.406038245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cffjx,Uid:04623916-5355-4f25-b522-5e00d998302c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff\"" Sep 11 00:28:48.406136 kubelet[2711]: E0911 00:28:48.406081 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:48.406292 kubelet[2711]: E0911 00:28:48.406267 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.406335 kubelet[2711]: W0911 00:28:48.406307 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.406395 kubelet[2711]: E0911 00:28:48.406371 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:48.416693 kubelet[2711]: E0911 00:28:48.416643 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:48.416693 kubelet[2711]: W0911 00:28:48.416678 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:48.416693 kubelet[2711]: E0911 00:28:48.416700 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:49.561519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297611366.mount: Deactivated successfully. Sep 11 00:28:49.897999 kubelet[2711]: E0911 00:28:49.897906 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jqts" podUID="71c881f4-eafd-455f-8e99-5f408175a910" Sep 11 00:28:50.046079 containerd[1591]: time="2025-09-11T00:28:50.046018269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:50.046832 containerd[1591]: time="2025-09-11T00:28:50.046799844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 11 00:28:50.047913 containerd[1591]: time="2025-09-11T00:28:50.047889719Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:50.049573 containerd[1591]: time="2025-09-11T00:28:50.049537558Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:50.050109 containerd[1591]: time="2025-09-11T00:28:50.050071515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.91888963s" Sep 11 00:28:50.050109 containerd[1591]: time="2025-09-11T00:28:50.050105840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 11 00:28:50.051441 containerd[1591]: time="2025-09-11T00:28:50.051390233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:28:50.060217 containerd[1591]: time="2025-09-11T00:28:50.060166638Z" level=info msg="CreateContainer within sandbox \"d0c937d6bfcd606d3fabe5d8ea31b33827703243da362805fdc1e9d922e6d753\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:28:50.068562 containerd[1591]: time="2025-09-11T00:28:50.068505408Z" level=info msg="Container 459767c3a31f241da44372e7c7b284ec8ebd8c7ad3540342d94b7b8b40f974aa: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:50.078417 containerd[1591]: time="2025-09-11T00:28:50.078366160Z" level=info msg="CreateContainer within sandbox \"d0c937d6bfcd606d3fabe5d8ea31b33827703243da362805fdc1e9d922e6d753\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"459767c3a31f241da44372e7c7b284ec8ebd8c7ad3540342d94b7b8b40f974aa\"" Sep 11 00:28:50.079700 containerd[1591]: time="2025-09-11T00:28:50.078800759Z" level=info msg="StartContainer for 
\"459767c3a31f241da44372e7c7b284ec8ebd8c7ad3540342d94b7b8b40f974aa\"" Sep 11 00:28:50.080266 containerd[1591]: time="2025-09-11T00:28:50.080230627Z" level=info msg="connecting to shim 459767c3a31f241da44372e7c7b284ec8ebd8c7ad3540342d94b7b8b40f974aa" address="unix:///run/containerd/s/6e01cc7f031806d570c545e23365d2e9b6dfe0e020be1bde53f3c2af34ae329c" protocol=ttrpc version=3 Sep 11 00:28:50.108875 systemd[1]: Started cri-containerd-459767c3a31f241da44372e7c7b284ec8ebd8c7ad3540342d94b7b8b40f974aa.scope - libcontainer container 459767c3a31f241da44372e7c7b284ec8ebd8c7ad3540342d94b7b8b40f974aa. Sep 11 00:28:50.165700 containerd[1591]: time="2025-09-11T00:28:50.165209258Z" level=info msg="StartContainer for \"459767c3a31f241da44372e7c7b284ec8ebd8c7ad3540342d94b7b8b40f974aa\" returns successfully" Sep 11 00:28:50.964172 kubelet[2711]: I0911 00:28:50.964090 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-695b6b6f8b-rpgfm" podStartSLOduration=2.043594169 podStartE2EDuration="3.964071875s" podCreationTimestamp="2025-09-11 00:28:47 +0000 UTC" firstStartedPulling="2025-09-11 00:28:48.130780909 +0000 UTC m=+17.317131793" lastFinishedPulling="2025-09-11 00:28:50.051258615 +0000 UTC m=+19.237609499" observedRunningTime="2025-09-11 00:28:50.963730873 +0000 UTC m=+20.150081787" watchObservedRunningTime="2025-09-11 00:28:50.964071875 +0000 UTC m=+20.150422769" Sep 11 00:28:51.013641 kubelet[2711]: E0911 00:28:51.013599 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:51.013641 kubelet[2711]: W0911 00:28:51.013625 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:51.013641 kubelet[2711]: E0911 00:28:51.013649 2711 plugins.go:695] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:51.013926 kubelet[2711]: E0911 00:28:51.013904 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:51.013926 kubelet[2711]: W0911 00:28:51.013919 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:51.014039 kubelet[2711]: E0911 00:28:51.013931 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:28:51.014156 kubelet[2711]: E0911 00:28:51.014131 2711 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:28:51.014156 kubelet[2711]: W0911 00:28:51.014144 2711 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:28:51.014156 kubelet[2711]: E0911 00:28:51.014156 2711 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:28:51.749576 containerd[1591]: time="2025-09-11T00:28:51.749529315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:51.750242 containerd[1591]: time="2025-09-11T00:28:51.750208856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 11 00:28:51.751241 containerd[1591]: time="2025-09-11T00:28:51.751210455Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:51.753183 containerd[1591]: time="2025-09-11T00:28:51.753152748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:51.753728 containerd[1591]: time="2025-09-11T00:28:51.753678068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.702229765s" Sep 11 00:28:51.753759 containerd[1591]: time="2025-09-11T00:28:51.753720428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:28:51.755552 containerd[1591]: time="2025-09-11T00:28:51.755528156Z" level=info msg="CreateContainer within sandbox \"b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:28:51.764454 containerd[1591]: time="2025-09-11T00:28:51.764417910Z" level=info msg="Container 3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:28:51.773041 containerd[1591]: time="2025-09-11T00:28:51.772996556Z" level=info msg="CreateContainer within sandbox \"b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f\"" Sep 11 00:28:51.773494 containerd[1591]: time="2025-09-11T00:28:51.773467785Z" level=info msg="StartContainer for \"3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f\"" Sep 11 00:28:51.774779 containerd[1591]: time="2025-09-11T00:28:51.774752346Z" level=info msg="connecting to shim 3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f" address="unix:///run/containerd/s/2ba4642930e79297ca6a334647a72eaa6184ee8a891f40487599c80dfd87d96f" protocol=ttrpc version=3 Sep 11 00:28:51.805810 systemd[1]: Started cri-containerd-3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f.scope - libcontainer container 3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f. Sep 11 00:28:51.854579 containerd[1591]: time="2025-09-11T00:28:51.854491668Z" level=info msg="StartContainer for \"3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f\" returns successfully" Sep 11 00:28:51.862576 systemd[1]: cri-containerd-3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f.scope: Deactivated successfully. 
Sep 11 00:28:51.865956 containerd[1591]: time="2025-09-11T00:28:51.865915840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f\" id:\"3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f\" pid:3414 exited_at:{seconds:1757550531 nanos:865319104}" Sep 11 00:28:51.866010 containerd[1591]: time="2025-09-11T00:28:51.865971034Z" level=info msg="received exit event container_id:\"3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f\" id:\"3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f\" pid:3414 exited_at:{seconds:1757550531 nanos:865319104}" Sep 11 00:28:51.889895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d55ffb78cd23611b9e2ec441fef017eac80d58e30ba6a6b264db69fb1c7589f-rootfs.mount: Deactivated successfully. Sep 11 00:28:51.898307 kubelet[2711]: E0911 00:28:51.898264 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jqts" podUID="71c881f4-eafd-455f-8e99-5f408175a910" Sep 11 00:28:51.958279 kubelet[2711]: I0911 00:28:51.958228 2711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:28:52.965138 containerd[1591]: time="2025-09-11T00:28:52.965064665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:28:53.898088 kubelet[2711]: E0911 00:28:53.898016 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jqts" podUID="71c881f4-eafd-455f-8e99-5f408175a910" Sep 11 00:28:55.625126 containerd[1591]: time="2025-09-11T00:28:55.625064798Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:55.625795 containerd[1591]: time="2025-09-11T00:28:55.625763494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 11 00:28:55.626842 containerd[1591]: time="2025-09-11T00:28:55.626816146Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:55.628998 containerd[1591]: time="2025-09-11T00:28:55.628963290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:28:55.629607 containerd[1591]: time="2025-09-11T00:28:55.629567017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.664458811s" Sep 11 00:28:55.629653 containerd[1591]: time="2025-09-11T00:28:55.629608685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 11 00:28:55.631970 containerd[1591]: time="2025-09-11T00:28:55.631938734Z" level=info msg="CreateContainer within sandbox \"b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:28:55.641389 containerd[1591]: time="2025-09-11T00:28:55.641340038Z" level=info msg="Container b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d: CDI devices from CRI 
Config.CDIDevices: []" Sep 11 00:28:55.651184 containerd[1591]: time="2025-09-11T00:28:55.651148249Z" level=info msg="CreateContainer within sandbox \"b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d\"" Sep 11 00:28:55.652232 containerd[1591]: time="2025-09-11T00:28:55.651584712Z" level=info msg="StartContainer for \"b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d\"" Sep 11 00:28:55.652919 containerd[1591]: time="2025-09-11T00:28:55.652888687Z" level=info msg="connecting to shim b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d" address="unix:///run/containerd/s/2ba4642930e79297ca6a334647a72eaa6184ee8a891f40487599c80dfd87d96f" protocol=ttrpc version=3 Sep 11 00:28:55.677855 systemd[1]: Started cri-containerd-b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d.scope - libcontainer container b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d. Sep 11 00:28:55.722896 containerd[1591]: time="2025-09-11T00:28:55.722849399Z" level=info msg="StartContainer for \"b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d\" returns successfully" Sep 11 00:28:55.898138 kubelet[2711]: E0911 00:28:55.898055 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jqts" podUID="71c881f4-eafd-455f-8e99-5f408175a910" Sep 11 00:28:56.816356 systemd[1]: cri-containerd-b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d.scope: Deactivated successfully. 
Sep 11 00:28:56.816711 systemd[1]: cri-containerd-b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d.scope: Consumed 619ms CPU time, 176.5M memory peak, 4.6M read from disk, 171.3M written to disk. Sep 11 00:28:56.819351 containerd[1591]: time="2025-09-11T00:28:56.819309787Z" level=info msg="received exit event container_id:\"b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d\" id:\"b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d\" pid:3475 exited_at:{seconds:1757550536 nanos:819088260}" Sep 11 00:28:56.820870 containerd[1591]: time="2025-09-11T00:28:56.819424683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d\" id:\"b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d\" pid:3475 exited_at:{seconds:1757550536 nanos:819088260}" Sep 11 00:28:56.844207 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6565a9d533fad175f47a7af64c6f80eaaee3e243e35451de0a525b15522d63d-rootfs.mount: Deactivated successfully. Sep 11 00:28:56.863264 kubelet[2711]: I0911 00:28:56.863223 2711 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:28:57.053527 systemd[1]: Created slice kubepods-burstable-pode3ee72d0_c689_456b_a71d_0ec4de2705b5.slice - libcontainer container kubepods-burstable-pode3ee72d0_c689_456b_a71d_0ec4de2705b5.slice. 
Sep 11 00:28:57.103616 kubelet[2711]: I0911 00:28:57.103465 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a5119758-2a9a-4f64-84be-5f6700dc2598-goldmane-key-pair\") pod \"goldmane-54d579b49d-gpd4x\" (UID: \"a5119758-2a9a-4f64-84be-5f6700dc2598\") " pod="calico-system/goldmane-54d579b49d-gpd4x" Sep 11 00:28:57.103616 kubelet[2711]: I0911 00:28:57.103508 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx7z7\" (UniqueName: \"kubernetes.io/projected/a5119758-2a9a-4f64-84be-5f6700dc2598-kube-api-access-fx7z7\") pod \"goldmane-54d579b49d-gpd4x\" (UID: \"a5119758-2a9a-4f64-84be-5f6700dc2598\") " pod="calico-system/goldmane-54d579b49d-gpd4x" Sep 11 00:28:57.103616 kubelet[2711]: I0911 00:28:57.103523 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfzbh\" (UniqueName: \"kubernetes.io/projected/6f56fe66-3e02-4c5d-b9e7-77afeda29f26-kube-api-access-bfzbh\") pod \"coredns-668d6bf9bc-zxwqr\" (UID: \"6f56fe66-3e02-4c5d-b9e7-77afeda29f26\") " pod="kube-system/coredns-668d6bf9bc-zxwqr" Sep 11 00:28:57.103616 kubelet[2711]: I0911 00:28:57.103539 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e829cab8-e382-4993-903a-ff19cd566794-tigera-ca-bundle\") pod \"calico-kube-controllers-686957df88-qdb9n\" (UID: \"e829cab8-e382-4993-903a-ff19cd566794\") " pod="calico-system/calico-kube-controllers-686957df88-qdb9n" Sep 11 00:28:57.103616 kubelet[2711]: I0911 00:28:57.103553 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq6zx\" (UniqueName: \"kubernetes.io/projected/d95b654e-0a4d-4034-b126-c19de88e04f1-kube-api-access-xq6zx\") pod \"whisker-84659c94b5-mtbrh\" (UID: 
\"d95b654e-0a4d-4034-b126-c19de88e04f1\") " pod="calico-system/whisker-84659c94b5-mtbrh" Sep 11 00:28:57.106699 kubelet[2711]: I0911 00:28:57.103569 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f56fe66-3e02-4c5d-b9e7-77afeda29f26-config-volume\") pod \"coredns-668d6bf9bc-zxwqr\" (UID: \"6f56fe66-3e02-4c5d-b9e7-77afeda29f26\") " pod="kube-system/coredns-668d6bf9bc-zxwqr" Sep 11 00:28:57.106699 kubelet[2711]: I0911 00:28:57.103588 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5119758-2a9a-4f64-84be-5f6700dc2598-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-gpd4x\" (UID: \"a5119758-2a9a-4f64-84be-5f6700dc2598\") " pod="calico-system/goldmane-54d579b49d-gpd4x" Sep 11 00:28:57.106699 kubelet[2711]: I0911 00:28:57.103604 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3ee72d0-c689-456b-a71d-0ec4de2705b5-config-volume\") pod \"coredns-668d6bf9bc-lw4ks\" (UID: \"e3ee72d0-c689-456b-a71d-0ec4de2705b5\") " pod="kube-system/coredns-668d6bf9bc-lw4ks" Sep 11 00:28:57.106699 kubelet[2711]: I0911 00:28:57.103617 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d95b654e-0a4d-4034-b126-c19de88e04f1-whisker-ca-bundle\") pod \"whisker-84659c94b5-mtbrh\" (UID: \"d95b654e-0a4d-4034-b126-c19de88e04f1\") " pod="calico-system/whisker-84659c94b5-mtbrh" Sep 11 00:28:57.106699 kubelet[2711]: I0911 00:28:57.103630 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmv7m\" (UniqueName: \"kubernetes.io/projected/e3ee72d0-c689-456b-a71d-0ec4de2705b5-kube-api-access-cmv7m\") pod 
\"coredns-668d6bf9bc-lw4ks\" (UID: \"e3ee72d0-c689-456b-a71d-0ec4de2705b5\") " pod="kube-system/coredns-668d6bf9bc-lw4ks" Sep 11 00:28:57.106910 kubelet[2711]: I0911 00:28:57.103654 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5119758-2a9a-4f64-84be-5f6700dc2598-config\") pod \"goldmane-54d579b49d-gpd4x\" (UID: \"a5119758-2a9a-4f64-84be-5f6700dc2598\") " pod="calico-system/goldmane-54d579b49d-gpd4x" Sep 11 00:28:57.106910 kubelet[2711]: I0911 00:28:57.105730 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5t9v\" (UniqueName: \"kubernetes.io/projected/e829cab8-e382-4993-903a-ff19cd566794-kube-api-access-r5t9v\") pod \"calico-kube-controllers-686957df88-qdb9n\" (UID: \"e829cab8-e382-4993-903a-ff19cd566794\") " pod="calico-system/calico-kube-controllers-686957df88-qdb9n" Sep 11 00:28:57.106910 kubelet[2711]: I0911 00:28:57.105776 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d95b654e-0a4d-4034-b126-c19de88e04f1-whisker-backend-key-pair\") pod \"whisker-84659c94b5-mtbrh\" (UID: \"d95b654e-0a4d-4034-b126-c19de88e04f1\") " pod="calico-system/whisker-84659c94b5-mtbrh" Sep 11 00:28:57.112041 systemd[1]: Created slice kubepods-besteffort-poda5119758_2a9a_4f64_84be_5f6700dc2598.slice - libcontainer container kubepods-besteffort-poda5119758_2a9a_4f64_84be_5f6700dc2598.slice. Sep 11 00:28:57.120298 systemd[1]: Created slice kubepods-burstable-pod6f56fe66_3e02_4c5d_b9e7_77afeda29f26.slice - libcontainer container kubepods-burstable-pod6f56fe66_3e02_4c5d_b9e7_77afeda29f26.slice. Sep 11 00:28:57.130097 systemd[1]: Created slice kubepods-besteffort-podd95b654e_0a4d_4034_b126_c19de88e04f1.slice - libcontainer container kubepods-besteffort-podd95b654e_0a4d_4034_b126_c19de88e04f1.slice. 
Sep 11 00:28:57.131728 containerd[1591]: time="2025-09-11T00:28:57.131696891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 00:28:57.139508 systemd[1]: Created slice kubepods-besteffort-pode829cab8_e382_4993_903a_ff19cd566794.slice - libcontainer container kubepods-besteffort-pode829cab8_e382_4993_903a_ff19cd566794.slice. Sep 11 00:28:57.145949 systemd[1]: Created slice kubepods-besteffort-pod4d9eab99_910e_4c64_99d1_bf4648e7e6e1.slice - libcontainer container kubepods-besteffort-pod4d9eab99_910e_4c64_99d1_bf4648e7e6e1.slice. Sep 11 00:28:57.151852 systemd[1]: Created slice kubepods-besteffort-podb3431f32_9912_44b2_854e_49665a06965b.slice - libcontainer container kubepods-besteffort-podb3431f32_9912_44b2_854e_49665a06965b.slice. Sep 11 00:28:57.207612 kubelet[2711]: I0911 00:28:57.206504 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxcq\" (UniqueName: \"kubernetes.io/projected/4d9eab99-910e-4c64-99d1-bf4648e7e6e1-kube-api-access-vnxcq\") pod \"calico-apiserver-5d877dc4f-2qzk9\" (UID: \"4d9eab99-910e-4c64-99d1-bf4648e7e6e1\") " pod="calico-apiserver/calico-apiserver-5d877dc4f-2qzk9" Sep 11 00:28:57.207612 kubelet[2711]: I0911 00:28:57.206585 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dx4z\" (UniqueName: \"kubernetes.io/projected/b3431f32-9912-44b2-854e-49665a06965b-kube-api-access-8dx4z\") pod \"calico-apiserver-5d877dc4f-8fmjk\" (UID: \"b3431f32-9912-44b2-854e-49665a06965b\") " pod="calico-apiserver/calico-apiserver-5d877dc4f-8fmjk" Sep 11 00:28:57.207612 kubelet[2711]: I0911 00:28:57.206601 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d9eab99-910e-4c64-99d1-bf4648e7e6e1-calico-apiserver-certs\") pod \"calico-apiserver-5d877dc4f-2qzk9\" (UID: 
\"4d9eab99-910e-4c64-99d1-bf4648e7e6e1\") " pod="calico-apiserver/calico-apiserver-5d877dc4f-2qzk9" Sep 11 00:28:57.207612 kubelet[2711]: I0911 00:28:57.206765 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b3431f32-9912-44b2-854e-49665a06965b-calico-apiserver-certs\") pod \"calico-apiserver-5d877dc4f-8fmjk\" (UID: \"b3431f32-9912-44b2-854e-49665a06965b\") " pod="calico-apiserver/calico-apiserver-5d877dc4f-8fmjk" Sep 11 00:28:57.357411 containerd[1591]: time="2025-09-11T00:28:57.357290070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lw4ks,Uid:e3ee72d0-c689-456b-a71d-0ec4de2705b5,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:57.418928 containerd[1591]: time="2025-09-11T00:28:57.418870779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gpd4x,Uid:a5119758-2a9a-4f64-84be-5f6700dc2598,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:57.426497 containerd[1591]: time="2025-09-11T00:28:57.426442751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxwqr,Uid:6f56fe66-3e02-4c5d-b9e7-77afeda29f26,Namespace:kube-system,Attempt:0,}" Sep 11 00:28:57.440562 containerd[1591]: time="2025-09-11T00:28:57.440108213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84659c94b5-mtbrh,Uid:d95b654e-0a4d-4034-b126-c19de88e04f1,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:57.444241 containerd[1591]: time="2025-09-11T00:28:57.444205726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686957df88-qdb9n,Uid:e829cab8-e382-4993-903a-ff19cd566794,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:57.450136 containerd[1591]: time="2025-09-11T00:28:57.450093809Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5d877dc4f-2qzk9,Uid:4d9eab99-910e-4c64-99d1-bf4648e7e6e1,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:28:57.456210 containerd[1591]: time="2025-09-11T00:28:57.456169014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d877dc4f-8fmjk,Uid:b3431f32-9912-44b2-854e-49665a06965b,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:28:57.524260 containerd[1591]: time="2025-09-11T00:28:57.524206997Z" level=error msg="Failed to destroy network for sandbox \"4ca95cf09362ba18751b96ca794d2f6e58e87dabd2659bccb4c537e0f45f9cbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.527463 containerd[1591]: time="2025-09-11T00:28:57.527308315Z" level=error msg="Failed to destroy network for sandbox \"9af374d556903433c1944864ea3d7ba804e1e625049d7cef23cba9bb413b2e89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.529104 containerd[1591]: time="2025-09-11T00:28:57.528915189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lw4ks,Uid:e3ee72d0-c689-456b-a71d-0ec4de2705b5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ca95cf09362ba18751b96ca794d2f6e58e87dabd2659bccb4c537e0f45f9cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.536038 containerd[1591]: time="2025-09-11T00:28:57.534068489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84659c94b5-mtbrh,Uid:d95b654e-0a4d-4034-b126-c19de88e04f1,Namespace:calico-system,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af374d556903433c1944864ea3d7ba804e1e625049d7cef23cba9bb413b2e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.543816 containerd[1591]: time="2025-09-11T00:28:57.543758648Z" level=error msg="Failed to destroy network for sandbox \"3397f5ed0509ea85eee85191aec13780b0a51ed29ae43e3c54f909d8ec3a44db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.552637 containerd[1591]: time="2025-09-11T00:28:57.552524377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxwqr,Uid:6f56fe66-3e02-4c5d-b9e7-77afeda29f26,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3397f5ed0509ea85eee85191aec13780b0a51ed29ae43e3c54f909d8ec3a44db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.558425 kubelet[2711]: E0911 00:28:57.557975 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ca95cf09362ba18751b96ca794d2f6e58e87dabd2659bccb4c537e0f45f9cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.558425 kubelet[2711]: E0911 00:28:57.558023 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9af374d556903433c1944864ea3d7ba804e1e625049d7cef23cba9bb413b2e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.558425 kubelet[2711]: E0911 00:28:57.558070 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ca95cf09362ba18751b96ca794d2f6e58e87dabd2659bccb4c537e0f45f9cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lw4ks" Sep 11 00:28:57.558425 kubelet[2711]: E0911 00:28:57.558097 2711 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ca95cf09362ba18751b96ca794d2f6e58e87dabd2659bccb4c537e0f45f9cbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lw4ks" Sep 11 00:28:57.558925 containerd[1591]: time="2025-09-11T00:28:57.558314154Z" level=error msg="Failed to destroy network for sandbox \"ead3458d59feac1643d54fc23532a2f99ff8494132f412676c30dab0bf282952\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.558972 kubelet[2711]: E0911 00:28:57.558113 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af374d556903433c1944864ea3d7ba804e1e625049d7cef23cba9bb413b2e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84659c94b5-mtbrh" Sep 11 00:28:57.558972 kubelet[2711]: E0911 00:28:57.558135 2711 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9af374d556903433c1944864ea3d7ba804e1e625049d7cef23cba9bb413b2e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84659c94b5-mtbrh" Sep 11 00:28:57.558972 kubelet[2711]: E0911 00:28:57.558145 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lw4ks_kube-system(e3ee72d0-c689-456b-a71d-0ec4de2705b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lw4ks_kube-system(e3ee72d0-c689-456b-a71d-0ec4de2705b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ca95cf09362ba18751b96ca794d2f6e58e87dabd2659bccb4c537e0f45f9cbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lw4ks" podUID="e3ee72d0-c689-456b-a71d-0ec4de2705b5" Sep 11 00:28:57.559085 kubelet[2711]: E0911 00:28:57.558185 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84659c94b5-mtbrh_calico-system(d95b654e-0a4d-4034-b126-c19de88e04f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84659c94b5-mtbrh_calico-system(d95b654e-0a4d-4034-b126-c19de88e04f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9af374d556903433c1944864ea3d7ba804e1e625049d7cef23cba9bb413b2e89\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84659c94b5-mtbrh" podUID="d95b654e-0a4d-4034-b126-c19de88e04f1" Sep 11 00:28:57.559085 kubelet[2711]: E0911 00:28:57.558444 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3397f5ed0509ea85eee85191aec13780b0a51ed29ae43e3c54f909d8ec3a44db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.559085 kubelet[2711]: E0911 00:28:57.558477 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3397f5ed0509ea85eee85191aec13780b0a51ed29ae43e3c54f909d8ec3a44db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zxwqr" Sep 11 00:28:57.559224 kubelet[2711]: E0911 00:28:57.558492 2711 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3397f5ed0509ea85eee85191aec13780b0a51ed29ae43e3c54f909d8ec3a44db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zxwqr" Sep 11 00:28:57.559224 kubelet[2711]: E0911 00:28:57.558521 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zxwqr_kube-system(6f56fe66-3e02-4c5d-b9e7-77afeda29f26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-zxwqr_kube-system(6f56fe66-3e02-4c5d-b9e7-77afeda29f26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3397f5ed0509ea85eee85191aec13780b0a51ed29ae43e3c54f909d8ec3a44db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zxwqr" podUID="6f56fe66-3e02-4c5d-b9e7-77afeda29f26" Sep 11 00:28:57.563813 containerd[1591]: time="2025-09-11T00:28:57.563540572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gpd4x,Uid:a5119758-2a9a-4f64-84be-5f6700dc2598,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead3458d59feac1643d54fc23532a2f99ff8494132f412676c30dab0bf282952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.564083 kubelet[2711]: E0911 00:28:57.564028 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead3458d59feac1643d54fc23532a2f99ff8494132f412676c30dab0bf282952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.564149 kubelet[2711]: E0911 00:28:57.564098 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead3458d59feac1643d54fc23532a2f99ff8494132f412676c30dab0bf282952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-54d579b49d-gpd4x" Sep 11 00:28:57.564149 kubelet[2711]: E0911 00:28:57.564119 2711 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead3458d59feac1643d54fc23532a2f99ff8494132f412676c30dab0bf282952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-gpd4x" Sep 11 00:28:57.564217 kubelet[2711]: E0911 00:28:57.564185 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-gpd4x_calico-system(a5119758-2a9a-4f64-84be-5f6700dc2598)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-gpd4x_calico-system(a5119758-2a9a-4f64-84be-5f6700dc2598)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ead3458d59feac1643d54fc23532a2f99ff8494132f412676c30dab0bf282952\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-gpd4x" podUID="a5119758-2a9a-4f64-84be-5f6700dc2598" Sep 11 00:28:57.566562 containerd[1591]: time="2025-09-11T00:28:57.566518958Z" level=error msg="Failed to destroy network for sandbox \"445f0317d127875d8ef6fbdb0b8f63bb0b31b1a278308c9f0db9df9fee59bf96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.568288 containerd[1591]: time="2025-09-11T00:28:57.568219499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686957df88-qdb9n,Uid:e829cab8-e382-4993-903a-ff19cd566794,Namespace:calico-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"445f0317d127875d8ef6fbdb0b8f63bb0b31b1a278308c9f0db9df9fee59bf96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.568537 kubelet[2711]: E0911 00:28:57.568490 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445f0317d127875d8ef6fbdb0b8f63bb0b31b1a278308c9f0db9df9fee59bf96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.568604 kubelet[2711]: E0911 00:28:57.568556 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445f0317d127875d8ef6fbdb0b8f63bb0b31b1a278308c9f0db9df9fee59bf96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-686957df88-qdb9n" Sep 11 00:28:57.568604 kubelet[2711]: E0911 00:28:57.568577 2711 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445f0317d127875d8ef6fbdb0b8f63bb0b31b1a278308c9f0db9df9fee59bf96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-686957df88-qdb9n" Sep 11 00:28:57.568786 kubelet[2711]: E0911 00:28:57.568714 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-686957df88-qdb9n_calico-system(e829cab8-e382-4993-903a-ff19cd566794)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-686957df88-qdb9n_calico-system(e829cab8-e382-4993-903a-ff19cd566794)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"445f0317d127875d8ef6fbdb0b8f63bb0b31b1a278308c9f0db9df9fee59bf96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-686957df88-qdb9n" podUID="e829cab8-e382-4993-903a-ff19cd566794" Sep 11 00:28:57.578789 containerd[1591]: time="2025-09-11T00:28:57.578735342Z" level=error msg="Failed to destroy network for sandbox \"77398c6be75abfb333a347c6f17082a858eff818bdbfe2b7a9a76f8180361402\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.580061 containerd[1591]: time="2025-09-11T00:28:57.580008880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d877dc4f-2qzk9,Uid:4d9eab99-910e-4c64-99d1-bf4648e7e6e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77398c6be75abfb333a347c6f17082a858eff818bdbfe2b7a9a76f8180361402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.580256 kubelet[2711]: E0911 00:28:57.580212 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77398c6be75abfb333a347c6f17082a858eff818bdbfe2b7a9a76f8180361402\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.580309 kubelet[2711]: E0911 00:28:57.580271 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77398c6be75abfb333a347c6f17082a858eff818bdbfe2b7a9a76f8180361402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d877dc4f-2qzk9" Sep 11 00:28:57.580309 kubelet[2711]: E0911 00:28:57.580294 2711 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77398c6be75abfb333a347c6f17082a858eff818bdbfe2b7a9a76f8180361402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d877dc4f-2qzk9" Sep 11 00:28:57.580366 kubelet[2711]: E0911 00:28:57.580337 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d877dc4f-2qzk9_calico-apiserver(4d9eab99-910e-4c64-99d1-bf4648e7e6e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d877dc4f-2qzk9_calico-apiserver(4d9eab99-910e-4c64-99d1-bf4648e7e6e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77398c6be75abfb333a347c6f17082a858eff818bdbfe2b7a9a76f8180361402\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d877dc4f-2qzk9" podUID="4d9eab99-910e-4c64-99d1-bf4648e7e6e1" Sep 11 
00:28:57.589995 containerd[1591]: time="2025-09-11T00:28:57.589941655Z" level=error msg="Failed to destroy network for sandbox \"fa1961500036664712505f74c335c04a2b13db79e365b456d681dc4d4f500d0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.591280 containerd[1591]: time="2025-09-11T00:28:57.591218338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d877dc4f-8fmjk,Uid:b3431f32-9912-44b2-854e-49665a06965b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa1961500036664712505f74c335c04a2b13db79e365b456d681dc4d4f500d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.591532 kubelet[2711]: E0911 00:28:57.591485 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa1961500036664712505f74c335c04a2b13db79e365b456d681dc4d4f500d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.591612 kubelet[2711]: E0911 00:28:57.591557 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa1961500036664712505f74c335c04a2b13db79e365b456d681dc4d4f500d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d877dc4f-8fmjk" Sep 11 00:28:57.591612 kubelet[2711]: E0911 00:28:57.591579 2711 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa1961500036664712505f74c335c04a2b13db79e365b456d681dc4d4f500d0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d877dc4f-8fmjk" Sep 11 00:28:57.591737 kubelet[2711]: E0911 00:28:57.591617 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d877dc4f-8fmjk_calico-apiserver(b3431f32-9912-44b2-854e-49665a06965b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d877dc4f-8fmjk_calico-apiserver(b3431f32-9912-44b2-854e-49665a06965b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa1961500036664712505f74c335c04a2b13db79e365b456d681dc4d4f500d0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d877dc4f-8fmjk" podUID="b3431f32-9912-44b2-854e-49665a06965b" Sep 11 00:28:57.903830 systemd[1]: Created slice kubepods-besteffort-pod71c881f4_eafd_455f_8e99_5f408175a910.slice - libcontainer container kubepods-besteffort-pod71c881f4_eafd_455f_8e99_5f408175a910.slice. 
Sep 11 00:28:57.906297 containerd[1591]: time="2025-09-11T00:28:57.906263467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jqts,Uid:71c881f4-eafd-455f-8e99-5f408175a910,Namespace:calico-system,Attempt:0,}" Sep 11 00:28:57.962456 containerd[1591]: time="2025-09-11T00:28:57.962395721Z" level=error msg="Failed to destroy network for sandbox \"1e6a24eaec2d25e0f11374f6a2138ffb059341799ac3a0bc4e19eac3b546b15f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.964120 containerd[1591]: time="2025-09-11T00:28:57.964070975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jqts,Uid:71c881f4-eafd-455f-8e99-5f408175a910,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e6a24eaec2d25e0f11374f6a2138ffb059341799ac3a0bc4e19eac3b546b15f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.964472 kubelet[2711]: E0911 00:28:57.964431 2711 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e6a24eaec2d25e0f11374f6a2138ffb059341799ac3a0bc4e19eac3b546b15f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:28:57.964544 kubelet[2711]: E0911 00:28:57.964489 2711 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e6a24eaec2d25e0f11374f6a2138ffb059341799ac3a0bc4e19eac3b546b15f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6jqts" Sep 11 00:28:57.964583 kubelet[2711]: E0911 00:28:57.964567 2711 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e6a24eaec2d25e0f11374f6a2138ffb059341799ac3a0bc4e19eac3b546b15f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6jqts" Sep 11 00:28:57.964748 kubelet[2711]: E0911 00:28:57.964651 2711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6jqts_calico-system(71c881f4-eafd-455f-8e99-5f408175a910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6jqts_calico-system(71c881f4-eafd-455f-8e99-5f408175a910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e6a24eaec2d25e0f11374f6a2138ffb059341799ac3a0bc4e19eac3b546b15f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6jqts" podUID="71c881f4-eafd-455f-8e99-5f408175a910" Sep 11 00:28:57.964999 systemd[1]: run-netns-cni\x2d124d8bfa\x2da732\x2d1418\x2d733c\x2d2eaf0e8501df.mount: Deactivated successfully. Sep 11 00:29:03.482742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1945984100.mount: Deactivated successfully. 
Sep 11 00:29:04.453250 containerd[1591]: time="2025-09-11T00:29:04.453168374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:04.454228 containerd[1591]: time="2025-09-11T00:29:04.454194212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:29:04.455679 containerd[1591]: time="2025-09-11T00:29:04.455626424Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:04.458076 containerd[1591]: time="2025-09-11T00:29:04.458028188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:04.458557 containerd[1591]: time="2025-09-11T00:29:04.458518470Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.326663882s" Sep 11 00:29:04.458600 containerd[1591]: time="2025-09-11T00:29:04.458555379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:29:04.472704 containerd[1591]: time="2025-09-11T00:29:04.470537942Z" level=info msg="CreateContainer within sandbox \"b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:29:04.494314 containerd[1591]: time="2025-09-11T00:29:04.494263827Z" level=info msg="Container 
eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:04.506420 containerd[1591]: time="2025-09-11T00:29:04.506367687Z" level=info msg="CreateContainer within sandbox \"b42919e0664de92b6871c4b72c89a4619f45feee9786727a2e17040175d93eff\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0\"" Sep 11 00:29:04.507070 containerd[1591]: time="2025-09-11T00:29:04.507014312Z" level=info msg="StartContainer for \"eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0\"" Sep 11 00:29:04.508489 containerd[1591]: time="2025-09-11T00:29:04.508434222Z" level=info msg="connecting to shim eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0" address="unix:///run/containerd/s/2ba4642930e79297ca6a334647a72eaa6184ee8a891f40487599c80dfd87d96f" protocol=ttrpc version=3 Sep 11 00:29:04.552825 systemd[1]: Started cri-containerd-eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0.scope - libcontainer container eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0. Sep 11 00:29:04.604480 containerd[1591]: time="2025-09-11T00:29:04.604428383Z" level=info msg="StartContainer for \"eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0\" returns successfully" Sep 11 00:29:04.686699 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:29:04.686870 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 11 00:29:04.860614 kubelet[2711]: I0911 00:29:04.860434 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d95b654e-0a4d-4034-b126-c19de88e04f1-whisker-backend-key-pair\") pod \"d95b654e-0a4d-4034-b126-c19de88e04f1\" (UID: \"d95b654e-0a4d-4034-b126-c19de88e04f1\") " Sep 11 00:29:04.860614 kubelet[2711]: I0911 00:29:04.860508 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d95b654e-0a4d-4034-b126-c19de88e04f1-whisker-ca-bundle\") pod \"d95b654e-0a4d-4034-b126-c19de88e04f1\" (UID: \"d95b654e-0a4d-4034-b126-c19de88e04f1\") " Sep 11 00:29:04.860614 kubelet[2711]: I0911 00:29:04.860541 2711 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq6zx\" (UniqueName: \"kubernetes.io/projected/d95b654e-0a4d-4034-b126-c19de88e04f1-kube-api-access-xq6zx\") pod \"d95b654e-0a4d-4034-b126-c19de88e04f1\" (UID: \"d95b654e-0a4d-4034-b126-c19de88e04f1\") " Sep 11 00:29:04.861841 kubelet[2711]: I0911 00:29:04.861372 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d95b654e-0a4d-4034-b126-c19de88e04f1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d95b654e-0a4d-4034-b126-c19de88e04f1" (UID: "d95b654e-0a4d-4034-b126-c19de88e04f1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:29:04.867285 kubelet[2711]: I0911 00:29:04.865445 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d95b654e-0a4d-4034-b126-c19de88e04f1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d95b654e-0a4d-4034-b126-c19de88e04f1" (UID: "d95b654e-0a4d-4034-b126-c19de88e04f1"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:29:04.866766 systemd[1]: var-lib-kubelet-pods-d95b654e\x2d0a4d\x2d4034\x2db126\x2dc19de88e04f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxq6zx.mount: Deactivated successfully. Sep 11 00:29:04.866880 systemd[1]: var-lib-kubelet-pods-d95b654e\x2d0a4d\x2d4034\x2db126\x2dc19de88e04f1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 00:29:04.869786 kubelet[2711]: I0911 00:29:04.869718 2711 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95b654e-0a4d-4034-b126-c19de88e04f1-kube-api-access-xq6zx" (OuterVolumeSpecName: "kube-api-access-xq6zx") pod "d95b654e-0a4d-4034-b126-c19de88e04f1" (UID: "d95b654e-0a4d-4034-b126-c19de88e04f1"). InnerVolumeSpecName "kube-api-access-xq6zx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:29:04.908775 systemd[1]: Removed slice kubepods-besteffort-podd95b654e_0a4d_4034_b126_c19de88e04f1.slice - libcontainer container kubepods-besteffort-podd95b654e_0a4d_4034_b126_c19de88e04f1.slice. 
Sep 11 00:29:04.961365 kubelet[2711]: I0911 00:29:04.961077 2711 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xq6zx\" (UniqueName: \"kubernetes.io/projected/d95b654e-0a4d-4034-b126-c19de88e04f1-kube-api-access-xq6zx\") on node \"localhost\" DevicePath \"\"" Sep 11 00:29:04.961365 kubelet[2711]: I0911 00:29:04.961126 2711 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d95b654e-0a4d-4034-b126-c19de88e04f1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 00:29:04.961365 kubelet[2711]: I0911 00:29:04.961141 2711 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d95b654e-0a4d-4034-b126-c19de88e04f1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 00:29:05.193361 kubelet[2711]: I0911 00:29:05.193040 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cffjx" podStartSLOduration=2.14134654 podStartE2EDuration="18.193019115s" podCreationTimestamp="2025-09-11 00:28:47 +0000 UTC" firstStartedPulling="2025-09-11 00:28:48.407477563 +0000 UTC m=+17.593828457" lastFinishedPulling="2025-09-11 00:29:04.459150138 +0000 UTC m=+33.645501032" observedRunningTime="2025-09-11 00:29:05.17505261 +0000 UTC m=+34.361403504" watchObservedRunningTime="2025-09-11 00:29:05.193019115 +0000 UTC m=+34.379369999" Sep 11 00:29:05.259102 systemd[1]: Created slice kubepods-besteffort-pod5436562f_7f41_4653_bb23_32339b19b73a.slice - libcontainer container kubepods-besteffort-pod5436562f_7f41_4653_bb23_32339b19b73a.slice. 
Sep 11 00:29:05.262754 kubelet[2711]: I0911 00:29:05.262735 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5436562f-7f41-4653-bb23-32339b19b73a-whisker-backend-key-pair\") pod \"whisker-57f987cb84-zcc2x\" (UID: \"5436562f-7f41-4653-bb23-32339b19b73a\") " pod="calico-system/whisker-57f987cb84-zcc2x" Sep 11 00:29:05.263657 kubelet[2711]: I0911 00:29:05.263637 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9g2\" (UniqueName: \"kubernetes.io/projected/5436562f-7f41-4653-bb23-32339b19b73a-kube-api-access-nw9g2\") pod \"whisker-57f987cb84-zcc2x\" (UID: \"5436562f-7f41-4653-bb23-32339b19b73a\") " pod="calico-system/whisker-57f987cb84-zcc2x" Sep 11 00:29:05.263980 kubelet[2711]: I0911 00:29:05.263935 2711 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5436562f-7f41-4653-bb23-32339b19b73a-whisker-ca-bundle\") pod \"whisker-57f987cb84-zcc2x\" (UID: \"5436562f-7f41-4653-bb23-32339b19b73a\") " pod="calico-system/whisker-57f987cb84-zcc2x" Sep 11 00:29:05.278682 containerd[1591]: time="2025-09-11T00:29:05.278614407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0\" id:\"75a9ce54dbb477e36a86fd309a69bd3e5ef119a915c2b34c300eeb6b52b59021\" pid:3955 exit_status:1 exited_at:{seconds:1757550545 nanos:278208193}" Sep 11 00:29:05.565841 containerd[1591]: time="2025-09-11T00:29:05.565630060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57f987cb84-zcc2x,Uid:5436562f-7f41-4653-bb23-32339b19b73a,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:05.784753 systemd-networkd[1487]: caliba4c4ee42e1: Link UP Sep 11 00:29:05.785340 systemd-networkd[1487]: caliba4c4ee42e1: Gained carrier Sep 11 
00:29:05.800525 containerd[1591]: 2025-09-11 00:29:05.637 [INFO][3970] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:29:05.800525 containerd[1591]: 2025-09-11 00:29:05.657 [INFO][3970] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--57f987cb84--zcc2x-eth0 whisker-57f987cb84- calico-system 5436562f-7f41-4653-bb23-32339b19b73a 893 0 2025-09-11 00:29:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57f987cb84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-57f987cb84-zcc2x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliba4c4ee42e1 [] [] }} ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Namespace="calico-system" Pod="whisker-57f987cb84-zcc2x" WorkloadEndpoint="localhost-k8s-whisker--57f987cb84--zcc2x-" Sep 11 00:29:05.800525 containerd[1591]: 2025-09-11 00:29:05.657 [INFO][3970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Namespace="calico-system" Pod="whisker-57f987cb84-zcc2x" WorkloadEndpoint="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" Sep 11 00:29:05.800525 containerd[1591]: 2025-09-11 00:29:05.732 [INFO][3985] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" HandleID="k8s-pod-network.535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Workload="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.732 [INFO][3985] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" 
HandleID="k8s-pod-network.535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Workload="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f3c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-57f987cb84-zcc2x", "timestamp":"2025-09-11 00:29:05.732253709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.733 [INFO][3985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.733 [INFO][3985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.733 [INFO][3985] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.743 [INFO][3985] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" host="localhost" Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.750 [INFO][3985] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.754 [INFO][3985] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.756 [INFO][3985] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.757 [INFO][3985] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:05.800856 containerd[1591]: 2025-09-11 00:29:05.758 
[INFO][3985] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" host="localhost" Sep 11 00:29:05.801141 containerd[1591]: 2025-09-11 00:29:05.759 [INFO][3985] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e Sep 11 00:29:05.801141 containerd[1591]: 2025-09-11 00:29:05.763 [INFO][3985] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" host="localhost" Sep 11 00:29:05.801141 containerd[1591]: 2025-09-11 00:29:05.772 [INFO][3985] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" host="localhost" Sep 11 00:29:05.801141 containerd[1591]: 2025-09-11 00:29:05.772 [INFO][3985] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" host="localhost" Sep 11 00:29:05.801141 containerd[1591]: 2025-09-11 00:29:05.772 [INFO][3985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:29:05.801141 containerd[1591]: 2025-09-11 00:29:05.772 [INFO][3985] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" HandleID="k8s-pod-network.535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Workload="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" Sep 11 00:29:05.801293 containerd[1591]: 2025-09-11 00:29:05.775 [INFO][3970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Namespace="calico-system" Pod="whisker-57f987cb84-zcc2x" WorkloadEndpoint="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57f987cb84--zcc2x-eth0", GenerateName:"whisker-57f987cb84-", Namespace:"calico-system", SelfLink:"", UID:"5436562f-7f41-4653-bb23-32339b19b73a", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57f987cb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-57f987cb84-zcc2x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba4c4ee42e1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:05.801293 containerd[1591]: 2025-09-11 00:29:05.776 [INFO][3970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Namespace="calico-system" Pod="whisker-57f987cb84-zcc2x" WorkloadEndpoint="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" Sep 11 00:29:05.801403 containerd[1591]: 2025-09-11 00:29:05.776 [INFO][3970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba4c4ee42e1 ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Namespace="calico-system" Pod="whisker-57f987cb84-zcc2x" WorkloadEndpoint="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" Sep 11 00:29:05.801403 containerd[1591]: 2025-09-11 00:29:05.785 [INFO][3970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Namespace="calico-system" Pod="whisker-57f987cb84-zcc2x" WorkloadEndpoint="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" Sep 11 00:29:05.801462 containerd[1591]: 2025-09-11 00:29:05.785 [INFO][3970] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Namespace="calico-system" Pod="whisker-57f987cb84-zcc2x" WorkloadEndpoint="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57f987cb84--zcc2x-eth0", GenerateName:"whisker-57f987cb84-", Namespace:"calico-system", SelfLink:"", UID:"5436562f-7f41-4653-bb23-32339b19b73a", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 29, 5, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57f987cb84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e", Pod:"whisker-57f987cb84-zcc2x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba4c4ee42e1", MAC:"aa:4c:ce:4b:7a:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:05.801527 containerd[1591]: 2025-09-11 00:29:05.796 [INFO][3970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" Namespace="calico-system" Pod="whisker-57f987cb84-zcc2x" WorkloadEndpoint="localhost-k8s-whisker--57f987cb84--zcc2x-eth0" Sep 11 00:29:05.839501 containerd[1591]: time="2025-09-11T00:29:05.838844386Z" level=info msg="connecting to shim 535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e" address="unix:///run/containerd/s/9643b22e86d031d630df9822ea301115d2d66cd374305529a1a3c19cae68fc49" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:05.871922 systemd[1]: Started cri-containerd-535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e.scope - libcontainer container 535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e. 
Sep 11 00:29:05.887801 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:29:05.927880 containerd[1591]: time="2025-09-11T00:29:05.927821282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57f987cb84-zcc2x,Uid:5436562f-7f41-4653-bb23-32339b19b73a,Namespace:calico-system,Attempt:0,} returns sandbox id \"535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e\"" Sep 11 00:29:05.929555 containerd[1591]: time="2025-09-11T00:29:05.929478927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:29:06.266101 containerd[1591]: time="2025-09-11T00:29:06.266042185Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0\" id:\"021105640853095adce8eaede1682f73d7eb505ca301acdc1e7b84b8b49e1076\" pid:4066 exit_status:1 exited_at:{seconds:1757550546 nanos:265412492}" Sep 11 00:29:06.901051 kubelet[2711]: I0911 00:29:06.900990 2711 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d95b654e-0a4d-4034-b126-c19de88e04f1" path="/var/lib/kubelet/pods/d95b654e-0a4d-4034-b126-c19de88e04f1/volumes" Sep 11 00:29:06.949841 systemd-networkd[1487]: caliba4c4ee42e1: Gained IPv6LL Sep 11 00:29:07.251589 containerd[1591]: time="2025-09-11T00:29:07.251380620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0\" id:\"258f3136ed32f5482e15283691f10aab4102730bc1a77a774c0ac3b1a43447d5\" pid:4109 exit_status:1 exited_at:{seconds:1757550547 nanos:251037284}" Sep 11 00:29:07.425029 containerd[1591]: time="2025-09-11T00:29:07.424976791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:07.425835 containerd[1591]: time="2025-09-11T00:29:07.425805438Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:29:07.427093 containerd[1591]: time="2025-09-11T00:29:07.427025750Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:07.429066 containerd[1591]: time="2025-09-11T00:29:07.429029215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:07.429616 containerd[1591]: time="2025-09-11T00:29:07.429587403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.500075434s" Sep 11 00:29:07.429657 containerd[1591]: time="2025-09-11T00:29:07.429617359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:29:07.431273 containerd[1591]: time="2025-09-11T00:29:07.431237223Z" level=info msg="CreateContainer within sandbox \"535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:29:07.438830 containerd[1591]: time="2025-09-11T00:29:07.438775666Z" level=info msg="Container a63a7fb8770c203cdb1b9fdfd0130697a4343c87cfb0a3a76be6cc6b23aac750: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:07.449689 containerd[1591]: time="2025-09-11T00:29:07.449634718Z" level=info msg="CreateContainer within sandbox \"535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a63a7fb8770c203cdb1b9fdfd0130697a4343c87cfb0a3a76be6cc6b23aac750\"" Sep 11 00:29:07.450284 containerd[1591]: time="2025-09-11T00:29:07.450228274Z" level=info msg="StartContainer for \"a63a7fb8770c203cdb1b9fdfd0130697a4343c87cfb0a3a76be6cc6b23aac750\"" Sep 11 00:29:07.451394 containerd[1591]: time="2025-09-11T00:29:07.451367393Z" level=info msg="connecting to shim a63a7fb8770c203cdb1b9fdfd0130697a4343c87cfb0a3a76be6cc6b23aac750" address="unix:///run/containerd/s/9643b22e86d031d630df9822ea301115d2d66cd374305529a1a3c19cae68fc49" protocol=ttrpc version=3 Sep 11 00:29:07.471827 systemd[1]: Started cri-containerd-a63a7fb8770c203cdb1b9fdfd0130697a4343c87cfb0a3a76be6cc6b23aac750.scope - libcontainer container a63a7fb8770c203cdb1b9fdfd0130697a4343c87cfb0a3a76be6cc6b23aac750. Sep 11 00:29:07.521754 containerd[1591]: time="2025-09-11T00:29:07.521580880Z" level=info msg="StartContainer for \"a63a7fb8770c203cdb1b9fdfd0130697a4343c87cfb0a3a76be6cc6b23aac750\" returns successfully" Sep 11 00:29:07.524835 containerd[1591]: time="2025-09-11T00:29:07.524801030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:29:08.898558 containerd[1591]: time="2025-09-11T00:29:08.898509413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jqts,Uid:71c881f4-eafd-455f-8e99-5f408175a910,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:09.007228 systemd-networkd[1487]: cali8ec201a3667: Link UP Sep 11 00:29:09.007458 systemd-networkd[1487]: cali8ec201a3667: Gained carrier Sep 11 00:29:09.028783 containerd[1591]: 2025-09-11 00:29:08.927 [INFO][4209] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:29:09.028783 containerd[1591]: 2025-09-11 00:29:08.940 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6jqts-eth0 csi-node-driver- calico-system 
71c881f4-eafd-455f-8e99-5f408175a910 715 0 2025-09-11 00:28:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6jqts eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8ec201a3667 [] [] }} ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Namespace="calico-system" Pod="csi-node-driver-6jqts" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jqts-" Sep 11 00:29:09.028783 containerd[1591]: 2025-09-11 00:29:08.941 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Namespace="calico-system" Pod="csi-node-driver-6jqts" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jqts-eth0" Sep 11 00:29:09.028783 containerd[1591]: 2025-09-11 00:29:08.970 [INFO][4223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" HandleID="k8s-pod-network.3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Workload="localhost-k8s-csi--node--driver--6jqts-eth0" Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.971 [INFO][4223] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" HandleID="k8s-pod-network.3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Workload="localhost-k8s-csi--node--driver--6jqts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6jqts", "timestamp":"2025-09-11 00:29:08.970968572 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.971 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.971 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.971 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.979 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" host="localhost" Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.983 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.987 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.989 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.991 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:09.029113 containerd[1591]: 2025-09-11 00:29:08.991 [INFO][4223] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" host="localhost" Sep 11 00:29:09.029429 containerd[1591]: 2025-09-11 00:29:08.992 [INFO][4223] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0 Sep 11 00:29:09.029429 containerd[1591]: 2025-09-11 00:29:08.996 [INFO][4223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" host="localhost" Sep 11 00:29:09.029429 containerd[1591]: 2025-09-11 00:29:09.001 [INFO][4223] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" host="localhost" Sep 11 00:29:09.029429 containerd[1591]: 2025-09-11 00:29:09.001 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" host="localhost" Sep 11 00:29:09.029429 containerd[1591]: 2025-09-11 00:29:09.001 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:29:09.029429 containerd[1591]: 2025-09-11 00:29:09.001 [INFO][4223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" HandleID="k8s-pod-network.3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Workload="localhost-k8s-csi--node--driver--6jqts-eth0" Sep 11 00:29:09.029610 containerd[1591]: 2025-09-11 00:29:09.004 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Namespace="calico-system" Pod="csi-node-driver-6jqts" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jqts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6jqts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71c881f4-eafd-455f-8e99-5f408175a910", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6jqts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ec201a3667", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:09.029750 containerd[1591]: 2025-09-11 00:29:09.004 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Namespace="calico-system" Pod="csi-node-driver-6jqts" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jqts-eth0" Sep 11 00:29:09.029750 containerd[1591]: 2025-09-11 00:29:09.005 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ec201a3667 ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Namespace="calico-system" Pod="csi-node-driver-6jqts" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jqts-eth0" Sep 11 00:29:09.029750 containerd[1591]: 2025-09-11 00:29:09.008 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Namespace="calico-system" Pod="csi-node-driver-6jqts" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jqts-eth0" Sep 11 00:29:09.029851 containerd[1591]: 2025-09-11 00:29:09.008 [INFO][4209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Namespace="calico-system" Pod="csi-node-driver-6jqts" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jqts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6jqts-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71c881f4-eafd-455f-8e99-5f408175a910", ResourceVersion:"715", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0", Pod:"csi-node-driver-6jqts", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ec201a3667", MAC:"fa:98:eb:6a:ba:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:09.029927 containerd[1591]: 2025-09-11 00:29:09.018 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" Namespace="calico-system" Pod="csi-node-driver-6jqts" WorkloadEndpoint="localhost-k8s-csi--node--driver--6jqts-eth0" Sep 11 00:29:09.053102 containerd[1591]: time="2025-09-11T00:29:09.053047761Z" level=info msg="connecting to shim 3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0" address="unix:///run/containerd/s/37c08e49c7b34b2e2d975b999fe85e76dd81a29b5e3f38557cadb163eabfaed6" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:09.087799 systemd[1]: Started 
cri-containerd-3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0.scope - libcontainer container 3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0. Sep 11 00:29:09.102715 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:29:09.129051 containerd[1591]: time="2025-09-11T00:29:09.129012288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jqts,Uid:71c881f4-eafd-455f-8e99-5f408175a910,Namespace:calico-system,Attempt:0,} returns sandbox id \"3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0\"" Sep 11 00:29:09.898878 containerd[1591]: time="2025-09-11T00:29:09.898815386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lw4ks,Uid:e3ee72d0-c689-456b-a71d-0ec4de2705b5,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:09.899749 containerd[1591]: time="2025-09-11T00:29:09.898820465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d877dc4f-8fmjk,Uid:b3431f32-9912-44b2-854e-49665a06965b,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:29:10.067233 systemd-networkd[1487]: cali2eaf7119bfe: Link UP Sep 11 00:29:10.067905 systemd-networkd[1487]: cali2eaf7119bfe: Gained carrier Sep 11 00:29:10.080456 containerd[1591]: 2025-09-11 00:29:09.966 [INFO][4313] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:29:10.080456 containerd[1591]: 2025-09-11 00:29:09.982 [INFO][4313] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0 calico-apiserver-5d877dc4f- calico-apiserver b3431f32-9912-44b2-854e-49665a06965b 820 0 2025-09-11 00:28:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d877dc4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d877dc4f-8fmjk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2eaf7119bfe [] [] }} ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-8fmjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-" Sep 11 00:29:10.080456 containerd[1591]: 2025-09-11 00:29:09.982 [INFO][4313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-8fmjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" Sep 11 00:29:10.080456 containerd[1591]: 2025-09-11 00:29:10.017 [INFO][4344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" HandleID="k8s-pod-network.97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Workload="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.017 [INFO][4344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" HandleID="k8s-pod-network.97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Workload="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d877dc4f-8fmjk", "timestamp":"2025-09-11 00:29:10.017027123 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.017 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.017 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.017 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.028 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" host="localhost" Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.033 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.038 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.041 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.043 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:10.080862 containerd[1591]: 2025-09-11 00:29:10.043 [INFO][4344] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" host="localhost" Sep 11 00:29:10.081093 containerd[1591]: 2025-09-11 00:29:10.045 [INFO][4344] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482 Sep 11 00:29:10.081093 containerd[1591]: 2025-09-11 00:29:10.051 [INFO][4344] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" host="localhost" Sep 11 00:29:10.081093 containerd[1591]: 2025-09-11 00:29:10.057 [INFO][4344] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" host="localhost" Sep 11 00:29:10.081093 containerd[1591]: 2025-09-11 00:29:10.058 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" host="localhost" Sep 11 00:29:10.081093 containerd[1591]: 2025-09-11 00:29:10.058 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:10.081093 containerd[1591]: 2025-09-11 00:29:10.058 [INFO][4344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" HandleID="k8s-pod-network.97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Workload="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" Sep 11 00:29:10.081228 containerd[1591]: 2025-09-11 00:29:10.064 [INFO][4313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-8fmjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0", GenerateName:"calico-apiserver-5d877dc4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3431f32-9912-44b2-854e-49665a06965b", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 45, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d877dc4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d877dc4f-8fmjk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2eaf7119bfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:10.081281 containerd[1591]: 2025-09-11 00:29:10.065 [INFO][4313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-8fmjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" Sep 11 00:29:10.081281 containerd[1591]: 2025-09-11 00:29:10.065 [INFO][4313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2eaf7119bfe ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-8fmjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" Sep 11 00:29:10.081281 containerd[1591]: 2025-09-11 00:29:10.068 [INFO][4313] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" 
Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-8fmjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" Sep 11 00:29:10.081354 containerd[1591]: 2025-09-11 00:29:10.068 [INFO][4313] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-8fmjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0", GenerateName:"calico-apiserver-5d877dc4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3431f32-9912-44b2-854e-49665a06965b", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d877dc4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482", Pod:"calico-apiserver-5d877dc4f-8fmjk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2eaf7119bfe", MAC:"86:eb:5c:ca:44:3a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:10.081412 containerd[1591]: 2025-09-11 00:29:10.077 [INFO][4313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-8fmjk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--8fmjk-eth0" Sep 11 00:29:10.131768 containerd[1591]: time="2025-09-11T00:29:10.131695846Z" level=info msg="connecting to shim 97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482" address="unix:///run/containerd/s/9a745f5db9b6f0103002481963549cd5450ef330d30c62b26eac230926df890c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:10.162856 systemd[1]: Started cri-containerd-97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482.scope - libcontainer container 97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482. 
Sep 11 00:29:10.178104 systemd-networkd[1487]: cali21b6ab38c05: Link UP Sep 11 00:29:10.185157 systemd-networkd[1487]: cali21b6ab38c05: Gained carrier Sep 11 00:29:10.186636 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:29:10.208887 containerd[1591]: 2025-09-11 00:29:09.980 [INFO][4315] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:29:10.208887 containerd[1591]: 2025-09-11 00:29:09.990 [INFO][4315] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0 coredns-668d6bf9bc- kube-system e3ee72d0-c689-456b-a71d-0ec4de2705b5 810 0 2025-09-11 00:28:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lw4ks eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali21b6ab38c05 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Namespace="kube-system" Pod="coredns-668d6bf9bc-lw4ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lw4ks-" Sep 11 00:29:10.208887 containerd[1591]: 2025-09-11 00:29:09.991 [INFO][4315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Namespace="kube-system" Pod="coredns-668d6bf9bc-lw4ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" Sep 11 00:29:10.208887 containerd[1591]: 2025-09-11 00:29:10.026 [INFO][4350] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" HandleID="k8s-pod-network.65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" 
Workload="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.027 [INFO][4350] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" HandleID="k8s-pod-network.65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Workload="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a53f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lw4ks", "timestamp":"2025-09-11 00:29:10.026840053 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.027 [INFO][4350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.058 [INFO][4350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.059 [INFO][4350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.129 [INFO][4350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" host="localhost" Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.138 [INFO][4350] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.145 [INFO][4350] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.148 [INFO][4350] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.150 [INFO][4350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:10.209230 containerd[1591]: 2025-09-11 00:29:10.150 [INFO][4350] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" host="localhost" Sep 11 00:29:10.209522 containerd[1591]: 2025-09-11 00:29:10.152 [INFO][4350] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04 Sep 11 00:29:10.209522 containerd[1591]: 2025-09-11 00:29:10.157 [INFO][4350] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" host="localhost" Sep 11 00:29:10.209522 containerd[1591]: 2025-09-11 00:29:10.166 [INFO][4350] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" host="localhost" Sep 11 00:29:10.209522 containerd[1591]: 2025-09-11 00:29:10.166 [INFO][4350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" host="localhost" Sep 11 00:29:10.209522 containerd[1591]: 2025-09-11 00:29:10.166 [INFO][4350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:10.209522 containerd[1591]: 2025-09-11 00:29:10.167 [INFO][4350] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" HandleID="k8s-pod-network.65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Workload="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" Sep 11 00:29:10.209799 containerd[1591]: 2025-09-11 00:29:10.174 [INFO][4315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Namespace="kube-system" Pod="coredns-668d6bf9bc-lw4ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3ee72d0-c689-456b-a71d-0ec4de2705b5", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lw4ks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21b6ab38c05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:10.209985 containerd[1591]: 2025-09-11 00:29:10.174 [INFO][4315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Namespace="kube-system" Pod="coredns-668d6bf9bc-lw4ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" Sep 11 00:29:10.209985 containerd[1591]: 2025-09-11 00:29:10.174 [INFO][4315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21b6ab38c05 ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Namespace="kube-system" Pod="coredns-668d6bf9bc-lw4ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" Sep 11 00:29:10.209985 containerd[1591]: 2025-09-11 00:29:10.186 [INFO][4315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Namespace="kube-system" Pod="coredns-668d6bf9bc-lw4ks" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" Sep 11 00:29:10.210103 containerd[1591]: 2025-09-11 00:29:10.187 [INFO][4315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Namespace="kube-system" Pod="coredns-668d6bf9bc-lw4ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e3ee72d0-c689-456b-a71d-0ec4de2705b5", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04", Pod:"coredns-668d6bf9bc-lw4ks", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21b6ab38c05", MAC:"c6:1e:b3:4d:26:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:10.210103 containerd[1591]: 2025-09-11 00:29:10.199 [INFO][4315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" Namespace="kube-system" Pod="coredns-668d6bf9bc-lw4ks" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lw4ks-eth0" Sep 11 00:29:10.283734 containerd[1591]: time="2025-09-11T00:29:10.283650476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d877dc4f-8fmjk,Uid:b3431f32-9912-44b2-854e-49665a06965b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482\"" Sep 11 00:29:10.304718 containerd[1591]: time="2025-09-11T00:29:10.304639806Z" level=info msg="connecting to shim 65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04" address="unix:///run/containerd/s/e045431e2dcbf2003400c2b0aaaa43888c83fea49cbb764189206768e569dcb9" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:10.333869 systemd[1]: Started cri-containerd-65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04.scope - libcontainer container 65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04. 
Sep 11 00:29:10.353336 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:29:10.392868 containerd[1591]: time="2025-09-11T00:29:10.392728858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lw4ks,Uid:e3ee72d0-c689-456b-a71d-0ec4de2705b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04\"" Sep 11 00:29:10.396126 containerd[1591]: time="2025-09-11T00:29:10.396097596Z" level=info msg="CreateContainer within sandbox \"65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:29:10.411949 containerd[1591]: time="2025-09-11T00:29:10.411896938Z" level=info msg="Container 8ad05e567b5ae467576eca015e2d480ff04ea681131e84e6f1c14defa980dbd9: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:10.419260 containerd[1591]: time="2025-09-11T00:29:10.418445598Z" level=info msg="CreateContainer within sandbox \"65541cc3014270db204cbc0a4438c02e7ad733566edef8b5d1f831eb0697ef04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ad05e567b5ae467576eca015e2d480ff04ea681131e84e6f1c14defa980dbd9\"" Sep 11 00:29:10.419260 containerd[1591]: time="2025-09-11T00:29:10.419209623Z" level=info msg="StartContainer for \"8ad05e567b5ae467576eca015e2d480ff04ea681131e84e6f1c14defa980dbd9\"" Sep 11 00:29:10.420545 containerd[1591]: time="2025-09-11T00:29:10.420509203Z" level=info msg="connecting to shim 8ad05e567b5ae467576eca015e2d480ff04ea681131e84e6f1c14defa980dbd9" address="unix:///run/containerd/s/e045431e2dcbf2003400c2b0aaaa43888c83fea49cbb764189206768e569dcb9" protocol=ttrpc version=3 Sep 11 00:29:10.452013 systemd[1]: Started cri-containerd-8ad05e567b5ae467576eca015e2d480ff04ea681131e84e6f1c14defa980dbd9.scope - libcontainer container 8ad05e567b5ae467576eca015e2d480ff04ea681131e84e6f1c14defa980dbd9. 
Sep 11 00:29:10.550165 containerd[1591]: time="2025-09-11T00:29:10.550106704Z" level=info msg="StartContainer for \"8ad05e567b5ae467576eca015e2d480ff04ea681131e84e6f1c14defa980dbd9\" returns successfully" Sep 11 00:29:10.552403 containerd[1591]: time="2025-09-11T00:29:10.552348113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:10.553316 containerd[1591]: time="2025-09-11T00:29:10.553271718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:29:10.554700 containerd[1591]: time="2025-09-11T00:29:10.554628566Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:10.557112 containerd[1591]: time="2025-09-11T00:29:10.557079890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:10.557934 containerd[1591]: time="2025-09-11T00:29:10.557891726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.033051242s" Sep 11 00:29:10.557981 containerd[1591]: time="2025-09-11T00:29:10.557947079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:29:10.559413 containerd[1591]: time="2025-09-11T00:29:10.559363730Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 00:29:10.560883 containerd[1591]: time="2025-09-11T00:29:10.560391260Z" level=info msg="CreateContainer within sandbox \"535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:29:10.569636 containerd[1591]: time="2025-09-11T00:29:10.569586980Z" level=info msg="Container c505c6e6c991d89bdadb6a45686d55fa62e6a523975f287d40f94de761d5c87a: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:10.578024 containerd[1591]: time="2025-09-11T00:29:10.577979593Z" level=info msg="CreateContainer within sandbox \"535de9d0f6084749b1cdfb815ab82d237c5e9a6464f74075a93384d92d91948e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c505c6e6c991d89bdadb6a45686d55fa62e6a523975f287d40f94de761d5c87a\"" Sep 11 00:29:10.578480 containerd[1591]: time="2025-09-11T00:29:10.578457460Z" level=info msg="StartContainer for \"c505c6e6c991d89bdadb6a45686d55fa62e6a523975f287d40f94de761d5c87a\"" Sep 11 00:29:10.579510 containerd[1591]: time="2025-09-11T00:29:10.579488667Z" level=info msg="connecting to shim c505c6e6c991d89bdadb6a45686d55fa62e6a523975f287d40f94de761d5c87a" address="unix:///run/containerd/s/9643b22e86d031d630df9822ea301115d2d66cd374305529a1a3c19cae68fc49" protocol=ttrpc version=3 Sep 11 00:29:10.604837 systemd[1]: Started cri-containerd-c505c6e6c991d89bdadb6a45686d55fa62e6a523975f287d40f94de761d5c87a.scope - libcontainer container c505c6e6c991d89bdadb6a45686d55fa62e6a523975f287d40f94de761d5c87a. Sep 11 00:29:10.665699 containerd[1591]: time="2025-09-11T00:29:10.665254386Z" level=info msg="StartContainer for \"c505c6e6c991d89bdadb6a45686d55fa62e6a523975f287d40f94de761d5c87a\" returns successfully" Sep 11 00:29:10.940209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1760842773.mount: Deactivated successfully. 
Sep 11 00:29:11.046868 systemd-networkd[1487]: cali8ec201a3667: Gained IPv6LL Sep 11 00:29:11.444085 kubelet[2711]: I0911 00:29:11.443765 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57f987cb84-zcc2x" podStartSLOduration=1.814229703 podStartE2EDuration="6.443727349s" podCreationTimestamp="2025-09-11 00:29:05 +0000 UTC" firstStartedPulling="2025-09-11 00:29:05.92918767 +0000 UTC m=+35.115538564" lastFinishedPulling="2025-09-11 00:29:10.558685316 +0000 UTC m=+39.745036210" observedRunningTime="2025-09-11 00:29:11.336424473 +0000 UTC m=+40.522775457" watchObservedRunningTime="2025-09-11 00:29:11.443727349 +0000 UTC m=+40.630078243" Sep 11 00:29:11.446623 kubelet[2711]: I0911 00:29:11.444207 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lw4ks" podStartSLOduration=34.444200949 podStartE2EDuration="34.444200949s" podCreationTimestamp="2025-09-11 00:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:29:11.442926526 +0000 UTC m=+40.629277420" watchObservedRunningTime="2025-09-11 00:29:11.444200949 +0000 UTC m=+40.630551843" Sep 11 00:29:11.493848 systemd-networkd[1487]: cali2eaf7119bfe: Gained IPv6LL Sep 11 00:29:11.898300 containerd[1591]: time="2025-09-11T00:29:11.898261152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxwqr,Uid:6f56fe66-3e02-4c5d-b9e7-77afeda29f26,Namespace:kube-system,Attempt:0,}" Sep 11 00:29:11.898769 containerd[1591]: time="2025-09-11T00:29:11.898314943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686957df88-qdb9n,Uid:e829cab8-e382-4993-903a-ff19cd566794,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:12.152503 systemd-networkd[1487]: cali7cc909782c2: Link UP Sep 11 00:29:12.152848 systemd-networkd[1487]: cali7cc909782c2: Gained carrier Sep 11 
00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.077 [INFO][4597] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.090 [INFO][4597] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0 coredns-668d6bf9bc- kube-system 6f56fe66-3e02-4c5d-b9e7-77afeda29f26 822 0 2025-09-11 00:28:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-zxwqr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7cc909782c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxwqr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zxwqr-" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.090 [INFO][4597] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxwqr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.116 [INFO][4625] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" HandleID="k8s-pod-network.8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Workload="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.117 [INFO][4625] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" 
HandleID="k8s-pod-network.8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Workload="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c71b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-zxwqr", "timestamp":"2025-09-11 00:29:12.116892486 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.117 [INFO][4625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.117 [INFO][4625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.117 [INFO][4625] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.124 [INFO][4625] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" host="localhost" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.128 [INFO][4625] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.132 [INFO][4625] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.133 [INFO][4625] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.136 [INFO][4625] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.136 
[INFO][4625] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" host="localhost" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.138 [INFO][4625] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880 Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.142 [INFO][4625] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" host="localhost" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.147 [INFO][4625] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" host="localhost" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.147 [INFO][4625] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" host="localhost" Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.147 [INFO][4625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:29:12.166939 containerd[1591]: 2025-09-11 00:29:12.147 [INFO][4625] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" HandleID="k8s-pod-network.8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Workload="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" Sep 11 00:29:12.167753 containerd[1591]: 2025-09-11 00:29:12.150 [INFO][4597] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxwqr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6f56fe66-3e02-4c5d-b9e7-77afeda29f26", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-zxwqr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cc909782c2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:12.167753 containerd[1591]: 2025-09-11 00:29:12.150 [INFO][4597] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxwqr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" Sep 11 00:29:12.167753 containerd[1591]: 2025-09-11 00:29:12.150 [INFO][4597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cc909782c2 ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxwqr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" Sep 11 00:29:12.167753 containerd[1591]: 2025-09-11 00:29:12.153 [INFO][4597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxwqr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" Sep 11 00:29:12.167753 containerd[1591]: 2025-09-11 00:29:12.153 [INFO][4597] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxwqr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6f56fe66-3e02-4c5d-b9e7-77afeda29f26", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880", Pod:"coredns-668d6bf9bc-zxwqr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cc909782c2", MAC:"d2:d0:cd:5e:6e:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:12.167753 containerd[1591]: 2025-09-11 00:29:12.163 [INFO][4597] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" Namespace="kube-system" Pod="coredns-668d6bf9bc-zxwqr" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--zxwqr-eth0" Sep 11 00:29:12.191650 containerd[1591]: time="2025-09-11T00:29:12.191594999Z" level=info msg="connecting to shim 8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880" address="unix:///run/containerd/s/641053cdfa97c809131f8500fef454b6d3e5af7aaa7c1c49823c2d3c392cbe60" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:12.226036 systemd[1]: Started cri-containerd-8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880.scope - libcontainer container 8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880. Sep 11 00:29:12.243737 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:29:12.263195 systemd-networkd[1487]: cali21b6ab38c05: Gained IPv6LL Sep 11 00:29:12.267213 systemd-networkd[1487]: cali66b3dd9b5b6: Link UP Sep 11 00:29:12.267752 systemd-networkd[1487]: cali66b3dd9b5b6: Gained carrier Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.081 [INFO][4598] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.093 [INFO][4598] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0 calico-kube-controllers-686957df88- calico-system e829cab8-e382-4993-903a-ff19cd566794 814 0 2025-09-11 00:28:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:686957df88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-686957df88-qdb9n eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali66b3dd9b5b6 [] [] }} ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Namespace="calico-system" Pod="calico-kube-controllers-686957df88-qdb9n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.093 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Namespace="calico-system" Pod="calico-kube-controllers-686957df88-qdb9n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.124 [INFO][4631] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" HandleID="k8s-pod-network.a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Workload="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.125 [INFO][4631] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" HandleID="k8s-pod-network.a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Workload="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-686957df88-qdb9n", "timestamp":"2025-09-11 00:29:12.124886176 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.125 [INFO][4631] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.147 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.147 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.225 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" host="localhost" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.230 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.236 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.238 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.241 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.241 [INFO][4631] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" host="localhost" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.244 [INFO][4631] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.249 [INFO][4631] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" host="localhost" Sep 11 00:29:12.285533 
containerd[1591]: 2025-09-11 00:29:12.256 [INFO][4631] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" host="localhost" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.256 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" host="localhost" Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.256 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:12.285533 containerd[1591]: 2025-09-11 00:29:12.256 [INFO][4631] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" HandleID="k8s-pod-network.a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Workload="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" Sep 11 00:29:12.286197 containerd[1591]: 2025-09-11 00:29:12.260 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Namespace="calico-system" Pod="calico-kube-controllers-686957df88-qdb9n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0", GenerateName:"calico-kube-controllers-686957df88-", Namespace:"calico-system", SelfLink:"", UID:"e829cab8-e382-4993-903a-ff19cd566794", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"686957df88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-686957df88-qdb9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali66b3dd9b5b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:12.286197 containerd[1591]: 2025-09-11 00:29:12.260 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Namespace="calico-system" Pod="calico-kube-controllers-686957df88-qdb9n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" Sep 11 00:29:12.286197 containerd[1591]: 2025-09-11 00:29:12.260 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66b3dd9b5b6 ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Namespace="calico-system" Pod="calico-kube-controllers-686957df88-qdb9n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" Sep 11 00:29:12.286197 containerd[1591]: 2025-09-11 00:29:12.267 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" 
Namespace="calico-system" Pod="calico-kube-controllers-686957df88-qdb9n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" Sep 11 00:29:12.286197 containerd[1591]: 2025-09-11 00:29:12.268 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Namespace="calico-system" Pod="calico-kube-controllers-686957df88-qdb9n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0", GenerateName:"calico-kube-controllers-686957df88-", Namespace:"calico-system", SelfLink:"", UID:"e829cab8-e382-4993-903a-ff19cd566794", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"686957df88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e", Pod:"calico-kube-controllers-686957df88-qdb9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali66b3dd9b5b6", MAC:"6e:c0:ff:e0:1b:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:12.286197 containerd[1591]: 2025-09-11 00:29:12.279 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" Namespace="calico-system" Pod="calico-kube-controllers-686957df88-qdb9n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--686957df88--qdb9n-eth0" Sep 11 00:29:12.294083 containerd[1591]: time="2025-09-11T00:29:12.294044253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zxwqr,Uid:6f56fe66-3e02-4c5d-b9e7-77afeda29f26,Namespace:kube-system,Attempt:0,} returns sandbox id \"8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880\"" Sep 11 00:29:12.297270 containerd[1591]: time="2025-09-11T00:29:12.297215917Z" level=info msg="CreateContainer within sandbox \"8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:29:12.313036 containerd[1591]: time="2025-09-11T00:29:12.312979097Z" level=info msg="Container ba295850deae0eaa35627606c4d34e1d48ac92ecf67ff240a8f3f5edef314482: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:12.315748 containerd[1591]: time="2025-09-11T00:29:12.315720785Z" level=info msg="connecting to shim a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e" address="unix:///run/containerd/s/273aed1f6f7e88918ccf54603c4fea1decd35550235028e3d3398568bd018af0" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:12.320546 containerd[1591]: time="2025-09-11T00:29:12.320504859Z" level=info msg="CreateContainer within sandbox \"8427fd8485eb60f782b07a708bd83a01fd339941c938456e56fa9bbbd0c07880\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba295850deae0eaa35627606c4d34e1d48ac92ecf67ff240a8f3f5edef314482\"" 
Sep 11 00:29:12.321251 containerd[1591]: time="2025-09-11T00:29:12.321226574Z" level=info msg="StartContainer for \"ba295850deae0eaa35627606c4d34e1d48ac92ecf67ff240a8f3f5edef314482\"" Sep 11 00:29:12.322106 containerd[1591]: time="2025-09-11T00:29:12.322076039Z" level=info msg="connecting to shim ba295850deae0eaa35627606c4d34e1d48ac92ecf67ff240a8f3f5edef314482" address="unix:///run/containerd/s/641053cdfa97c809131f8500fef454b6d3e5af7aaa7c1c49823c2d3c392cbe60" protocol=ttrpc version=3 Sep 11 00:29:12.348902 systemd[1]: Started cri-containerd-ba295850deae0eaa35627606c4d34e1d48ac92ecf67ff240a8f3f5edef314482.scope - libcontainer container ba295850deae0eaa35627606c4d34e1d48ac92ecf67ff240a8f3f5edef314482. Sep 11 00:29:12.361194 systemd[1]: Started cri-containerd-a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e.scope - libcontainer container a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e. Sep 11 00:29:12.386791 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:29:12.397147 containerd[1591]: time="2025-09-11T00:29:12.397092621Z" level=info msg="StartContainer for \"ba295850deae0eaa35627606c4d34e1d48ac92ecf67ff240a8f3f5edef314482\" returns successfully" Sep 11 00:29:12.429286 containerd[1591]: time="2025-09-11T00:29:12.428745107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-686957df88-qdb9n,Uid:e829cab8-e382-4993-903a-ff19cd566794,Namespace:calico-system,Attempt:0,} returns sandbox id \"a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e\"" Sep 11 00:29:12.533848 containerd[1591]: time="2025-09-11T00:29:12.533767773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:12.534742 containerd[1591]: time="2025-09-11T00:29:12.534715973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active 
requests=0, bytes read=8760527" Sep 11 00:29:12.536146 containerd[1591]: time="2025-09-11T00:29:12.536119007Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:12.538340 containerd[1591]: time="2025-09-11T00:29:12.538290405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:12.538926 containerd[1591]: time="2025-09-11T00:29:12.538884751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.979483612s" Sep 11 00:29:12.538926 containerd[1591]: time="2025-09-11T00:29:12.538914898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 11 00:29:12.540241 containerd[1591]: time="2025-09-11T00:29:12.540214157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:29:12.546947 containerd[1591]: time="2025-09-11T00:29:12.546891245Z" level=info msg="CreateContainer within sandbox \"3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 00:29:12.574995 containerd[1591]: time="2025-09-11T00:29:12.574943270Z" level=info msg="Container 1ac03160c3fea48ea0c36b4a0b42ac4670fd7165897fce87c18a0195cd987f34: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:12.583226 containerd[1591]: time="2025-09-11T00:29:12.583181038Z" level=info msg="CreateContainer within sandbox 
\"3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1ac03160c3fea48ea0c36b4a0b42ac4670fd7165897fce87c18a0195cd987f34\"" Sep 11 00:29:12.584600 containerd[1591]: time="2025-09-11T00:29:12.583751760Z" level=info msg="StartContainer for \"1ac03160c3fea48ea0c36b4a0b42ac4670fd7165897fce87c18a0195cd987f34\"" Sep 11 00:29:12.585935 containerd[1591]: time="2025-09-11T00:29:12.585860780Z" level=info msg="connecting to shim 1ac03160c3fea48ea0c36b4a0b42ac4670fd7165897fce87c18a0195cd987f34" address="unix:///run/containerd/s/37c08e49c7b34b2e2d975b999fe85e76dd81a29b5e3f38557cadb163eabfaed6" protocol=ttrpc version=3 Sep 11 00:29:12.609816 systemd[1]: Started cri-containerd-1ac03160c3fea48ea0c36b4a0b42ac4670fd7165897fce87c18a0195cd987f34.scope - libcontainer container 1ac03160c3fea48ea0c36b4a0b42ac4670fd7165897fce87c18a0195cd987f34. Sep 11 00:29:12.656405 containerd[1591]: time="2025-09-11T00:29:12.656348759Z" level=info msg="StartContainer for \"1ac03160c3fea48ea0c36b4a0b42ac4670fd7165897fce87c18a0195cd987f34\" returns successfully" Sep 11 00:29:12.899626 containerd[1591]: time="2025-09-11T00:29:12.899263101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d877dc4f-2qzk9,Uid:4d9eab99-910e-4c64-99d1-bf4648e7e6e1,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:29:12.900493 containerd[1591]: time="2025-09-11T00:29:12.899587209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gpd4x,Uid:a5119758-2a9a-4f64-84be-5f6700dc2598,Namespace:calico-system,Attempt:0,}" Sep 11 00:29:12.922584 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:42230.service - OpenSSH per-connection server daemon (10.0.0.1:42230). 
Sep 11 00:29:12.997250 sshd[4857]: Accepted publickey for core from 10.0.0.1 port 42230 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:29:13.000451 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:13.006981 systemd-logind[1565]: New session 8 of user core. Sep 11 00:29:13.011821 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:29:13.074788 systemd-networkd[1487]: cali316cc24b636: Link UP Sep 11 00:29:13.075961 systemd-networkd[1487]: cali316cc24b636: Gained carrier Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:12.977 [INFO][4840] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:12.991 [INFO][4840] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--gpd4x-eth0 goldmane-54d579b49d- calico-system a5119758-2a9a-4f64-84be-5f6700dc2598 821 0 2025-09-11 00:28:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-gpd4x eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali316cc24b636 [] [] }} ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Namespace="calico-system" Pod="goldmane-54d579b49d-gpd4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gpd4x-" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:12.991 [INFO][4840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Namespace="calico-system" Pod="goldmane-54d579b49d-gpd4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 
00:29:13.024 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" HandleID="k8s-pod-network.6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Workload="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.024 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" HandleID="k8s-pod-network.6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Workload="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-gpd4x", "timestamp":"2025-09-11 00:29:13.024127431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.024 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.024 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.024 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.031 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" host="localhost" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.038 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.042 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.043 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.045 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.045 [INFO][4871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" host="localhost" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.050 [INFO][4871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.056 [INFO][4871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" host="localhost" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.062 [INFO][4871] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" host="localhost" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.062 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" host="localhost" Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.062 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:29:13.093347 containerd[1591]: 2025-09-11 00:29:13.063 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" HandleID="k8s-pod-network.6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Workload="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" Sep 11 00:29:13.093927 containerd[1591]: 2025-09-11 00:29:13.068 [INFO][4840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Namespace="calico-system" Pod="goldmane-54d579b49d-gpd4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--gpd4x-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a5119758-2a9a-4f64-84be-5f6700dc2598", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-gpd4x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali316cc24b636", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:13.093927 containerd[1591]: 2025-09-11 00:29:13.069 [INFO][4840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Namespace="calico-system" Pod="goldmane-54d579b49d-gpd4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" Sep 11 00:29:13.093927 containerd[1591]: 2025-09-11 00:29:13.069 [INFO][4840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali316cc24b636 ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Namespace="calico-system" Pod="goldmane-54d579b49d-gpd4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" Sep 11 00:29:13.093927 containerd[1591]: 2025-09-11 00:29:13.075 [INFO][4840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Namespace="calico-system" Pod="goldmane-54d579b49d-gpd4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" Sep 11 00:29:13.093927 containerd[1591]: 2025-09-11 00:29:13.076 [INFO][4840] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Namespace="calico-system" Pod="goldmane-54d579b49d-gpd4x" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--gpd4x-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"a5119758-2a9a-4f64-84be-5f6700dc2598", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa", Pod:"goldmane-54d579b49d-gpd4x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali316cc24b636", MAC:"ce:91:ac:7c:1c:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:13.093927 containerd[1591]: 2025-09-11 00:29:13.089 [INFO][4840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" Namespace="calico-system" Pod="goldmane-54d579b49d-gpd4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gpd4x-eth0" Sep 11 00:29:13.127634 containerd[1591]: time="2025-09-11T00:29:13.127579099Z" level=info msg="connecting to shim 
6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa" address="unix:///run/containerd/s/85f10d15424facc70d10a164672bcbe0a5706048933080ce20e56a99af09dc58" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:13.159795 systemd[1]: Started cri-containerd-6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa.scope - libcontainer container 6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa. Sep 11 00:29:13.181649 systemd-networkd[1487]: cali7aad6e67849: Link UP Sep 11 00:29:13.182124 systemd-networkd[1487]: cali7aad6e67849: Gained carrier Sep 11 00:29:13.196803 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:29:13.224792 sshd[4884]: Connection closed by 10.0.0.1 port 42230 Sep 11 00:29:13.223894 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:13.225195 kubelet[2711]: I0911 00:29:13.223768 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zxwqr" podStartSLOduration=36.223744627 podStartE2EDuration="36.223744627s" podCreationTimestamp="2025-09-11 00:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:29:13.2193734 +0000 UTC m=+42.405724284" watchObservedRunningTime="2025-09-11 00:29:13.223744627 +0000 UTC m=+42.410095521" Sep 11 00:29:13.232780 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:42230.service: Deactivated successfully. Sep 11 00:29:13.234039 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit. 
Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:12.976 [INFO][4858] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:12.993 [INFO][4858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0 calico-apiserver-5d877dc4f- calico-apiserver 4d9eab99-910e-4c64-99d1-bf4648e7e6e1 817 0 2025-09-11 00:28:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d877dc4f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d877dc4f-2qzk9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7aad6e67849 [] [] }} ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-2qzk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:12.993 [INFO][4858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-2qzk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.027 [INFO][4873] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" HandleID="k8s-pod-network.c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Workload="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.027 [INFO][4873] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" HandleID="k8s-pod-network.c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Workload="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d877dc4f-2qzk9", "timestamp":"2025-09-11 00:29:13.027083841 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.027 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.063 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.063 [INFO][4873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.132 [INFO][4873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" host="localhost" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.138 [INFO][4873] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.142 [INFO][4873] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.144 [INFO][4873] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.148 [INFO][4873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.148 [INFO][4873] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" host="localhost" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.149 [INFO][4873] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.153 [INFO][4873] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" host="localhost" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.165 [INFO][4873] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" host="localhost" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.165 [INFO][4873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" host="localhost" Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.166 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:29:13.235407 containerd[1591]: 2025-09-11 00:29:13.166 [INFO][4873] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" HandleID="k8s-pod-network.c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Workload="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" Sep 11 00:29:13.235883 containerd[1591]: 2025-09-11 00:29:13.173 [INFO][4858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-2qzk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0", GenerateName:"calico-apiserver-5d877dc4f-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d9eab99-910e-4c64-99d1-bf4648e7e6e1", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d877dc4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d877dc4f-2qzk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7aad6e67849", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:13.235883 containerd[1591]: 2025-09-11 00:29:13.175 [INFO][4858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-2qzk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" Sep 11 00:29:13.235883 containerd[1591]: 2025-09-11 00:29:13.175 [INFO][4858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7aad6e67849 ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-2qzk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" Sep 11 00:29:13.235883 containerd[1591]: 2025-09-11 00:29:13.188 [INFO][4858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-2qzk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" Sep 11 00:29:13.235883 containerd[1591]: 2025-09-11 00:29:13.189 [INFO][4858] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-2qzk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0", GenerateName:"calico-apiserver-5d877dc4f-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"4d9eab99-910e-4c64-99d1-bf4648e7e6e1", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 28, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d877dc4f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d", Pod:"calico-apiserver-5d877dc4f-2qzk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7aad6e67849", MAC:"2a:82:16:3b:3e:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:29:13.235883 containerd[1591]: 2025-09-11 00:29:13.215 [INFO][4858] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" Namespace="calico-apiserver" Pod="calico-apiserver-5d877dc4f-2qzk9" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d877dc4f--2qzk9-eth0" Sep 11 00:29:13.238100 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:29:13.240992 systemd-logind[1565]: Removed session 8. 
Sep 11 00:29:13.295939 containerd[1591]: time="2025-09-11T00:29:13.295867503Z" level=info msg="connecting to shim c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d" address="unix:///run/containerd/s/f630c43e7c32292ac8f7423fa3834c748222c2464867c7a2dce17eedfa2a537d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:29:13.305655 containerd[1591]: time="2025-09-11T00:29:13.305614403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gpd4x,Uid:a5119758-2a9a-4f64-84be-5f6700dc2598,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa\"" Sep 11 00:29:13.340867 systemd[1]: Started cri-containerd-c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d.scope - libcontainer container c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d. Sep 11 00:29:13.350039 systemd-networkd[1487]: cali66b3dd9b5b6: Gained IPv6LL Sep 11 00:29:13.356890 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:29:13.392433 containerd[1591]: time="2025-09-11T00:29:13.392380814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d877dc4f-2qzk9,Uid:4d9eab99-910e-4c64-99d1-bf4648e7e6e1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d\"" Sep 11 00:29:13.605866 systemd-networkd[1487]: cali7cc909782c2: Gained IPv6LL Sep 11 00:29:14.641778 kubelet[2711]: I0911 00:29:14.641582 2711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:29:14.757842 systemd-networkd[1487]: cali316cc24b636: Gained IPv6LL Sep 11 00:29:14.822004 systemd-networkd[1487]: cali7aad6e67849: Gained IPv6LL Sep 11 00:29:15.072229 containerd[1591]: time="2025-09-11T00:29:15.071771765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:15.077745 containerd[1591]: time="2025-09-11T00:29:15.077635633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 11 00:29:15.081682 containerd[1591]: time="2025-09-11T00:29:15.079871951Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:15.086492 containerd[1591]: time="2025-09-11T00:29:15.086421386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:15.087094 containerd[1591]: time="2025-09-11T00:29:15.087056098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 2.546807976s" Sep 11 00:29:15.087145 containerd[1591]: time="2025-09-11T00:29:15.087109147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:29:15.089941 containerd[1591]: time="2025-09-11T00:29:15.089893143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 00:29:15.092260 containerd[1591]: time="2025-09-11T00:29:15.091762984Z" level=info msg="CreateContainer within sandbox \"97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:29:15.112335 containerd[1591]: 
time="2025-09-11T00:29:15.111718045Z" level=info msg="Container ac72f02992c1cbd8eb8e46dc11e877f88e6cde6d306ec005ee342b204461db0f: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:15.122758 containerd[1591]: time="2025-09-11T00:29:15.122555298Z" level=info msg="CreateContainer within sandbox \"97d6ee38a2941aed6f9d8dfcd2ce6b009aba738478bdeb960cef2fc76990f482\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ac72f02992c1cbd8eb8e46dc11e877f88e6cde6d306ec005ee342b204461db0f\"" Sep 11 00:29:15.124194 containerd[1591]: time="2025-09-11T00:29:15.124056698Z" level=info msg="StartContainer for \"ac72f02992c1cbd8eb8e46dc11e877f88e6cde6d306ec005ee342b204461db0f\"" Sep 11 00:29:15.125374 containerd[1591]: time="2025-09-11T00:29:15.125337682Z" level=info msg="connecting to shim ac72f02992c1cbd8eb8e46dc11e877f88e6cde6d306ec005ee342b204461db0f" address="unix:///run/containerd/s/9a745f5db9b6f0103002481963549cd5450ef330d30c62b26eac230926df890c" protocol=ttrpc version=3 Sep 11 00:29:15.148876 systemd[1]: Started cri-containerd-ac72f02992c1cbd8eb8e46dc11e877f88e6cde6d306ec005ee342b204461db0f.scope - libcontainer container ac72f02992c1cbd8eb8e46dc11e877f88e6cde6d306ec005ee342b204461db0f. Sep 11 00:29:15.242009 containerd[1591]: time="2025-09-11T00:29:15.241922102Z" level=info msg="StartContainer for \"ac72f02992c1cbd8eb8e46dc11e877f88e6cde6d306ec005ee342b204461db0f\" returns successfully" Sep 11 00:29:15.278481 systemd-networkd[1487]: vxlan.calico: Link UP Sep 11 00:29:15.279172 systemd-networkd[1487]: vxlan.calico: Gained carrier Sep 11 00:29:16.997896 systemd-networkd[1487]: vxlan.calico: Gained IPv6LL Sep 11 00:29:17.214528 kubelet[2711]: I0911 00:29:17.214472 2711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:29:18.234819 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:42246.service - OpenSSH per-connection server daemon (10.0.0.1:42246). 
Sep 11 00:29:18.306893 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 42246 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:29:18.309911 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:18.315972 systemd-logind[1565]: New session 9 of user core. Sep 11 00:29:18.326924 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:29:18.478926 sshd[5240]: Connection closed by 10.0.0.1 port 42246 Sep 11 00:29:18.479594 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:18.483943 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:29:18.484294 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:42246.service: Deactivated successfully. Sep 11 00:29:18.487447 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:29:18.492847 systemd-logind[1565]: Removed session 9. Sep 11 00:29:18.875820 containerd[1591]: time="2025-09-11T00:29:18.875628534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:18.876533 containerd[1591]: time="2025-09-11T00:29:18.876470775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 11 00:29:18.877918 containerd[1591]: time="2025-09-11T00:29:18.877866014Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:18.880268 containerd[1591]: time="2025-09-11T00:29:18.880215955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:18.880894 containerd[1591]: 
time="2025-09-11T00:29:18.880860123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.790917318s" Sep 11 00:29:18.880894 containerd[1591]: time="2025-09-11T00:29:18.880893346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 11 00:29:18.882358 containerd[1591]: time="2025-09-11T00:29:18.882261082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 00:29:18.892637 containerd[1591]: time="2025-09-11T00:29:18.892585280Z" level=info msg="CreateContainer within sandbox \"a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 11 00:29:18.902149 containerd[1591]: time="2025-09-11T00:29:18.902112931Z" level=info msg="Container 6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:18.916364 containerd[1591]: time="2025-09-11T00:29:18.916307593Z" level=info msg="CreateContainer within sandbox \"a09180ab338185c69ba87562ec0ad0a0797a18603d8dd7324839656675b3513e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4\"" Sep 11 00:29:18.917629 containerd[1591]: time="2025-09-11T00:29:18.916739334Z" level=info msg="StartContainer for \"6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4\"" Sep 11 00:29:18.917745 containerd[1591]: time="2025-09-11T00:29:18.917707992Z" level=info msg="connecting to shim 
6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4" address="unix:///run/containerd/s/273aed1f6f7e88918ccf54603c4fea1decd35550235028e3d3398568bd018af0" protocol=ttrpc version=3 Sep 11 00:29:18.970067 systemd[1]: Started cri-containerd-6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4.scope - libcontainer container 6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4. Sep 11 00:29:19.124116 containerd[1591]: time="2025-09-11T00:29:19.124055484Z" level=info msg="StartContainer for \"6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4\" returns successfully" Sep 11 00:29:19.265443 containerd[1591]: time="2025-09-11T00:29:19.265395424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4\" id:\"2b6a12e5159ec537d85638a7f8f092c7f3be1ad45f986301d0802117fdbf6352\" pid:5314 exit_status:1 exited_at:{seconds:1757550559 nanos:264305709}" Sep 11 00:29:19.355629 kubelet[2711]: I0911 00:29:19.355550 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-686957df88-qdb9n" podStartSLOduration=24.9047736 podStartE2EDuration="31.355529783s" podCreationTimestamp="2025-09-11 00:28:48 +0000 UTC" firstStartedPulling="2025-09-11 00:29:12.43136192 +0000 UTC m=+41.617712815" lastFinishedPulling="2025-09-11 00:29:18.882118104 +0000 UTC m=+48.068468998" observedRunningTime="2025-09-11 00:29:19.347719655 +0000 UTC m=+48.534070549" watchObservedRunningTime="2025-09-11 00:29:19.355529783 +0000 UTC m=+48.541880677" Sep 11 00:29:19.356157 kubelet[2711]: I0911 00:29:19.355894 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d877dc4f-8fmjk" podStartSLOduration=29.552000659 podStartE2EDuration="34.355885961s" podCreationTimestamp="2025-09-11 00:28:45 +0000 UTC" firstStartedPulling="2025-09-11 00:29:10.285202119 +0000 UTC m=+39.471553013" 
lastFinishedPulling="2025-09-11 00:29:15.089087421 +0000 UTC m=+44.275438315" observedRunningTime="2025-09-11 00:29:16.301083004 +0000 UTC m=+45.487433898" watchObservedRunningTime="2025-09-11 00:29:19.355885961 +0000 UTC m=+48.542236855" Sep 11 00:29:20.267895 containerd[1591]: time="2025-09-11T00:29:20.267695751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4\" id:\"c10b40c235078979f32f2150b2b3a4dc71b0374c251c31d9c8f450eaa60b0acd\" pid:5337 exited_at:{seconds:1757550560 nanos:267403122}" Sep 11 00:29:20.839741 containerd[1591]: time="2025-09-11T00:29:20.839658790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:20.841032 containerd[1591]: time="2025-09-11T00:29:20.840787367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 11 00:29:20.842983 containerd[1591]: time="2025-09-11T00:29:20.842941230Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:20.857605 containerd[1591]: time="2025-09-11T00:29:20.857534867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:20.857975 containerd[1591]: time="2025-09-11T00:29:20.857939366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.975648117s" Sep 11 00:29:20.858054 containerd[1591]: time="2025-09-11T00:29:20.857980654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 11 00:29:20.859433 containerd[1591]: time="2025-09-11T00:29:20.859060800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 00:29:20.861036 containerd[1591]: time="2025-09-11T00:29:20.861010199Z" level=info msg="CreateContainer within sandbox \"3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 11 00:29:20.875694 containerd[1591]: time="2025-09-11T00:29:20.873330820Z" level=info msg="Container b6a68a27540f0e959d71b976ffd4d844e4c1827e03ab849cfd2633027ab46e64: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:29:20.886147 containerd[1591]: time="2025-09-11T00:29:20.886086789Z" level=info msg="CreateContainer within sandbox \"3642d87862b54aef783306115ab082585357e370f578e761aed11958666435c0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b6a68a27540f0e959d71b976ffd4d844e4c1827e03ab849cfd2633027ab46e64\"" Sep 11 00:29:20.886938 containerd[1591]: time="2025-09-11T00:29:20.886878474Z" level=info msg="StartContainer for \"b6a68a27540f0e959d71b976ffd4d844e4c1827e03ab849cfd2633027ab46e64\"" Sep 11 00:29:20.888775 containerd[1591]: time="2025-09-11T00:29:20.888729768Z" level=info msg="connecting to shim b6a68a27540f0e959d71b976ffd4d844e4c1827e03ab849cfd2633027ab46e64" address="unix:///run/containerd/s/37c08e49c7b34b2e2d975b999fe85e76dd81a29b5e3f38557cadb163eabfaed6" protocol=ttrpc version=3 Sep 11 00:29:20.926050 systemd[1]: Started 
cri-containerd-b6a68a27540f0e959d71b976ffd4d844e4c1827e03ab849cfd2633027ab46e64.scope - libcontainer container b6a68a27540f0e959d71b976ffd4d844e4c1827e03ab849cfd2633027ab46e64. Sep 11 00:29:20.985022 containerd[1591]: time="2025-09-11T00:29:20.984970357Z" level=info msg="StartContainer for \"b6a68a27540f0e959d71b976ffd4d844e4c1827e03ab849cfd2633027ab46e64\" returns successfully" Sep 11 00:29:21.239509 kubelet[2711]: I0911 00:29:21.239426 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6jqts" podStartSLOduration=21.510704384 podStartE2EDuration="33.239409293s" podCreationTimestamp="2025-09-11 00:28:48 +0000 UTC" firstStartedPulling="2025-09-11 00:29:09.130246947 +0000 UTC m=+38.316597841" lastFinishedPulling="2025-09-11 00:29:20.858951856 +0000 UTC m=+50.045302750" observedRunningTime="2025-09-11 00:29:21.238393146 +0000 UTC m=+50.424744040" watchObservedRunningTime="2025-09-11 00:29:21.239409293 +0000 UTC m=+50.425760187" Sep 11 00:29:21.971780 kubelet[2711]: I0911 00:29:21.971721 2711 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 11 00:29:21.971780 kubelet[2711]: I0911 00:29:21.971795 2711 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 11 00:29:23.494231 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:57860.service - OpenSSH per-connection server daemon (10.0.0.1:57860). Sep 11 00:29:23.550604 sshd[5400]: Accepted publickey for core from 10.0.0.1 port 57860 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick Sep 11 00:29:23.554554 sshd-session[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:29:23.561836 systemd-logind[1565]: New session 10 of user core. 
Sep 11 00:29:23.571798 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 11 00:29:23.770379 sshd[5402]: Connection closed by 10.0.0.1 port 57860 Sep 11 00:29:23.770716 sshd-session[5400]: pam_unix(sshd:session): session closed for user core Sep 11 00:29:23.776698 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:57860.service: Deactivated successfully. Sep 11 00:29:23.780136 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 00:29:23.781583 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit. Sep 11 00:29:23.782938 systemd-logind[1565]: Removed session 10. Sep 11 00:29:23.842334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741420291.mount: Deactivated successfully. Sep 11 00:29:24.317897 containerd[1591]: time="2025-09-11T00:29:24.317848691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:24.318761 containerd[1591]: time="2025-09-11T00:29:24.318700229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 11 00:29:24.320062 containerd[1591]: time="2025-09-11T00:29:24.320012261Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:24.322110 containerd[1591]: time="2025-09-11T00:29:24.322066716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:29:24.323107 containerd[1591]: time="2025-09-11T00:29:24.323035143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo 
digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.463946019s"
Sep 11 00:29:24.323107 containerd[1591]: time="2025-09-11T00:29:24.323076300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 11 00:29:24.325096 containerd[1591]: time="2025-09-11T00:29:24.325060382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 11 00:29:24.325845 containerd[1591]: time="2025-09-11T00:29:24.325818825Z" level=info msg="CreateContainer within sandbox \"6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 11 00:29:24.340690 containerd[1591]: time="2025-09-11T00:29:24.338692500Z" level=info msg="Container 86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:24.348691 containerd[1591]: time="2025-09-11T00:29:24.348623633Z" level=info msg="CreateContainer within sandbox \"6b879e162b91ac0707e2798c5b857e05da21cb78f5ce7373aa6ed45c184171aa\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7\""
Sep 11 00:29:24.349647 containerd[1591]: time="2025-09-11T00:29:24.349418975Z" level=info msg="StartContainer for \"86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7\""
Sep 11 00:29:24.350740 containerd[1591]: time="2025-09-11T00:29:24.350710889Z" level=info msg="connecting to shim 86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7" address="unix:///run/containerd/s/85f10d15424facc70d10a164672bcbe0a5706048933080ce20e56a99af09dc58" protocol=ttrpc version=3
Sep 11 00:29:24.378919 systemd[1]: Started cri-containerd-86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7.scope - libcontainer container 86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7.
Sep 11 00:29:24.470116 containerd[1591]: time="2025-09-11T00:29:24.469987895Z" level=info msg="StartContainer for \"86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7\" returns successfully"
Sep 11 00:29:24.939881 containerd[1591]: time="2025-09-11T00:29:24.939808554Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:29:24.940958 containerd[1591]: time="2025-09-11T00:29:24.940892868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 11 00:29:24.942638 containerd[1591]: time="2025-09-11T00:29:24.942606122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 617.410246ms"
Sep 11 00:29:24.942725 containerd[1591]: time="2025-09-11T00:29:24.942640236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 11 00:29:24.944772 containerd[1591]: time="2025-09-11T00:29:24.944732633Z" level=info msg="CreateContainer within sandbox \"c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 11 00:29:24.953809 containerd[1591]: time="2025-09-11T00:29:24.953755251Z" level=info msg="Container c34cfde57c3bf4f2c28f2c98b4954c6be7a8dd8921eadb92c17be8ad0eb135a7: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:29:24.961935 containerd[1591]: time="2025-09-11T00:29:24.961881369Z" level=info msg="CreateContainer within sandbox \"c346efcd337a24e1b480f64b142fd7c29528a7ea941b4e598947065e1b7e717d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c34cfde57c3bf4f2c28f2c98b4954c6be7a8dd8921eadb92c17be8ad0eb135a7\""
Sep 11 00:29:24.962752 containerd[1591]: time="2025-09-11T00:29:24.962369874Z" level=info msg="StartContainer for \"c34cfde57c3bf4f2c28f2c98b4954c6be7a8dd8921eadb92c17be8ad0eb135a7\""
Sep 11 00:29:24.963456 containerd[1591]: time="2025-09-11T00:29:24.963424434Z" level=info msg="connecting to shim c34cfde57c3bf4f2c28f2c98b4954c6be7a8dd8921eadb92c17be8ad0eb135a7" address="unix:///run/containerd/s/f630c43e7c32292ac8f7423fa3834c748222c2464867c7a2dce17eedfa2a537d" protocol=ttrpc version=3
Sep 11 00:29:24.992905 systemd[1]: Started cri-containerd-c34cfde57c3bf4f2c28f2c98b4954c6be7a8dd8921eadb92c17be8ad0eb135a7.scope - libcontainer container c34cfde57c3bf4f2c28f2c98b4954c6be7a8dd8921eadb92c17be8ad0eb135a7.
Sep 11 00:29:25.046264 containerd[1591]: time="2025-09-11T00:29:25.046218704Z" level=info msg="StartContainer for \"c34cfde57c3bf4f2c28f2c98b4954c6be7a8dd8921eadb92c17be8ad0eb135a7\" returns successfully"
Sep 11 00:29:25.270777 kubelet[2711]: I0911 00:29:25.270619 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-gpd4x" podStartSLOduration=27.253490683 podStartE2EDuration="38.270600186s" podCreationTimestamp="2025-09-11 00:28:47 +0000 UTC" firstStartedPulling="2025-09-11 00:29:13.307084323 +0000 UTC m=+42.493435217" lastFinishedPulling="2025-09-11 00:29:24.324193826 +0000 UTC m=+53.510544720" observedRunningTime="2025-09-11 00:29:25.253393553 +0000 UTC m=+54.439744447" watchObservedRunningTime="2025-09-11 00:29:25.270600186 +0000 UTC m=+54.456951080"
Sep 11 00:29:25.270777 kubelet[2711]: I0911 00:29:25.270748 2711 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d877dc4f-2qzk9" podStartSLOduration=28.721371628 podStartE2EDuration="40.270744997s" podCreationTimestamp="2025-09-11 00:28:45 +0000 UTC" firstStartedPulling="2025-09-11 00:29:13.393932076 +0000 UTC m=+42.580282970" lastFinishedPulling="2025-09-11 00:29:24.943305445 +0000 UTC m=+54.129656339" observedRunningTime="2025-09-11 00:29:25.27026646 +0000 UTC m=+54.456617364" watchObservedRunningTime="2025-09-11 00:29:25.270744997 +0000 UTC m=+54.457095891"
Sep 11 00:29:25.339807 containerd[1591]: time="2025-09-11T00:29:25.339697294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7\" id:\"7b32cad42ab845ca6a6b87f5409463f56f71b1d863c19265db432686ecfaba72\" pid:5506 exit_status:1 exited_at:{seconds:1757550565 nanos:339207586}"
Sep 11 00:29:26.331398 containerd[1591]: time="2025-09-11T00:29:26.331328345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7\" id:\"09206ffd2a3fb8fa75991159f08c6ce3695aacca7c7fecfe067250281b358883\" pid:5543 exit_status:1 exited_at:{seconds:1757550566 nanos:330885365}"
Sep 11 00:29:28.789380 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:57866.service - OpenSSH per-connection server daemon (10.0.0.1:57866).
Sep 11 00:29:28.865646 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 57866 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:28.867601 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:28.874065 systemd-logind[1565]: New session 11 of user core.
Sep 11 00:29:28.879903 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 00:29:29.012388 sshd[5559]: Connection closed by 10.0.0.1 port 57866
Sep 11 00:29:29.012788 sshd-session[5557]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:29.024690 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:57866.service: Deactivated successfully.
Sep 11 00:29:29.027195 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 00:29:29.028136 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit.
Sep 11 00:29:29.032711 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:57872.service - OpenSSH per-connection server daemon (10.0.0.1:57872).
Sep 11 00:29:29.033782 systemd-logind[1565]: Removed session 11.
Sep 11 00:29:29.086412 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 57872 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:29.088185 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:29.093147 systemd-logind[1565]: New session 12 of user core.
Sep 11 00:29:29.106804 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 00:29:29.270857 sshd[5575]: Connection closed by 10.0.0.1 port 57872
Sep 11 00:29:29.272003 sshd-session[5573]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:29.283399 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:57872.service: Deactivated successfully.
Sep 11 00:29:29.287591 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 00:29:29.292006 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit.
Sep 11 00:29:29.295583 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:57888.service - OpenSSH per-connection server daemon (10.0.0.1:57888).
Sep 11 00:29:29.297963 systemd-logind[1565]: Removed session 12.
Sep 11 00:29:29.347156 sshd[5586]: Accepted publickey for core from 10.0.0.1 port 57888 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:29.349013 sshd-session[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:29.354044 systemd-logind[1565]: New session 13 of user core.
Sep 11 00:29:29.364833 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 00:29:29.497407 sshd[5588]: Connection closed by 10.0.0.1 port 57888
Sep 11 00:29:29.497767 sshd-session[5586]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:29.502985 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:57888.service: Deactivated successfully.
Sep 11 00:29:29.505182 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 00:29:29.506169 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit.
Sep 11 00:29:29.507619 systemd-logind[1565]: Removed session 13.
Sep 11 00:29:34.513986 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:50800.service - OpenSSH per-connection server daemon (10.0.0.1:50800).
Sep 11 00:29:34.570042 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 50800 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:34.571391 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:34.575967 systemd-logind[1565]: New session 14 of user core.
Sep 11 00:29:34.585820 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 00:29:34.703420 sshd[5612]: Connection closed by 10.0.0.1 port 50800
Sep 11 00:29:34.703739 sshd-session[5610]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:34.707853 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:50800.service: Deactivated successfully.
Sep 11 00:29:34.710109 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 00:29:34.711008 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit.
Sep 11 00:29:34.712304 systemd-logind[1565]: Removed session 14.
Sep 11 00:29:37.253438 containerd[1591]: time="2025-09-11T00:29:37.253368983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0\" id:\"3fce74f3b8a39091d55a75e8655a2f9928caf0a27f5bdd8cd0d41035cfb12e5e\" pid:5648 exited_at:{seconds:1757550577 nanos:252895469}"
Sep 11 00:29:39.717861 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:50812.service - OpenSSH per-connection server daemon (10.0.0.1:50812).
Sep 11 00:29:39.796414 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 50812 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:39.798326 sshd-session[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:39.803038 systemd-logind[1565]: New session 15 of user core.
Sep 11 00:29:39.810837 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 00:29:39.933912 sshd[5665]: Connection closed by 10.0.0.1 port 50812
Sep 11 00:29:39.934244 sshd-session[5663]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:39.938757 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:50812.service: Deactivated successfully.
Sep 11 00:29:39.941113 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 00:29:39.942012 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit.
Sep 11 00:29:39.943747 systemd-logind[1565]: Removed session 15.
Sep 11 00:29:44.947512 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:52962.service - OpenSSH per-connection server daemon (10.0.0.1:52962).
Sep 11 00:29:45.014625 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 52962 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:45.016317 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:45.021104 systemd-logind[1565]: New session 16 of user core.
Sep 11 00:29:45.028838 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 00:29:45.141192 sshd[5681]: Connection closed by 10.0.0.1 port 52962
Sep 11 00:29:45.141501 sshd-session[5679]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:45.146191 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:52962.service: Deactivated successfully.
Sep 11 00:29:45.148263 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 00:29:45.149178 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit.
Sep 11 00:29:45.150824 systemd-logind[1565]: Removed session 16.
Sep 11 00:29:50.154247 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:33824.service - OpenSSH per-connection server daemon (10.0.0.1:33824).
Sep 11 00:29:50.229394 sshd[5695]: Accepted publickey for core from 10.0.0.1 port 33824 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:50.232554 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:50.238096 systemd-logind[1565]: New session 17 of user core.
Sep 11 00:29:50.245895 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 00:29:50.271253 containerd[1591]: time="2025-09-11T00:29:50.271205310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ff88d932ce4d573ec4b1d1ce820fa473ad0c057ac3c0cb03af95e38682218b4\" id:\"e894d94ba257a429d4d5bb5a461cbefea66a669b1fbca864e81e6508804ea07d\" pid:5709 exited_at:{seconds:1757550590 nanos:270848086}"
Sep 11 00:29:50.376469 sshd[5715]: Connection closed by 10.0.0.1 port 33824
Sep 11 00:29:50.378130 sshd-session[5695]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:50.387628 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:33824.service: Deactivated successfully.
Sep 11 00:29:50.390724 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 00:29:50.391764 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit.
Sep 11 00:29:50.396916 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:33830.service - OpenSSH per-connection server daemon (10.0.0.1:33830).
Sep 11 00:29:50.397626 systemd-logind[1565]: Removed session 17.
Sep 11 00:29:50.446035 sshd[5733]: Accepted publickey for core from 10.0.0.1 port 33830 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:50.447694 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:50.453223 systemd-logind[1565]: New session 18 of user core.
Sep 11 00:29:50.462889 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 00:29:50.750540 sshd[5735]: Connection closed by 10.0.0.1 port 33830
Sep 11 00:29:50.751035 sshd-session[5733]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:50.761585 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:33830.service: Deactivated successfully.
Sep 11 00:29:50.763822 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 00:29:50.764573 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit.
Sep 11 00:29:50.768006 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:33834.service - OpenSSH per-connection server daemon (10.0.0.1:33834).
Sep 11 00:29:50.769265 systemd-logind[1565]: Removed session 18.
Sep 11 00:29:50.826634 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 33834 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:50.828063 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:50.832673 systemd-logind[1565]: New session 19 of user core.
Sep 11 00:29:50.844800 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:29:51.351550 sshd[5749]: Connection closed by 10.0.0.1 port 33834
Sep 11 00:29:51.351873 sshd-session[5747]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:51.365782 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:33834.service: Deactivated successfully.
Sep 11 00:29:51.371351 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:29:51.372822 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:29:51.376256 systemd-logind[1565]: Removed session 19.
Sep 11 00:29:51.379931 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:33850.service - OpenSSH per-connection server daemon (10.0.0.1:33850).
Sep 11 00:29:51.428132 sshd[5769]: Accepted publickey for core from 10.0.0.1 port 33850 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:51.429519 sshd-session[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:51.433929 systemd-logind[1565]: New session 20 of user core.
Sep 11 00:29:51.445797 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 00:29:51.661581 sshd[5771]: Connection closed by 10.0.0.1 port 33850
Sep 11 00:29:51.661940 sshd-session[5769]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:51.672142 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:33850.service: Deactivated successfully.
Sep 11 00:29:51.675535 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 00:29:51.676969 systemd-logind[1565]: Session 20 logged out. Waiting for processes to exit.
Sep 11 00:29:51.681735 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:33864.service - OpenSSH per-connection server daemon (10.0.0.1:33864).
Sep 11 00:29:51.682454 systemd-logind[1565]: Removed session 20.
Sep 11 00:29:51.732542 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 33864 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:51.734401 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:51.742034 systemd-logind[1565]: New session 21 of user core.
Sep 11 00:29:51.755931 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 00:29:51.954025 sshd[5784]: Connection closed by 10.0.0.1 port 33864
Sep 11 00:29:51.954352 sshd-session[5782]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:51.960497 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:33864.service: Deactivated successfully.
Sep 11 00:29:51.963346 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 00:29:51.964950 systemd-logind[1565]: Session 21 logged out. Waiting for processes to exit.
Sep 11 00:29:51.968083 systemd-logind[1565]: Removed session 21.
Sep 11 00:29:54.762341 kubelet[2711]: I0911 00:29:54.762277 2711 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:29:56.339961 containerd[1591]: time="2025-09-11T00:29:56.339905618Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86f6caef52aeaee5ce9bd3541a86368f8723f84ef9b04c0ad3044cdca01876c7\" id:\"39bfe72cb6b86678f92b52d5e78a504e0d33bc4e9472bf7e89d1cc19fd2ea45c\" pid:5819 exited_at:{seconds:1757550596 nanos:339512979}"
Sep 11 00:29:56.972193 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:33872.service - OpenSSH per-connection server daemon (10.0.0.1:33872).
Sep 11 00:29:57.041512 sshd[5836]: Accepted publickey for core from 10.0.0.1 port 33872 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:29:57.043628 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:29:57.048950 systemd-logind[1565]: New session 22 of user core.
Sep 11 00:29:57.063810 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 00:29:57.185116 sshd[5838]: Connection closed by 10.0.0.1 port 33872
Sep 11 00:29:57.185452 sshd-session[5836]: pam_unix(sshd:session): session closed for user core
Sep 11 00:29:57.190132 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:33872.service: Deactivated successfully.
Sep 11 00:29:57.192476 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 00:29:57.193260 systemd-logind[1565]: Session 22 logged out. Waiting for processes to exit.
Sep 11 00:29:57.194636 systemd-logind[1565]: Removed session 22.
Sep 11 00:30:02.197803 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:54088.service - OpenSSH per-connection server daemon (10.0.0.1:54088).
Sep 11 00:30:02.256431 sshd[5852]: Accepted publickey for core from 10.0.0.1 port 54088 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:30:02.258407 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:02.263799 systemd-logind[1565]: New session 23 of user core.
Sep 11 00:30:02.271874 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 00:30:02.390809 sshd[5854]: Connection closed by 10.0.0.1 port 54088
Sep 11 00:30:02.391155 sshd-session[5852]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:02.395769 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:54088.service: Deactivated successfully.
Sep 11 00:30:02.397951 systemd[1]: session-23.scope: Deactivated successfully.
Sep 11 00:30:02.398832 systemd-logind[1565]: Session 23 logged out. Waiting for processes to exit.
Sep 11 00:30:02.399986 systemd-logind[1565]: Removed session 23.
Sep 11 00:30:07.245138 containerd[1591]: time="2025-09-11T00:30:07.245083241Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb750c4224ed3d9da236ba59afc10aa6a0ded879735ed2d07bb066944fefccf0\" id:\"0b5f8a53c4bf1cb6f528ab9225490628fdf33176bb5ee3e9e1c68cd31611b140\" pid:5878 exited_at:{seconds:1757550607 nanos:244635751}"
Sep 11 00:30:07.409284 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:54098.service - OpenSSH per-connection server daemon (10.0.0.1:54098).
Sep 11 00:30:07.477639 sshd[5891]: Accepted publickey for core from 10.0.0.1 port 54098 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:30:07.479967 sshd-session[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:07.488620 systemd-logind[1565]: New session 24 of user core.
Sep 11 00:30:07.499959 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 11 00:30:07.665166 sshd[5893]: Connection closed by 10.0.0.1 port 54098
Sep 11 00:30:07.665544 sshd-session[5891]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:07.670942 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:54098.service: Deactivated successfully.
Sep 11 00:30:07.673418 systemd[1]: session-24.scope: Deactivated successfully.
Sep 11 00:30:07.675197 systemd-logind[1565]: Session 24 logged out. Waiting for processes to exit.
Sep 11 00:30:07.676480 systemd-logind[1565]: Removed session 24.
Sep 11 00:30:12.680512 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:33570.service - OpenSSH per-connection server daemon (10.0.0.1:33570).
Sep 11 00:30:12.739844 sshd[5908]: Accepted publickey for core from 10.0.0.1 port 33570 ssh2: RSA SHA256:2FKl6F/CXYpU0+lRtBl6FqtyyB7NBzEoeS8HPkzCick
Sep 11 00:30:12.741777 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:30:12.747726 systemd-logind[1565]: New session 25 of user core.
Sep 11 00:30:12.752828 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 11 00:30:12.888516 sshd[5910]: Connection closed by 10.0.0.1 port 33570
Sep 11 00:30:12.888915 sshd-session[5908]: pam_unix(sshd:session): session closed for user core
Sep 11 00:30:12.893321 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:33570.service: Deactivated successfully.
Sep 11 00:30:12.895806 systemd[1]: session-25.scope: Deactivated successfully.
Sep 11 00:30:12.896802 systemd-logind[1565]: Session 25 logged out. Waiting for processes to exit.
Sep 11 00:30:12.898399 systemd-logind[1565]: Removed session 25.